
    The Power of Linear Recurrent Neural Networks

    Recurrent neural networks are a powerful means to cope with time series. We show how a type of linearly activated recurrent neural networks, which we call predictive neural networks, can approximate any time-dependent function f(t) given by a number of function values. The approximation can effectively be learned by simply solving a linear equation system; no backpropagation or similar methods are needed. Furthermore, the network size can be reduced by keeping only the most relevant components. Thus, in contrast to other approaches, ours learns not only the network weights but also the network architecture. The networks have interesting properties: they settle into ellipse trajectories in the long run and allow both the prediction of further values and compact representations of functions. We demonstrate this in several experiments, among them multiple superimposed oscillators (MSO), robotic soccer, and stock-price prediction. Predictive neural networks outperform the previous state of the art on the MSO task with a minimal number of units. Comment: 22 pages, 14 figures and tables, revised implementation.
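    The learning-by-linear-equations idea described in this abstract can be illustrated with a short NumPy sketch: a linear map on delay-embedded states is obtained by ordinary least squares and then iterated autonomously to predict further values. This is only a rough illustration, not the authors' exact construction; the network-size reduction step is omitted, and fit_linear_recurrence, the embedding dimension, and the test signal are illustrative choices.

```python
import numpy as np

def fit_linear_recurrence(series, dim=16):
    """Fit a linear map W that advances delay-embedded states by one step.

    Sketch of learning a linearly activated recurrent network by solving a
    linear equation system (no backpropagation); the paper's exact state
    construction and the component-reduction step are not reproduced.
    """
    # Delay-embedded states x_t = (f(t), f(t+1), ..., f(t+dim-1)).
    states = np.array([series[t:t + dim] for t in range(len(series) - dim + 1)])
    X, Y = states[:-1], states[1:]
    # Least-squares solution of the linear system X W^T ~ Y.
    W = np.linalg.lstsq(X, Y, rcond=None)[0].T
    return W, states[-1]

def predict(W, state, n_steps):
    """Run the learned linear recurrence autonomously to extrapolate."""
    preds = []
    for _ in range(n_steps):
        state = W @ state
        preds.append(state[-1])  # newest component is the next predicted value
    return np.array(preds)

# Example: extrapolate a superimposed-oscillator-style signal.
t = np.arange(300)
f = np.sin(0.2 * t) + 0.5 * np.sin(0.311 * t)
W, last_state = fit_linear_recurrence(f)
future = predict(W, last_state, n_steps=50)
```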

    Product Reservoir Computing: Time-Series Computation with Multiplicative Neurons

    Echo state networks (ESNs), a type of reservoir computing (RC) architecture, are efficient and accurate artificial neural systems for time series processing and learning. An ESN consists of a core of recurrent neural networks, called a reservoir, with a small number of tunable parameters to generate a high-dimensional representation of an input, and a readout layer that is easily trained using regression to produce a desired output from the reservoir states. Certain computational tasks involve real-time calculation of high-order time correlations, which requires a nonlinear transformation either in the reservoir or in the readout layer. A traditional ESN employs a reservoir with sigmoid or tanh neurons. In contrast, some types of biological neurons have response curves that are better described by a product unit than by a sum and threshold. Inspired by this class of neurons, we introduce an RC architecture with a reservoir of product nodes for time series computation. We find that the product RC shows many properties of standard ESNs, such as short-term memory and nonlinear capacity. On standard benchmarks for chaotic prediction tasks, the product RC maintains the performance of a standard nonlinear ESN while being more amenable to mathematical analysis. Our study provides evidence that such networks are powerful in highly nonlinear tasks owing to the high-order statistics generated by the recurrent product-node reservoir.
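    The reservoir-plus-readout structure described here can be made concrete with a minimal NumPy sketch of a conventional tanh ESN, in which only the linear readout is trained (by ridge regression). The product RC of the paper would replace the sum-and-threshold node update with a multiplicative one; that node model is not reproduced here, and all sizes and scalings below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=200, spectral_radius=0.9, input_scale=0.5):
    """Random recurrent weights, rescaled so that input echoes fade away."""
    W = rng.normal(size=(n, n)) / np.sqrt(n)
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = input_scale * rng.normal(size=n)
    return W, W_in

def run_reservoir(u, W, W_in):
    """Drive the reservoir with a scalar sequence u and collect its states."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        # Standard sum-and-threshold (tanh) node; the product RC would swap
        # this update for a multiplicative (product-unit) one.
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """The only trained part of an ESN: a linear readout fit by ridge regression."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# One-step-ahead prediction on a toy signal.
u = np.sin(0.3 * np.arange(1000))
W, W_in = make_reservoir()
S = run_reservoir(u[:-1], W, W_in)
w_out = train_readout(S, u[1:])
prediction = S @ w_out
```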

    Using a novel source-localized phase regressor technique for evaluation of the vascular contribution to semantic category area localization in BOLD fMRI.

    Numerous studies have shown that gradient-echo blood oxygen level dependent (BOLD) fMRI is biased toward large draining veins. However, the impact of this large-vein bias on the localization and characterization of semantic category areas has not been examined. Here we address this issue by comparing standard magnitude measures of BOLD activity in the Fusiform Face Area (FFA) and Parahippocampal Place Area (PPA) to those obtained using a novel method that suppresses the contribution of large draining veins: the source-localized phase regressor (sPR). Unlike previous suppression methods that utilize the phase component of the BOLD signal, sPR yields robust and unbiased suppression of large draining veins even in voxels with no task-related phase changes. This is confirmed in ideal simulated data as well as in FFA/PPA localization data from four subjects. It was found that approximately 38% of right PPA, 14% of left PPA, 16% of right FFA, and 6% of left FFA voxels predominantly reflect signal from large draining veins. Surprisingly, with the contributions from large veins suppressed, semantic category representation in PPA actually tends to be lateralized to the left rather than the right hemisphere. Furthermore, semantic category areas larger in volume and higher in functional signal-to-noise ratio (fSNR) were found to have more contributions from large veins. These results suggest that previous studies using gradient-echo BOLD fMRI were biased toward semantic category areas that receive relatively greater contributions from large veins.
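    For readers unfamiliar with phase-based vein suppression, the following NumPy sketch shows the basic idea of a conventional phase regressor for a single voxel: the magnitude time series is regressed on the phase time series and the fitted (vein-related) component is removed. This is only a conceptual illustration of the general technique, not the source-localized sPR method of the study; the toy signals and the ordinary-least-squares fit are assumptions made for the example.

```python
import numpy as np

def phase_regressor(magnitude, phase):
    """Conventional phase regression for one voxel (not the sPR method itself).

    Removes the part of the magnitude time series that co-varies with the
    phase time series, which is taken to reflect large draining veins.
    """
    design = np.column_stack([np.ones_like(phase), phase])
    beta, *_ = np.linalg.lstsq(design, magnitude, rcond=None)
    return magnitude - design @ beta

# Toy voxel: a task response plus a vein-like component that also appears in phase.
rng = np.random.default_rng(0)
t = np.arange(200)
task = (np.sin(2 * np.pi * t / 40) > 0).astype(float)
vein = 2.0 * task + 0.3 * rng.normal(size=t.size)
magnitude = 1.0 * task + vein
phase = 0.5 * vein
suppressed = phase_regressor(magnitude, phase)
```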

    Widely Linear Kernels for Complex-Valued Kernel Activation Functions

    Complex-valued neural networks (CVNNs) have been shown to be powerful nonlinear approximators when the input data can be properly modeled in the complex domain. One of the major challenges in scaling up CVNNs in practice is the design of complex activation functions. Recently, we proposed a novel framework for learning these activation functions neuron-wise in a data-dependent fashion, based on a cheap one-dimensional kernel expansion and the idea of kernel activation functions (KAFs). In this paper we argue that, despite its flexibility, this framework is still limited in the class of functions it can model in the complex domain. We leverage the idea of widely linear complex kernels to extend the formulation, allowing for richer expressiveness without an increase in the number of adaptable parameters. We test the resulting model on a set of complex-valued image classification benchmarks. Experimental results show that the resulting CVNNs can achieve higher accuracy while at the same time converging faster. Comment: Accepted at ICASSP 201
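    To make the kernel activation function construction concrete, here is a minimal NumPy sketch of a per-neuron complex KAF as a one-dimensional kernel expansion over a fixed dictionary in the complex plane. The grid, the Gaussian kernel, and the coefficient initialization are illustrative assumptions, and the widely linear kernel extension proposed in the paper, which also involves the conjugate of the input, is not reproduced here.

```python
import numpy as np

class ComplexKAF:
    """Minimal sketch of a complex-valued kernel activation function (KAF).

    The activation is a kernel expansion over a fixed dictionary of complex
    points with trainable complex mixing coefficients (one set per neuron).
    The widely linear kernel of the paper is not implemented here.
    """

    def __init__(self, grid_size=5, gamma=1.0):
        # Fixed dictionary: a grid_size x grid_size grid in the complex plane.
        re, im = np.meshgrid(np.linspace(-2, 2, grid_size),
                             np.linspace(-2, 2, grid_size))
        self.dictionary = (re + 1j * im).ravel()
        self.gamma = gamma
        # Trainable mixing coefficients (updated by the network optimizer).
        self.alpha = 0.01 * (np.random.randn(self.dictionary.size)
                             + 1j * np.random.randn(self.dictionary.size))

    def __call__(self, z):
        """Apply the activation element-wise to a complex array z."""
        diff = z[..., None] - self.dictionary        # broadcast over the dictionary
        k = np.exp(-self.gamma * np.abs(diff) ** 2)  # real Gaussian kernel values
        return k @ self.alpha                        # complex-valued activation

# Example: activate the complex pre-activations of a small layer.
kaf = ComplexKAF()
pre_activations = np.random.randn(4) + 1j * np.random.randn(4)
out = kaf(pre_activations)
```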