A Comparative Study of Reservoir Computing for Temporal Signal Processing
Reservoir computing (RC) is a novel approach to time series prediction using
recurrent neural networks. In RC, an input signal perturbs the intrinsic
dynamics of a medium called a reservoir. A readout layer is then trained to
reconstruct a target output from the reservoir's state. The multitude of RC
architectures and evaluation metrics poses a challenge to both practitioners
and theorists who study the task-solving performance and computational power of
RC. In addition, in contrast to traditional computation models, the reservoir
is a dynamical system in which computation and memory are inseparable, and
therefore hard to analyze. Here, we compare echo state networks (ESN), a
popular RC architecture, with tapped-delay lines (DL) and nonlinear
autoregressive exogenous (NARX) networks, which we use to model systems with
limited computation and limited memory, respectively. We compare the performance
of the three systems on three common benchmark time series: the Hénon map,
NARMA10, and NARMA20. We find that the role of the reservoir in the reservoir
computing paradigm goes beyond providing a memory of past inputs. The DL and the
NARX network have higher memorization capability but fall short of the
generalization power of the ESN.
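The ESN side of such a comparison can be sketched in a few lines of NumPy: a fixed random recurrent reservoir driven by the Hénon map, with only a linear readout trained by ridge regression. This is a minimal illustration, not the paper's exact experimental setup; the reservoir size, scalings, and regression constant are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hénon map, one of the benchmark series: x[n+1] = 1 - 1.4 x[n]^2 + 0.3 x[n-1]
def henon(T):
    x_prev, x = 0.1, 0.1
    out = np.empty(T)
    for n in range(T):
        x_prev, x = x, 1.0 - 1.4 * x * x + 0.3 * x_prev
        out[n] = x
    return out

T = 1500
s = henon(T)
u, y = s[:-1], s[1:]                  # one-step-ahead prediction task

# Fixed random reservoir: tanh neurons, spectral radius 0.9 (assumed values).
N = 100
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

X = np.zeros((len(u), N))
x = np.zeros(N)
for t in range(len(u)):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Ridge-regression readout trained on the first half, tested on the second,
# after discarding an initial washout period.
washout, split = 100, len(u) // 2
A, b = X[washout:split], y[washout:split]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)

pred = X[split:] @ W_out
nmse = np.mean((pred - y[split:]) ** 2) / np.var(y[split:])
print(f"one-step Henon test NMSE: {nmse:.4f}")
```

Only `W_out` is learned; the reservoir weights stay fixed, which is the defining property of the RC paradigm the abstract describes.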
Product Reservoir Computing: Time-Series Computation with Multiplicative Neurons
Echo state networks (ESN), a type of reservoir computing (RC) architecture,
are efficient and accurate artificial neural systems for time series processing
and learning. An ESN consists of a core recurrent neural network, called a
reservoir, with a small number of tunable parameters to generate a
high-dimensional representation of an input, and a readout layer which is
easily trained using regression to produce a desired output from the reservoir
states. Certain computational tasks involve real-time calculation of high-order
time correlations, which requires nonlinear transformation either in the
reservoir or the readout layer. A traditional ESN employs a reservoir of
sigmoid or tanh neurons. In contrast, some types of biological neurons obey
response curves that are better described by a product unit than by a
sum-and-threshold unit. Inspired by this class of neurons, we introduce an RC
architecture with a reservoir of product nodes for time series computation. We
find that the product RC shows many properties of standard ESN such as
short-term memory and nonlinear capacity. On standard benchmarks for chaotic
prediction tasks, the product RC maintains the performance of a standard
nonlinear ESN while being more amenable to mathematical analysis. Our study
provides evidence that such networks are powerful in highly nonlinear tasks
owing to high-order statistics generated by the recurrent product node
reservoir.
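The flavor of a product-node reservoir can be sketched concretely. A classic product unit (in the sense of Durbin and Rumelhart) computes a product of its inputs raised to real-valued weights, which makes the dynamics linear in the log domain. The snippet below, positive states updated multiplicatively and read out linearly against a high-order target, is an illustrative guess at such an architecture, not the paper's exact model; all sizes and scalings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A product unit computes prod_i x_i^{w_i} = exp(sum_i w_i * log(x_i)),
# so a reservoir of product nodes evolves linearly in the log domain.
N, T = 100, 1500
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the log-dynamics stable
w_in = rng.uniform(-0.5, 0.5, N)

u = rng.uniform(0.1, 1.0, T)       # positive inputs so logarithms are defined
z = np.zeros(N)                    # z = log(state)
X = np.zeros((T, N))
for t in range(T):
    z = W @ z + w_in * np.log(u[t])
    X[t] = np.exp(z)               # each node is a product of powers of past inputs

# A target requiring a high-order time correlation: y[t] = u[t-1] * u[t-2].
y = np.zeros(T)
y[2:] = u[1:-1] * u[:-2]

# Linear readout (with bias) fitted by least squares on the recorded states.
washout = 100
A = np.hstack([X[washout:], np.ones((T - washout, 1))])
w_out, *_ = np.linalg.lstsq(A, y[washout:], rcond=None)
train_nmse = np.mean((A @ w_out - y[washout:]) ** 2) / np.var(y[washout:])
print(f"training NMSE on u[t-1]*u[t-2]: {train_nmse:.4f}")
```

Because every state is itself a product of past inputs, a purely linear readout can reach product-type targets directly, which is one way to read the abstract's claim about high-order statistics.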
Reservoir Computing Approach to Robust Computation using Unreliable Nanoscale Networks
As we approach the physical limits of CMOS technology, advances in materials
science and nanotechnology are making available a variety of unconventional
computing substrates that can potentially replace top-down-designed
silicon-based computing devices. Inherent stochasticity in the fabrication
process and nanometer scale of these substrates inevitably lead to design
variations, defects, faults, and noise in the resulting devices. A key
challenge is how to harness such devices to perform robust computation. We
propose reservoir computing as a solution. In reservoir computing, computation
takes place by translating the dynamics of an excited medium, called a
reservoir, into a desired output. This approach eliminates the need for
external control and redundancy, and the programming is done using a
closed-form regression problem on the output, which also allows concurrent
programming using a single device. Using a theoretical model, we show that both
regular and irregular reservoirs are intrinsically robust to structural noise
as they perform computation.
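The "programming via closed-form regression" idea is easy to make concrete: once the state trajectory of a (possibly noisy) reservoir has been recorded, any number of tasks can be programmed concurrently by solving a single least-squares problem against the same state matrix. The sketch below uses a random tanh network with injected state noise as a stand-in for an unreliable substrate; the sizes, noise level, and tasks are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)

# A noisy random reservoir standing in for an unreliable physical substrate.
N, T, noise = 100, 1500, 0.01
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

u = rng.uniform(0.0, 0.5, T)
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    # State noise models device-level stochasticity.
    x = np.tanh(W @ x + w_in * u[t] + noise * rng.normal(size=N))
    X[t] = x

# Two tasks "programmed" concurrently on the same device, one per column:
# a delay-2 memory task and a nonlinear (squared-input) task.
Y = np.zeros((T, 2))
Y[2:, 0] = u[:-2]            # recall u[t-2]
Y[1:, 1] = u[:-1] ** 2       # square of u[t-1]

# One closed-form ridge regression yields both readouts at once.
washout = 100
A = X[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ Y[washout:])

pred = A @ W_out
nmse = np.mean((pred - Y[washout:]) ** 2, axis=0) / np.var(Y[washout:], axis=0)
print("training NMSE per task:", nmse)
```

No external control signal touches the reservoir itself; changing the task only changes a column of the regression targets, which is the sense in which programming is concurrent.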
Recurrent kernel machines: computing with infinite echo state networks
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
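The recursive-kernel construction can be illustrated concretely. For an infinite network of erf neurons with Gaussian weights, the expected inner product between two networks' states is given by Williams' arcsine kernel, and feeding the previous kernel values back in as the recurrent state's inner products turns it into a recursion over time. The snippet below is a sketch under those assumptions (erf nonlinearity, scalar inputs); the letter's own kernels and scalings may differ.

```python
import numpy as np

def arcsine_k(a, b, c):
    """Williams' arcsine kernel for an infinite erf network:
    E[erf(w.x) erf(w.x')] over w ~ N(0, I), where a = x.x, b = x'.x', c = x.x'."""
    return (2.0 / np.pi) * np.arcsin(
        2.0 * c / np.sqrt((1.0 + 2.0 * a) * (1.0 + 2.0 * b)))

def recurrent_kernel(u, v, s_in=1.0, s_rec=1.0):
    """Recursive kernel between two scalar input sequences u and v.

    Tracks k(u,u), k(v,v), k(u,v) jointly: at each step the previous kernel
    values stand in for the recurrent states' inner products, so this is the
    infinite-reservoir analogue of running two ESNs and comparing their states.
    """
    kuu = kvv = kuv = 0.0
    for ut, vt in zip(u, v):
        auu = s_in**2 * ut * ut + s_rec**2 * kuu   # augmented <x_u, x_u>
        avv = s_in**2 * vt * vt + s_rec**2 * kvv   # augmented <x_v, x_v>
        auv = s_in**2 * ut * vt + s_rec**2 * kuv   # augmented <x_u, x_v>
        kuu = arcsine_k(auu, auu, auu)
        kvv = arcsine_k(avv, avv, avv)
        kuv = arcsine_k(auu, avv, auv)
    return kuu, kvv, kuv

rng = np.random.default_rng(3)
u = rng.normal(size=50)
v = rng.normal(size=50)
kuu, kvv, kuv = recurrent_kernel(u, v)
print(kuu, kvv, kuv)
```

A Gram matrix built from such kernel values can be handed to any kernel machine (e.g. a support vector machine with a precomputed kernel), which is the recursive-SVM route the abstract mentions.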
Training Echo State Networks with Regularization through Dimensionality Reduction
In this paper we introduce a new framework to train an Echo State Network to
predict real-valued time series. The method consists of projecting the output
of the internal layer of the network onto a space of lower dimensionality
before training the output layer to learn the target task. Notably, we enforce
a regularization constraint that leads to better generalization capabilities.
We evaluate the performance of our approach on several benchmarks, using
different techniques to train the readout of the network, achieving superior
predictive performance when using the proposed framework. Finally, we provide
insight into the effectiveness of the underlying mechanism through a
visualization of the trajectory in phase space, relying on the methodology of
nonlinear time-series analysis. By applying our method to well-known chaotic
systems, we provide evidence that the lower-dimensional embedding
retains the dynamical properties of the underlying system better than the
full-dimensional internal states of the network.
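The projection step can be sketched directly: collect the reservoir states, reduce them with PCA (here via an SVD of the centered state matrix), and train the readout on the reduced coordinates. This is a minimal illustration of the framework, not the paper's exact pipeline; the reservoir size, the number of retained components, and the task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# A standard ESN driven by a scalar input signal.
N, T, d = 100, 1500, 10          # d: dimensionality after the projection
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

u = np.sin(0.2 * np.arange(T)) + 0.1 * rng.normal(size=T)
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Project the internal states onto their d leading principal components
# before training the readout; the projection acts as a regularizer.
washout = 100
S = X[washout:]
S0 = S - S.mean(axis=0)
_, _, Vt = np.linalg.svd(S0, full_matrices=False)
Z = S0 @ Vt[:d].T                # reduced states, shape (T - washout, d)

# Readout trained on the reduced states: one-step-ahead prediction of u.
y = u[washout + 1:]
A = np.hstack([Z[:-1], np.ones((len(y), 1))])
w_out, *_ = np.linalg.lstsq(A, y, rcond=None)
nmse = np.mean((A @ w_out - y) ** 2) / np.var(y)
print(f"training NMSE with {d}-dim projection: {nmse:.4f}")
```

With only d regression coefficients instead of N, the readout has far fewer degrees of freedom to overfit, which is the regularization effect the abstract attributes to the projection.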