Investigating Echo-State Networks Dynamics by Means of Recurrence Analysis
This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record. In this paper, we elaborate on the well-known interpretability issue in echo-state networks (ESNs). The idea is to investigate the dynamics of reservoir neurons with time-series analysis techniques developed in complex systems research. Notably, we analyze time series of neuron activations with recurrence plots (RPs) and recurrence quantification analysis (RQA), which make it possible to visualize and characterize high-dimensional dynamical systems. We show that this approach is useful in a number of ways. First, the 2-D representation offered by RPs provides a visualization of the high-dimensional reservoir dynamics. Our results suggest that, if the network is stable, the reservoir and the input generate similar line patterns in their respective RPs. Conversely, as the ESN becomes unstable, the patterns in the RP of the reservoir change. As a second result, we show that an RQA measure, called Lmax, is highly correlated with the well-established maximal local Lyapunov exponent. This suggests that complexity measures based on the distribution of RP diagonal lines can quantify network stability. Finally, our analysis shows that all RQA measures fluctuate in the proximity of the so-called edge of stability, where an ESN typically achieves maximum computational capability. We leverage this property to determine the edge of stability and show that our criterion is more accurate than two well-known counterparts, both based on the Jacobian matrix of the reservoir. Therefore, we claim that RPs and RQA-based analyses are valuable tools for designing an ESN for a specific problem.
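The RQA quantities mentioned above can be computed directly from a matrix of reservoir (or input) states. Below is a minimal sketch, not the paper's implementation: `recurrence_plot` builds the binary recurrence matrix from pairwise Euclidean distances under a hypothetical threshold `eps`, and `longest_diagonal` extracts Lmax, the length of the longest diagonal line off the main diagonal.

```python
import numpy as np

def recurrence_plot(states, eps):
    """Binary recurrence matrix R[i, j] = 1 when the states at times i and j
    lie within Euclidean distance eps of each other; states has shape (T, N)."""
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def longest_diagonal(R):
    """Lmax: length of the longest diagonal line of recurrent points,
    excluding the trivial main diagonal."""
    T = R.shape[0]
    best = 0
    for k in range(1, T):                    # scan each off-main diagonal
        run = cur = 0
        for v in np.diagonal(R, offset=k):
            cur = cur + 1 if v else 0        # length of the current run of 1s
            run = max(run, cur)
        best = max(best, run)
    return best
```

Long diagonals indicate stretches of the trajectory that revisit earlier behaviour, which is why Lmax tracks the divergence rate measured by the maximal local Lyapunov exponent.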
Training Echo State Networks with Regularization through Dimensionality Reduction
In this paper we introduce a new framework to train an Echo State Network to
predict real-valued time series. The method consists of projecting the output
of the internal layer of the network onto a space of lower dimensionality
before training the output layer to learn the target task. Notably, we enforce
a regularization constraint that leads to better generalization capabilities.
We evaluate the performance of our approach on several benchmark tests, using
different techniques to train the readout of the network, achieving superior
predictive performance with the proposed framework. Finally, we provide
insight into the effectiveness of the implemented mechanism through a
visualization of the trajectory in the phase space, relying on the
methodologies of nonlinear time-series analysis. By applying our method to
well-known chaotic systems, we provide evidence that the lower-dimensional
embedding retains the dynamical properties of the underlying system better
than the full-dimensional internal states of the network.
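A minimal sketch of the idea described above, using PCA via SVD as the dimensionality-reduction step and ordinary least squares as the readout; the reservoir size, spectral radius, reduced dimension, and toy sine input are hypothetical choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, d = 100, 500, 10          # reservoir units, time steps, reduced dimension

# Random untrained reservoir driven by a scalar input signal.
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9
w_in = rng.normal(0, 1, N)
u = np.sin(0.1 * np.arange(T + 1))          # toy input; target = next value
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Project reservoir states onto the top-d principal components before
# training the readout; the restriction to d dimensions acts as the
# regularization constraint described in the abstract.
mu = states.mean(axis=0)
_, _, Vt = np.linalg.svd(states - mu, full_matrices=False)
Z = (states - mu) @ Vt[:d].T                # (T, d) compressed states

# Linear readout fitted on the compressed representation.
y = u[1:]                                   # one-step-ahead prediction target
w_out, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = Z @ w_out
```

Fitting the readout on `Z` instead of the full `states` matrix is what constrains the solution to the low-dimensional subspace.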
Integer Echo State Networks: Hyperdimensional Reservoir Computing
We propose an approximation of Echo State Networks (ESN) that can be
efficiently implemented on digital hardware based on the mathematics of
hyperdimensional computing. The reservoir of the proposed Integer Echo State
Network (intESN) is a vector containing only n-bit integers (where n<8 is
normally sufficient for satisfactory performance). The recurrent matrix
multiplication is replaced with an efficient cyclic shift operation. The intESN
architecture is verified on typical tasks in reservoir computing: memorizing
a sequence of inputs, classifying time series, and learning dynamic processes.
Such an architecture results in dramatic improvements in memory footprint and
computational efficiency, with minimal performance loss. Comment: 10 pages, 10 figures, 1 table
Bidirectional deep-readout echo state networks
We propose a deep architecture for the classification of multivariate time
series. By means of a recurrent and untrained reservoir we generate a vectorial
representation that embeds temporal relationships in the data. To improve the
memorization capability, we implement a bidirectional reservoir, whose last
state also captures past dependencies in the input. We apply dimensionality
reduction to the final reservoir states to obtain compressed fixed size
representations of the time series. These are subsequently fed into a deep
feedforward network trained to perform the final classification. We test our
architecture on benchmark datasets and on a real-world use case of blood
sample classification. Results show that our method performs better than a
standard echo state network and, at the same time, achieves results comparable
to those of a fully trained recurrent network, but with faster training.
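A minimal sketch of the bidirectional-reservoir idea, assuming a scalar input sequence and hypothetical sizes: the same untrained reservoir is run over the sequence and over its time-reversed copy, and the two final states are concatenated into a fixed-size representation (the dimensionality-reduction and deep-readout stages are omitted here).

```python
import numpy as np

rng = np.random.default_rng(2)

N = 50                                      # hypothetical reservoir size

W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9
w_in = rng.normal(0, 1, N)

def last_state(u):
    """Run the untrained reservoir over a scalar sequence, return final state."""
    x = np.zeros(N)
    for v in u:
        x = np.tanh(W @ x + w_in * v)
    return x

def bidirectional_representation(u):
    """Concatenate the final states of the forward and time-reversed passes,
    so the fixed-size vector reflects dependencies in both directions."""
    return np.concatenate([last_state(u), last_state(u[::-1])])

rep = bidirectional_representation(np.sin(0.3 * np.arange(100)))
```

The resulting vector `rep` is what would then be compressed and fed to the feedforward classifier; only that classifier is trained, which is why training is faster than for a fully trained recurrent network.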
Biological computation through recurrence
One of the defining features of living systems is their adaptability to
changing environmental conditions. This requires organisms to extract temporal
and spatial features of their environment, and use that information to compute
the appropriate response. In the last two decades, a growing body of work,
mainly coming from the machine learning and computational neuroscience fields,
has shown that such complex information processing can be performed by
recurrent networks. In those networks, temporal computations emerge from the
interaction between incoming stimuli and the internal dynamic state of the
network. In this article we review our current understanding of how recurrent
networks can be used by biological systems, from cells to brains, for complex
information processing. Rather than focusing on sophisticated, artificial
recurrent architectures such as long short-term memory (LSTM) networks, here we
concentrate on simpler network structures and learning algorithms that can be
expected to have been found by evolution. We also review studies showing
evidence of naturally occurring recurrent networks in living organisms. Lastly,
we discuss some relevant evolutionary aspects concerning the emergence of this
natural computation paradigm. Comment: 19 pages, 3 figures