Training Echo State Networks with Regularization through Dimensionality Reduction
In this paper we introduce a new framework for training an Echo State Network to
predict real-valued time series. The method consists of projecting the output of
the network's internal layer onto a lower-dimensional space before training the
output layer on the target task. Notably, we enforce a regularization constraint
that leads to better generalization. We evaluate the performance of our approach
on several benchmark tests, using different techniques to train the readout of
the network, and achieve superior predictive performance with the proposed
framework. Finally, we provide insight into the effectiveness of the implemented
mechanism through a visualization of the trajectory in phase space, relying on
methodologies from nonlinear time-series analysis. By applying our method to
well-known chaotic systems, we provide evidence that the lower-dimensional
embedding retains the dynamical properties of the underlying system better than
the full-dimensional internal states of the network.
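The core idea above can be sketched in a few lines: drive a random reservoir with the input, project the collected states onto their top principal components, and fit a ridge readout on the reduced states. This is a minimal illustrative sketch, not the authors' implementation; all sizes, the toy signal, and the ridge regularizer are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100, 10                                # reservoir size, reduced dimension
W_in = rng.uniform(-0.5, 0.5, (N, 1))         # input weights
W = rng.uniform(-0.5, 0.5, (N, N))            # recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius to 0.9

u = np.sin(0.1 * np.arange(300))              # toy input series
y = np.roll(u, -1)                            # one-step-ahead prediction target

# Collect reservoir states driven by the input
x = np.zeros(N)
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])
    states.append(x.copy())
X = np.array(states)[50:]                     # drop an initial washout
Y = y[50:len(u)]

# PCA via SVD on the centered states: keep the top-d components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:d].T                             # projected, lower-dimensional states

# Ridge-regression readout trained on the reduced states
lam = 1e-6
W_out = np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Y)
pred = Z @ W_out
```

The projection acts as the regularizer: the readout sees only d directions of the reservoir dynamics instead of all N.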
Bidirectional deep-readout echo state networks
We propose a deep architecture for the classification of multivariate time
series. By means of a recurrent, untrained reservoir we generate a vectorial
representation that embeds temporal relationships in the data. To improve the
memorization capability, we implement a bidirectional reservoir, whose last
state also captures past dependencies in the input. We apply dimensionality
reduction to the final reservoir states to obtain compressed, fixed-size
representations of the time series. These are subsequently fed into a deep
feedforward network trained to perform the final classification. We test our
architecture on benchmark datasets and on a real-world use case of blood
sample classification. Results show that our method performs better than a
standard echo state network and, at the same time, achieves results comparable
to a fully trained recurrent network, with much faster training.
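A bidirectional reservoir representation of the kind described can be sketched as the concatenation of the last state from a forward pass with the last state from a pass over the time-reversed series. This is an illustrative assumption-laden sketch (reservoir size, scaling, and the toy series are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius to 0.9

def last_state(series):
    # Run the (untrained) reservoir over the series, return its final state
    x = np.zeros(N)
    for u in series:
        x = np.tanh(W @ x + W_in * u)
    return x

series = np.sin(0.2 * np.arange(100))
# Forward pass + pass over the reversed series -> fixed-size vector of length 2N
rep = np.concatenate([last_state(series), last_state(series[::-1])])
```

The resulting vector `rep` is what would then be compressed by dimensionality reduction and fed to a feedforward classifier.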
Integer Echo State Networks: Hyperdimensional Reservoir Computing
We propose an approximation of Echo State Networks (ESNs) that can be
efficiently implemented on digital hardware based on the mathematics of
hyperdimensional computing. The reservoir of the proposed Integer Echo State
Network (intESN) is a vector containing only n-bit integers (where n<8 is
normally sufficient for satisfactory performance). The recurrent matrix
multiplication is replaced with an efficient cyclic shift operation. The intESN
architecture is verified on typical reservoir computing tasks: memorizing a
sequence of inputs, classifying time series, and learning dynamic processes.
The architecture yields dramatic improvements in memory footprint and
computational efficiency, with minimal performance loss.
Comment: 10 pages, 10 figures, 1 table
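The intESN update can be illustrated with a minimal sketch: the state is an integer vector, the recurrent matrix multiply becomes a cyclic shift, and values are clipped to an n-bit range. The input encoding below (a fixed random bipolar vector scaled by the rounded input) is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_bits = 64, 4
lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1   # n-bit integer range

# Fixed random bipolar vector used to project the input into the reservoir
v = np.sign(rng.standard_normal(N)).astype(int)

x = np.zeros(N, dtype=int)                   # integer reservoir state
for u in np.sin(0.3 * np.arange(50)):
    # Cyclic shift replaces the recurrent matrix multiplication;
    # clipping keeps every entry inside the n-bit range.
    x = np.clip(np.roll(x, 1) + np.rint(u).astype(int) * v, lo, hi)
```

Because the state is a short integer vector and the recurrence is a shift, both the memory footprint and the per-step cost shrink dramatically compared with a float-valued ESN.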
Empirical Analysis of the Necessary and Sufficient Conditions of the Echo State Property
The Echo State Network (ESN) is a specific recurrent network that has gained
popularity in recent years. The model contains a recurrent network, named the
reservoir, that is fixed during the learning process. The reservoir transforms
the input space into a higher-dimensional space. A fundamental property that
affects the model's accuracy is the Echo State Property (ESP). There are two
main theoretical results related to the ESP: a sufficient condition for the
ESP, which involves the singular values of the reservoir matrix, and a
necessary condition, under which the ESP can be violated depending on the
spectral radius of the reservoir matrix. There is a theoretical gap between
these necessary and sufficient conditions. This article presents an empirical
analysis of the accuracy and the projections of reservoirs that fall within
this theoretical gap, and gives some insights into the generation of the
reservoir matrix. From previous work, it is known that optimal accuracy is
obtained near the edge of stability of the dynamics. According to our
empirical results, this border appears to lie closer to the sufficient
condition than to the necessary condition of the ESP.
Comment: 23 pages, 14 figures, accepted paper for the IEEE IJCNN, 201
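The two bounds and the gap between them are easy to exhibit numerically: for a non-normal random reservoir matrix, the largest singular value (sufficient condition) typically exceeds the spectral radius (necessary condition), so a matrix can satisfy one and not the other. A small sketch, with the matrix size and scaling chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.uniform(-1, 1, (100, 100))
rho = max(abs(np.linalg.eigvals(W)))     # spectral radius
W = W * (0.95 / rho)                     # rescale so the spectral radius is 0.95

rho = max(abs(np.linalg.eigvals(W)))     # now 0.95: necessary condition holds
sigma = np.linalg.norm(W, 2)             # largest singular value

necessary_ok = rho < 1                   # ESP is not ruled out
sufficient_ok = sigma < 1                # usually False here: W sits in the gap
```

Matrices like this one, with spectral radius below 1 but largest singular value above 1, are exactly the reservoirs that fall inside the theoretical gap studied empirically in the paper.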
Reservoir computing approaches for representation and classification of multivariate time series
Classification of multivariate time series (MTS) has been tackled with a
large variety of methodologies and applied to a wide range of scenarios.
Reservoir Computing (RC) provides efficient tools to generate a vectorial,
fixed-size representation of an MTS that can be further processed by standard
classifiers. Despite their unrivaled training speed, MTS classifiers based on a
standard RC architecture fail to achieve the same accuracy as fully trainable
neural networks. In this paper we introduce the reservoir model space, an
unsupervised approach based on RC to learn vectorial representations of MTS.
Each MTS is encoded in the parameters of a linear model trained to predict
a low-dimensional embedding of the reservoir dynamics. Compared to other RC
methods, our model space yields better representations and attains comparable
computational performance, thanks to an intermediate dimensionality-reduction
procedure. As a second contribution, we propose a modular RC framework for MTS
classification, with an associated open-source Python library. The framework
provides different modules to seamlessly implement advanced RC architectures.
The architectures are compared to other MTS classifiers, including deep
learning models and time-series kernels. Results obtained on benchmark and
real-world MTS datasets show that RC classifiers are dramatically faster and,
when implemented using our proposed representation, also achieve superior
classification accuracy.
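The reservoir model space idea can be sketched as follows: drive a fixed reservoir with the series, reduce the collected states with PCA, fit a ridge model that predicts the next reduced state from the current one, and use that model's flattened parameters as the fixed-size representation of the series. All sizes and the regularizer below are illustrative assumptions, not the library's API:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 60, 5                                  # reservoir size, embedding dimension
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius to 0.9

def representation(series):
    # Collect reservoir states driven by the series
    x, states = np.zeros(N), []
    for u in series:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    X = np.array(states)
    # PCA via SVD: low-dimensional embedding of the reservoir dynamics
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:d].T
    # Ridge model predicting the next reduced state from the current one;
    # its parameters are the fixed-size representation of the series.
    A, B = Z[:-1], Z[1:]
    theta = np.linalg.solve(A.T @ A + 1e-6 * np.eye(d), A.T @ B)
    return theta.ravel()

rep = representation(np.sin(0.2 * np.arange(200)))
```

Two series with similar dynamics yield similar model parameters, so a standard classifier can operate directly on these vectors.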