Using Echo State Networks for Cryptography
Echo state networks are simple recurrent neural networks that are easy to
implement and train. Despite their simplicity, they show a form of memory and
can predict or regenerate sequences of data. We make use of this property to
realize a novel neural cryptography scheme. The key idea is to assume that
Alice and Bob share a copy of an echo state network. If Alice trains her copy
to memorize a message, she can communicate the trained part of the network to
Bob who plugs it into his copy to regenerate the message. Considering a
byte-level representation of input and output, the technique applies to arbitrary
types of data (texts, images, audio files, etc.) and practical experiments
reveal it to satisfy the fundamental cryptographic properties of diffusion and
confusion.
Comment: 8 pages, ICANN 201
Product Reservoir Computing: Time-Series Computation with Multiplicative Neurons
Echo state networks (ESN), a type of reservoir computing (RC) architecture,
are efficient and accurate artificial neural systems for time series processing
and learning. An ESN consists of a core of recurrent neural networks, called a
reservoir, with a small number of tunable parameters to generate a
high-dimensional representation of an input, and a readout layer which is
easily trained using regression to produce a desired output from the reservoir
states. Certain computational tasks involve real-time calculation of high-order
time correlations, which requires nonlinear transformation either in the
reservoir or the readout layer. Traditional ESN employs a reservoir with
sigmoid or tanh function neurons. In contrast, some types of biological neurons
obey response curves that can be described as a product unit rather than a sum
and threshold. Inspired by this class of neurons, we introduce a RC
architecture with a reservoir of product nodes for time series computation. We
find that the product RC shows many properties of standard ESN such as
short-term memory and nonlinear capacity. On standard benchmarks for chaotic
prediction tasks, the product RC maintains the performance of a standard
nonlinear ESN while being more amenable to mathematical analysis. Our study
provides evidence that such networks are powerful in highly nonlinear tasks
owing to high-order statistics generated by the recurrent product node
reservoir.
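The contrast between a sum-and-threshold node and a multiplicative node can be sketched as follows. The product update shown here is a hypothetical minimal form chosen for illustration, not the paper's exact node model; all weights and scalings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 100
w_in = rng.uniform(-0.1, 0.1, n_res)
W = rng.uniform(-0.1, 0.1, (n_res, n_res))
W *= 0.5 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics contractive

def step_tanh(x, u):
    # Standard ESN node: weighted sum squashed by a sigmoid-like function.
    return np.tanh(w_in * u + W @ x)

def step_product(x, u):
    # Hypothetical product node: the input drive multiplies the recurrent
    # drive, so the state carries products of current and past inputs
    # (high-order terms) instead of a saturated sum.
    return (1.0 + W @ x) * (w_in * u)

# Drive both reservoirs with the same random input sequence.
u_seq = rng.uniform(-1, 1, 50)
x_t = np.zeros(n_res)
x_p = np.zeros(n_res)
for u in u_seq:
    x_t = step_tanh(x_t, u)
    x_p = step_product(x_p, u)
```

Expanding the product update over time shows why such a reservoir natively generates high-order input correlations: each state component is a polynomial in past inputs rather than a nested saturation.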
Echo State Networks with Self-Normalizing Activations on the Hyper-Sphere
Among the various architectures of Recurrent Neural Networks, Echo State
Networks (ESNs) emerged due to their simplified and inexpensive training
procedure. These networks are known to be sensitive to the setting of
hyper-parameters, which critically affect their behaviour. Results show that
their performance is usually maximized in a narrow region of hyper-parameter
space called edge of chaos. Finding such a region requires searching in
hyper-parameter space in a sensible way: hyper-parameter configurations
marginally outside such a region might yield networks exhibiting fully
developed chaos, hence producing unreliable computations. The performance gain
due to optimizing hyper-parameters can be studied by considering the
memory--nonlinearity trade-off, i.e., the fact that increasing the nonlinear
behavior of the network degrades its ability to remember past inputs, and
vice-versa. In this paper, we propose a model of ESNs that eliminates critical
dependence on hyper-parameters, resulting in networks that provably cannot
enter a chaotic regime and, at the same time, exhibit nonlinear behaviour in
phase space together with a memory of past inputs comparable to that of linear
networks. Our contribution is supported by experiments
corroborating our theoretical findings, showing that the proposed model
displays dynamics rich enough to approximate many common nonlinear systems
used for benchmarking.
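One reading of "self-normalizing activations on the hyper-sphere" is that each update rescales the state to unit norm, so the norm can never grow and one route to chaos is closed off by construction. A minimal sketch under that assumption (weights and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_res = 50
w_in = rng.normal(size=n_res)
W = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res)

def step(x, u):
    # Self-normalizing update (sketch): rescale the pre-activation to unit
    # norm, so the state always lies on the hyper-sphere regardless of the
    # spectral radius of W -- norm growth, hence divergence, is impossible.
    z = w_in * u + W @ x
    return z / np.linalg.norm(z)

x = np.ones(n_res) / np.sqrt(n_res)  # start on the sphere
for u in rng.uniform(-1, 1, 100):
    x = step(x, u)
```

Because the state is confined to the sphere, the usual spectral-radius tuning that places a standard ESN near the edge of chaos is no longer the critical knob.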
Training Echo State Networks with Regularization through Dimensionality Reduction
In this paper we introduce a new framework to train an Echo State Network to
predict real-valued time series. The method consists of projecting the output
of the internal layer of the network on a space with lower dimensionality,
before training the output layer to learn the target task. Notably, we enforce
a regularization constraint that leads to better generalization capabilities.
We evaluate the performance of our approach on several benchmark tests, using
different techniques to train the readout of the network, achieving superior
predictive performance when using the proposed framework. Finally, we provide
insight into the effectiveness of the implemented mechanism through a
visualization of the trajectory in phase space, relying on the methodologies
of nonlinear time-series analysis. By applying our method to well-known
chaotic systems, we provide evidence that the lower-dimensional embedding
retains the dynamical properties of the underlying system better than the
full-dimensional internal states of the network.
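The projection-before-readout idea can be sketched as PCA on the collected reservoir states followed by a ridge readout in the reduced space. This is a sketch of the general idea; the choice of PCA, and the values of `k` and the ridge strength, are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def train_readout_reduced(states, targets, k=20, ridge=1e-6):
    """Project reservoir states onto their top-k principal components,
    then fit a ridge readout in the reduced space (illustrative sketch)."""
    mu = states.mean(axis=0)
    y0 = targets.mean()
    Xc = states - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    P = Vt[:k].T                          # (n_states, k) projection matrix
    Z = Xc @ P                            # reduced reservoir states
    w = np.linalg.solve(Z.T @ Z + ridge * np.eye(k), Z.T @ (targets - y0))
    return mu, y0, P, w

def predict(states, mu, y0, P, w):
    return (states - mu) @ P @ w + y0

# Synthetic check: low-rank "reservoir states" and a linear target.
rng = np.random.default_rng(3)
B = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 40))  # rank-5 states
beta = rng.normal(size=40)
y = B @ beta
mu, y0, P, w = train_readout_reduced(B, y, k=20)
y_hat = predict(B, mu, y0, P, w)
```

Restricting the readout to the leading components acts as the regularization constraint described in the abstract: directions of low variance, which are most prone to overfitting, are simply discarded.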
A characterization of the Edge of Criticality in Binary Echo State Networks
Echo State Networks (ESNs) are simplified recurrent neural network models
composed of a reservoir and a linear, trainable readout layer. The reservoir is
tunable by some hyper-parameters that control the network behaviour. ESNs are
known to be effective in solving tasks when configured in a region of
(hyper-)parameter space called the \emph{Edge of Criticality} (EoC), where the
system is maximally sensitive to perturbations, which strongly affect its
behaviour.
In this paper, we propose binary ESNs, which are architecturally equivalent to
standard ESNs but consider binary activation functions and binary recurrent
weights. For these networks, we derive a closed-form expression for the EoC in
the autonomous case and perform simulations in order to assess their behavior
in the case of noisy neurons and in the presence of a signal. We propose a
theoretical explanation for the fact that the variance of the input plays a
major role in characterizing the EoC.
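A binary ESN of this kind, together with the perturbation-sensitivity probe that motivates the EoC, can be sketched as follows. The sign activation, the 1/n scaling, and the single-neuron flip are illustrative choices, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
W = rng.choice([-1.0, 1.0], size=(n, n))    # binary recurrent weights
w_in = rng.choice([-1.0, 1.0], size=n)      # binary input weights

def step(x, u):
    # Binary neuron: the next state is the sign of the weighted sum of the
    # recurrent drive (scaled by 1/n) and the input drive.
    return np.sign(W @ x / n + w_in * u)

# Run two copies from states differing in a single neuron and track the
# Hamming distance between them: near the EoC this perturbation neither
# dies out immediately nor saturates the whole network.
x_a = np.sign(rng.normal(size=n))
x_b = x_a.copy()
x_b[0] = -x_b[0]                            # flip one neuron
distances = []
for u in rng.uniform(-1, 1, 30):
    x_a, x_b = step(x_a, u), step(x_b, u)
    distances.append(int(np.sum(x_a != x_b)))
```

The role of input variance claimed in the abstract is visible in this probe: a strong input drive (`w_in * u`) tends to dominate the recurrent term and synchronize the two copies, while a weak one lets the perturbation propagate.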