Product Reservoir Computing: Time-Series Computation with Multiplicative Neurons
Echo state networks (ESNs), a type of reservoir computing (RC) architecture,
are efficient and accurate artificial neural systems for time series processing
and learning. An ESN consists of a core of recurrent neural networks, called a
reservoir, with a small number of tunable parameters to generate a
high-dimensional representation of an input, and a readout layer which is
easily trained using regression to produce a desired output from the reservoir
states. Certain computational tasks involve real-time calculation of high-order
time correlations, which requires nonlinear transformation either in the
reservoir or the readout layer. Traditional ESNs employ a reservoir of sigmoid
or tanh neurons. In contrast, some types of biological neurons have response
curves that are better described by a product unit than by a sum-and-threshold
unit. Inspired by this class of neurons, we introduce an RC architecture with a
reservoir of product nodes for time series computation. We
find that the product RC shows many properties of a standard ESN, such as
short-term memory and nonlinear capacity. On standard benchmarks for chaotic
prediction tasks, the product RC maintains the performance of a standard
nonlinear ESN while being more amenable to mathematical analysis. Our study
provides evidence that such networks are powerful in highly nonlinear tasks
owing to high-order statistics generated by the recurrent product node
reservoir.
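A minimal sketch of the ESN pipeline described above, assuming a standard tanh reservoir driven by a scalar input and a linear readout fit by least squares. The `product_update` function is a hypothetical illustration of a product-unit node, where inputs are combined multiplicatively through powers set by the weights; it is not necessarily the exact node used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_in = 100, 1                                   # reservoir size, input dimension
W = rng.normal(0, 1 / np.sqrt(N), (N, N))          # recurrent reservoir weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()      # keep spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, (N, n_in))           # input weights

def tanh_update(x, u):
    """Standard ESN node: weighted sum passed through tanh."""
    return np.tanh(W @ x + W_in @ u)

def product_update(x, u, eps=1e-6):
    """Hypothetical product node: combine inputs multiplicatively,
    i.e. a product of (shifted) states raised to the connection weights."""
    z = np.abs(x) + eps                            # keep the base positive
    return np.exp(W @ np.log(z)) * np.exp(W_in @ u)

# Drive the tanh reservoir with a signal and train the readout by regression.
u_seq = np.sin(0.2 * np.arange(500))
x, states = np.zeros(N), []
for u in u_seq[:-1]:
    x = tanh_update(x, np.array([u]))
    states.append(x)

X = np.array(states[100:])                         # discard the initial transient
y = u_seq[101:]                                    # one-step-ahead targets
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)      # easily trained readout
pred = X @ W_out                                   # readout predictions
```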
Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control
It is widely accepted that the complex dynamics characteristic of recurrent
neural circuits contributes in a fundamental manner to brain function. Progress
has been slow in understanding and exploiting the computational power of
recurrent dynamics for two main reasons: nonlinear recurrent networks often
exhibit chaotic behavior and most known learning rules do not work in robust
fashion in recurrent networks. Here we address both these problems by
demonstrating how random recurrent networks (RRNs) that initially exhibit
chaotic dynamics can be tuned through a supervised learning rule to generate
locally stable neural patterns of activity that are both complex and robust to
noise. The outcome is a novel neural network regime that exhibits both
transiently stable and chaotic trajectories. We further show that the recurrent
learning rule dramatically increases the ability of RRNs to generate complex
spatiotemporal motor patterns, and accounts for recent experimental data
showing a decrease in neural variability in response to stimulus onset.
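A minimal sketch of the kind of random recurrent network the abstract starts from, assuming the standard rate model dx/dt = -x + g J tanh(x), whose autonomous activity is typically chaotic for gain g > 1. The supervised learning rule that tunes such a network toward locally stable trajectories is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, g, dt, tau = 200, 1.5, 0.1, 1.0
J = rng.normal(0, 1 / np.sqrt(N), (N, N))    # random recurrent connectivity

x = rng.normal(0, 0.5, N)                    # initial state
trace = []
for _ in range(2000):
    r = np.tanh(x)                           # firing rates
    x = x + (dt / tau) * (-x + g * (J @ r))  # Euler step of the rate dynamics
    trace.append(r[0])
# With g = 1.5 the single-unit trace is typically irregular (chaotic regime);
# with g < 1 activity decays to a fixed point.
```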
full-FORCE: A Target-Based Method for Training Recurrent Networks
Trained recurrent networks are powerful tools for modeling dynamic neural
computations. We present a target-based method for modifying the full
connectivity matrix of a recurrent network to train it to perform tasks
involving temporally complex input/output transformations. The method
introduces a second network during training to provide suitable "target"
dynamics useful for performing the task. Because it exploits the full recurrent
connectivity, the method produces networks that perform tasks with fewer
neurons and greater noise robustness than traditional least-squares (FORCE)
approaches. In addition, we show how introducing additional input signals into
the target-generating network, which act as task hints, greatly extends the
range of tasks that can be learned and provides control over the complexity and
nature of the dynamics of the trained, task-performing network.
Comment: 20 pages, 8 figures
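A schematic, batch least-squares sketch of the target-based idea: a second, driven network receives the desired output f(t), and the task network's full connectivity and linear readout are then fit against the driven network's recurrent currents and the target output. This simplification omits the recursive least-squares updates, the feedback of the task network's own activity during training, and the hint inputs used by the full method; the network sizes and signals below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, dt, g = 200, 3000, 0.05, 1.5
JD = g * rng.normal(0, 1 / np.sqrt(N), (N, N))   # target-generating network
u_f = rng.uniform(-1, 1, N)                      # input weights for f(t)

t = np.arange(T) * dt
f = np.sin(0.5 * t)                              # desired output signal

# Run the driven network and record its states and "target" recurrent currents.
xD = np.zeros(N)
R, targets = [], []
for k in range(T):
    rD = np.tanh(xD)
    drive = JD @ rD + u_f * f[k]                 # target current for this time step
    xD = xD + dt * (-xD + drive)
    R.append(rD)
    targets.append(drive)
R, targets = np.array(R), np.array(targets)

# Fit the task network's full connectivity J so that J @ r matches the target
# currents, and a linear readout w so that w @ r reproduces f(t).
J = np.linalg.lstsq(R, targets, rcond=None)[0].T
w = np.linalg.lstsq(R, f, rcond=None)[0]
```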
Adaptive, locally-linear models of complex dynamics
The dynamics of complex systems generally include high-dimensional,
non-stationary and non-linear behavior, all of which pose fundamental
challenges to quantitative understanding. To address these difficulties we
detail a new approach based on local linear models within windows determined
adaptively from the data. While the dynamics within each window are simple,
consisting of exponential decay, growth and oscillations, the collection of
local parameters across all windows provides a principled characterization of
the full time series. To explore the resulting model space, we develop a novel
likelihood-based hierarchical clustering and we examine the eigenvalues of the
linear dynamics. We demonstrate our analysis with the Lorenz system undergoing
stable spiral dynamics and in the standard chaotic regime. Applied to the
posture dynamics of the nematode C. elegans, our approach identifies
fine-grained behavioral states and model dynamics which fluctuate close to an
instability boundary, and we detail a bifurcation in a transition from forward
to backward crawling. Finally, we analyze whole-brain imaging in C. elegans and
show that the stability of global brain states changes with oxygen
concentration.
Comment: 25 pages, 16 figures
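A minimal sketch of the per-window fit, assuming discrete-time locally-linear dynamics x[t+1] ≈ A x[t] + b estimated by least squares; the eigenvalues of A then summarize decay, growth, and oscillation within the window. The adaptive window selection and likelihood-based hierarchical clustering described in the abstract are not shown, and the example signal is an illustrative assumption.

```python
import numpy as np

def fit_local_linear(X):
    """Least-squares fit of x[t+1] = A x[t] + b inside one window.
    X has shape (T, d): T samples of a d-dimensional state."""
    past, future = X[:-1], X[1:]
    P = np.hstack([past, np.ones((len(past), 1))])   # constant column absorbs b
    coef, *_ = np.linalg.lstsq(P, future, rcond=None)
    A, b = coef[:-1].T, coef[-1]
    return A, b

# Example window: a noisy damped oscillation. Eigenvalue moduli below 1
# indicate locally stable dynamics; complex pairs indicate oscillation.
rng = np.random.default_rng(3)
t = np.arange(300) * 0.1
X = np.column_stack([np.exp(-0.05 * t) * np.cos(t),
                     np.exp(-0.05 * t) * np.sin(t)])
X += 0.01 * rng.normal(size=X.shape)
A, b = fit_local_linear(X)
print(np.abs(np.linalg.eigvals(A)))
```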
Neural networks with dynamical synapses: from mixed-mode oscillations and spindles to chaos
Understanding short-term synaptic depression (STSD) and other forms of
synaptic plasticity is a topical problem in neuroscience. Here we study the
role of STSD in the formation of complex patterns of brain rhythms. We use a
cortical circuit model of neural networks composed of irregularly spiking
excitatory and inhibitory neurons with type 1 and type 2 excitability and
stochastic dynamics. In the model, neurons form a sparsely connected network
and their spontaneous activity is driven by random spikes representing synaptic
noise. Using simulations and analytical calculations, we found that if STSD
is absent, the neural network shows either asynchronous behavior or regular
network oscillations depending on the noise level. In networks with STSD,
changing parameters of synaptic plasticity and the noise level, we observed
transitions to complex patterns of collective activity: mixed-mode and spindle
oscillations, bursts of collective activity, and chaotic behaviour.
Interestingly, these patterns are stable in a certain range of the parameters
and separated by critical boundaries. Thus, the parameters of synaptic
plasticity can play the role of control parameters, or switches, between
different network states. However, changes in these parameters caused by
disease may lead
to dramatic impairment of ongoing neural activity. We analyze the chaotic
neural activity by use of the 0-1 test for chaos (Gottwald, G. & Melbourne, I.,
2004) and show that it has a collective nature.
Comment: 7 pages, Proceedings of the 12th Granada Seminar, September 17-21, 201
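A minimal sketch of the 0-1 test for chaos cited above (Gottwald & Melbourne), assuming a scalar observable sampled from the dynamics and using the simple correlation form of the statistic K; K near 1 indicates chaos, K near 0 regular dynamics, with the median taken over several random angles c. The paper applies the test to the collective network activity; the logistic-map example below is only an illustration.

```python
import numpy as np

def zero_one_test(phi, n_angles=20, seed=4):
    """Correlation version of the 0-1 test for chaos on a scalar time series."""
    rng = np.random.default_rng(seed)
    N = len(phi)
    n_cut = N // 10                                  # displacements up to N/10
    j = np.arange(1, N + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_angles):
        p = np.cumsum(phi * np.cos(j * c))           # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in range(1, n_cut)])     # mean-square displacement
        n_vals = np.arange(1, n_cut)
        Ks.append(np.corrcoef(n_vals, M)[0, 1])      # growth rate of M(n)
    return np.median(Ks)

# Example: the chaotic logistic map gives K close to 1.
x, traj = 0.3, []
for _ in range(2000):
    x = 4.0 * x * (1 - x)
    traj.append(x)
print(zero_one_test(np.array(traj)))
```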