Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control
It is widely accepted that the complex dynamics characteristic of recurrent
neural circuits contribute in a fundamental way to brain function. Progress
in understanding and exploiting the computational power of recurrent dynamics
has been slow for two main reasons: nonlinear recurrent networks often
exhibit chaotic behavior, and most known learning rules do not work robustly
in recurrent networks. Here we address both problems by demonstrating how
random recurrent networks (RRNs) that initially exhibit chaotic dynamics can
be tuned through a supervised learning rule to generate locally stable
patterns of neural activity that are both complex and robust to noise. The
outcome is a novel neural network regime that exhibits both transiently
stable and chaotic trajectories. We further show that the recurrent learning
rule dramatically increases the ability of RRNs to generate complex
spatiotemporal motor patterns, and accounts for recent experimental data
showing a decrease in neural variability at stimulus onset.
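The chaotic regime the abstract starts from can be illustrated with a standard rate-based random recurrent network. The model below is a minimal sketch under assumed conventions (the dynamics x' = -x + g·W·tanh(x) with gain parameter g), not the paper's exact architecture or learning rule: for gain g below 1 activity decays to rest, while for g sufficiently above 1 the network sustains irregular, chaotic-like activity.

```python
import numpy as np

# Minimal sketch (an assumption, not the paper's exact model): a rate-based
# random recurrent network with dynamics x' = -x + g * W @ tanh(x).
# For gain g > 1 such networks are typically in the chaotic regime the
# abstract refers to; for g < 1 activity decays to the quiescent state.
def simulate_rrn(g, N=200, T=500, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Random recurrent weights scaled so the spectral radius is ~1.
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    x = rng.normal(0.0, 0.5, N)            # random initial state
    traj = np.empty((T, N))
    for t in range(T):                     # forward Euler integration
        x = x + dt * (-x + g * W @ np.tanh(x))
        traj[t] = x
    return traj

quiet = simulate_rrn(g=0.5)    # subcritical gain: activity dies out
active = simulate_rrn(g=1.8)   # supercritical gain: sustained activity
```

The supervised rule in the paper then tunes such a network so that selected trajectories become locally stable; this sketch only reproduces the untrained starting point.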
Quantification of reachable attractors in asynchronous discrete dynamics
Motivation: Models of discrete concurrent systems often lead to huge and
complex state transition graphs that represent their dynamics, which makes
it difficult to analyse dynamical properties. In particular, for logical
models of biological regulatory networks, it is of real interest to study
attractors and their reachability from specific initial conditions, i.e. to
assess the potential asymptotic behaviours of the system. Beyond identifying
the reachable attractors, we propose to quantify this reachability.
Results: Relying on the structure of the state transition graph, we estimate
the probability of each attractor reachable from a given initial condition or
from a portion of the state space. First, we present a quasi-exact solution
with an original algorithm called Firefront, based on the exhaustive
exploration of the reachable state space. Then, we introduce an adapted
version of a Monte Carlo simulation algorithm, termed Avatar, better suited
to larger models. Firefront and Avatar are validated and compared to other
related approaches, using as test cases logical models of synthetic and
biological networks.
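The Monte Carlo idea behind an Avatar-style estimator can be sketched on a toy logical model. The example below is a hypothetical illustration, not the paper's implementation: a two-gene toggle switch (each gene represses the other) updated asynchronously, with the probability of reaching each of its two stable states estimated by repeated simulation from a given initial condition.

```python
import random

# Toy illustration (not the paper's Avatar algorithm): Monte Carlo estimation
# of attractor reachability in an asynchronous Boolean network. The model is
# a two-gene toggle switch: each gene's rule is the negation of the other,
# so (1, 0) and (0, 1) are the two stable states (attractors).
RULES = [lambda s: int(not s[1]),   # gene 0 is inhibited by gene 1
         lambda s: int(not s[0])]   # gene 1 is inhibited by gene 0

def reach_attractor(state, rng, max_steps=100):
    state = list(state)
    for _ in range(max_steps):
        i = rng.randrange(len(state))      # asynchronous: update one gene
        state[i] = RULES[i](state)
        if all(RULES[j](state) == state[j] for j in range(len(state))):
            return tuple(state)            # stable state (fixed point) reached
    return None                            # no fixed point found in time

def estimate_reachability(initial, runs=10000, seed=1):
    rng = random.Random(seed)
    counts = {}
    for _ in range(runs):
        a = reach_attractor(initial, rng)
        counts[a] = counts.get(a, 0) + 1
    return {a: c / runs for a, c in counts.items()}

# From (0, 0) the first asynchronous update decides the outcome, so each
# attractor should be reached with probability close to 1/2.
probs = estimate_reachability((0, 0))
```

Firefront, by contrast, propagates probability mass exhaustively over the reachable state space rather than sampling trajectories.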
Availability: Both algorithms are implemented as Perl scripts that can be
freely downloaded from http://compbio.igc.gulbenkian.pt/nmd/node/59 along with
Supplementary Material. Comment: 19 pages, 2 figures, 2 algorithms and 2 tables
Optimal modularity and memory capacity of neural reservoirs
The neural network is a powerful computing framework that has been exploited
by biological evolution and by humans for solving diverse problems. Although
the computational capabilities of neural networks are determined by their
structure, the current understanding of the relationship between a neural
network's architecture and its function is still primitive. Here we reveal
that a neural network's modular architecture plays a vital role in determining
the neural dynamics and memory performance of networks of threshold neurons. In
particular, we demonstrate that there exists an optimal modularity for memory
performance, where a balance between local cohesion and global connectivity is
established, allowing optimally modular networks to remember longer. Our
results suggest that insights from dynamical analysis of neural networks and
information spreading processes can be leveraged to better design neural
networks and may shed light on the brain's modular organization.
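The construction the abstract studies can be sketched as follows. All parameters here (module count, degree, threshold, the mixing parameter mu) are illustrative assumptions, not the paper's values: each neuron keeps a fraction mu of its links inside its own module, so mu tunes the balance between local cohesion and global connectivity, and the binary threshold dynamics are then iterated deterministically.

```python
import numpy as np

# Illustrative sketch (parameters are assumptions, not the paper's): a network
# of binary threshold neurons with tunable modularity. Each neuron has k
# incoming links; a fraction mu of them stay within its own module, the rest
# target other modules. Dynamics: s(t+1) = step(W @ s(t) - theta).
def modular_weights(N=100, modules=4, k=10, mu=0.8, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(modules), N // modules)  # module of each neuron
    W = np.zeros((N, N))
    for i in range(N):
        same = np.where(labels == labels[i])[0]
        other = np.where(labels != labels[i])[0]
        n_in = int(round(mu * k))                         # within-module links
        for j in rng.choice(same[same != i], n_in, replace=False):
            W[i, j] = rng.normal()
        for j in rng.choice(other, k - n_in, replace=False):
            W[i, j] = rng.normal()
    return W, labels

def step(W, s, theta=0.0):
    return (W @ s > theta).astype(float)   # binary threshold update

W, labels = modular_weights()
s = np.zeros(100)
s[:10] = 1.0                               # stimulate part of one module
for _ in range(20):
    s = step(W, s)                         # let the reservoir evolve
```

Memory performance would then be assessed by how long an input's imprint survives in the state trajectory as mu is swept; this sketch only sets up the modular reservoir and its dynamics.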
Synthesizing attractors of Hindmarsh-Rose neuronal systems
In this paper, a periodic parameter-switching scheme is applied to the
Hindmarsh-Rose neuronal system to synthesize certain attractors. We show
numerically, via computer simulations, that the synthesized attractor belongs
to the class of all admissible attractors of the Hindmarsh-Rose system and
matches the averaged attractor obtained when the control parameter is
replaced with the average of the switched parameter values.
This feature suggests that living beings can maintain vital behavior even
while a control parameter switches, as long as the resulting dynamics remain
suitable for the given environment. Comment: published in Nonlinear Dynamics
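The switching scheme can be sketched on the standard three-variable Hindmarsh-Rose equations. The concrete choices below (switching the external current I between two values, the switching period, and the integration settings) are illustrative assumptions, not the paper's protocol: the same system is integrated once with a periodically switched I and once with its average.

```python
import numpy as np

# Minimal sketch of the switching idea (parameter values are illustrative
# assumptions, not the paper's): the three-variable Hindmarsh-Rose model
#   x' = y + 3x^2 - x^3 - z + I
#   y' = 1 - 5x^2 - y
#   z' = r * (s * (x - xR) - z)
# integrated by forward Euler, with the external current I supplied as a
# function of the time step so it can be switched periodically.
def hindmarsh_rose(I_of_t, T=200_000, dt=0.01, r=0.005, s=4.0, xR=-1.6):
    x, y, z = -1.0, 0.0, 0.0
    xs = np.empty(T)                       # record the membrane variable x
    for t in range(T):
        I = I_of_t(t)
        dx = y + 3 * x**2 - x**3 - z + I
        dy = 1 - 5 * x**2 - y
        dz = r * (s * (x - xR) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[t] = x
    return xs

I1, I2, period = 2.0, 4.0, 50              # switch I every 50 steps
switched = hindmarsh_rose(lambda t: I1 if (t // period) % 2 == 0 else I2)
averaged = hindmarsh_rose(lambda t: (I1 + I2) / 2)
```

Comparing the long-run statistics (or phase portraits) of `switched` and `averaged` is the kind of check the paper's graphical simulations perform; for fast enough switching the two trajectories are expected to trace out matching attractors.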