Random Recurrent Neural Networks Dynamics
This paper is a review dealing with the study of large-size random recurrent neural networks. The connection weights are drawn according to a probability law, and it is possible to predict the network dynamics at a macroscopic scale using an averaging principle. After an introductory section, section 1 reviews the various models from the points of view of single-neuron dynamics and global network dynamics. A summary of notations, which is quite helpful for the sequel, is presented. In section 2, the mean-field dynamics is developed.
The probability distribution characterizing the global dynamics is computed. In section 3, some applications of mean-field theory to the prediction of the chaotic regime for Analog Formal Random Recurrent Neural Networks (AFRRNN) are presented. The case of an AFRRNN with a homogeneous population of neurons is studied in section 4. A two-population model is then studied in section 5, and the occurrence of cyclo-stationary chaos is exhibited using the results of \cite{Dauce01}. In section 6, insight into the application of mean-field theory to integrate-and-fire (IF) networks is given using the results of \cite{BrunelHakim99}.
Comment: Review paper, 36 pages, 5 figures.
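As a concrete aside, the averaging principle above can be checked numerically. The following minimal sketch is not taken from the review: the discrete-time update x(t+1) = tanh(J x(t)), the Gaussian weight law J_ij ~ N(0, g^2/N), and all parameter values are illustrative assumptions. It compares the empirical variance of the local fields with the self-consistent mean-field value.

```python
import numpy as np

# Sketch only: discrete-time analog network x(t+1) = tanh(J x(t)) with
# i.i.d. Gaussian weights J_ij ~ N(0, g^2/N); the review's models may differ.
rng = np.random.default_rng(0)
N, g, T = 2000, 1.5, 200
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
x = rng.uniform(-1.0, 1.0, size=N)
for _ in range(T):
    x = np.tanh(J @ x)

# Empirical variance of the local fields h_i = sum_j J_ij x_j.
q_emp = (J @ x).var()

# Mean-field self-consistency (averaging principle): the field is
# approximately Gaussian with variance q solving q = g^2 E_z[tanh(sqrt(q) z)^2].
z = rng.standard_normal(100_000)
q = 1.0
for _ in range(200):
    q = g**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2)

print(f"empirical field variance ~ {q_emp:.3f}, mean-field q ~ {q:.3f}")
```

For a gain above the critical value, the two numbers should agree closely at large N, which is exactly the macroscopic predictability the averaging principle provides.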
Dynamics of Interacting Neural Networks
The dynamics of interacting perceptrons is solved analytically. For a directed flow of information, the system runs into a state which has a higher symmetry than the topology of the model. A symmetry-breaking phase transition is found with increasing learning rate. In addition, it is shown that a system of interacting perceptrons trained on the history of its minority decisions develops a good strategy for the problem of adaptive competition known as the Bar Problem or Minority Game.
Comment: 9 pages, 3 figures; typos corrected, content reorganized.
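To make the minority-game setting concrete, here is a minimal sketch. It is not the paper's training rule: the number of agents, the history window, the learning rate, and the choice to update only losing agents are all assumptions.

```python
import numpy as np

# Sketch: K perceptron agents share a window of the last M minority
# decisions; each outputs sign(w_i . h). Agents on the majority side
# take a perceptron-style step toward the minority decision.
rng = np.random.default_rng(1)
K, M, T, eta = 31, 8, 5000, 0.05        # odd K: a strict minority always exists
w = rng.standard_normal((K, M))
h = rng.choice([-1.0, 1.0], size=M)     # history of past minority decisions

minority_sizes = []
for _ in range(T):
    s = np.sign(w @ h)
    s[s == 0] = 1.0
    minority = -np.sign(s.sum())        # the less-chosen side wins
    minority_sizes.append(np.sum(s == minority))
    w[s != minority] += eta * minority * h   # losers learn the winning target
    h = np.roll(h, 1)
    h[0] = minority                     # shift the new decision into the window

print("mean minority size:", np.mean(minority_sizes), "of max", (K - 1) // 2)
```

A good collective strategy shows up as a mean minority size approaching the maximum (K - 1) / 2, i.e. little wasted capacity on the majority side.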
Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control
It is widely accepted that the complex dynamics characteristic of recurrent
neural circuits contributes in a fundamental manner to brain function. Progress
has been slow in understanding and exploiting the computational power of
recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work in a robust fashion in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit
chaotic dynamics can be tuned through a supervised learning rule to generate
locally stable neural patterns of activity that are both complex and robust to
noise. The outcome is a novel neural network regime that exhibits both
transiently stable and chaotic trajectories. We further show that the recurrent
learning rule dramatically increases the ability of RRNs to generate complex
spatiotemporal motor patterns, and accounts for recent experimental data
showing a decrease in neural variability in response to stimulus onset.
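The taming of chaos by a recurrent learning rule can be sketched along these lines. This is not the paper's exact procedure: the rate model, the use of recursive least squares (RLS) with a single shared inverse-correlation matrix, and all sizes and gains here are illustrative assumptions.

```python
import numpy as np

# Sketch: rate network tau dx/dt = -x + W r, r = tanh(x), with g > 1
# so the untrained dynamics are chaotic. An "innate" trajectory is
# recorded, then the recurrent weights are trained with RLS to make
# that trajectory locally stable (a simplified FORCE-style rule).
rng = np.random.default_rng(2)
N, g, dt, tau, T = 300, 1.6, 0.01, 0.1, 400
W = rng.normal(0.0, g / np.sqrt(N), (N, N))

def run(W, x0, steps):
    x, rs = x0.copy(), []
    for _ in range(steps):
        r = np.tanh(x)
        rs.append(r)
        x = x + dt / tau * (-x + W @ r)
    return np.array(rs)

x0 = rng.standard_normal(N)
target = run(W, x0, T)                       # innate trajectory to stabilize

P = np.eye(N)                                # shared inverse-correlation matrix
for epoch in range(10):
    x = x0 + 1e-3 * rng.standard_normal(N)   # noisy restarts during training
    for t in range(T):
        r = np.tanh(x)
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        W -= np.outer(r - target[t], k)      # RLS step toward the innate target
        x = x + dt / tau * (-x + W @ r)

replay = run(W, x0 + 1e-3 * rng.standard_normal(N), T)
print("mean deviation after perturbation:", np.abs(replay - target).mean())
```

If training succeeds, the perturbed replay stays close to the innate trajectory: the pattern is complex yet locally stable, while activity far from it can remain chaotic.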
Topology and Dynamics of Attractor Neural Networks: The Role of Loopiness
We derive an exact representation of the topological effect on the dynamics of sequence-processing neural networks within signal-to-noise analysis. A new network structure parameter, the loopiness coefficient, is introduced to quantitatively study the effect of loops on network dynamics. A large loopiness coefficient means a large probability of finding loops in the network. We develop recursive equations for the overlap parameters of neural networks in terms of the loopiness. We find that large loopiness increases the correlations among the network states at different times and eventually reduces the performance of neural networks. The theory is applied to several network topologies, including fully connected, densely connected random, densely connected regular, and densely connected small-world structures, where encouraging results are obtained.
Comment: 6 pages, 4 figures, comments are favored.
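The paper's loopiness coefficient is not reproduced here, but a crude proxy illustrates how loop density varies with topology: the sketch below counts closed directed paths of length three via the trace of the cubed adjacency matrix. All definitions and parameters in it are assumptions, not the paper's.

```python
import numpy as np

# Sketch: loop-density proxy. tr(A^L) counts closed directed paths of
# length L, so tr(A^L)/N measures short-loop density per node.
rng = np.random.default_rng(3)

def loop_density(A, L=3):
    return np.trace(np.linalg.matrix_power(A, L)) / A.shape[0]

def random_digraph(N, c):
    A = (rng.random((N, N)) < c).astype(float)
    np.fill_diagonal(A, 0.0)     # no self-connections
    return A

N = 500
for c in (0.1, 0.3, 1.0):        # densely connected random -> fully connected
    A = random_digraph(N, c)
    print(f"c = {c:.1f}: length-3 loops per node ~ {loop_density(A):.0f}")
```

The count grows rapidly with the connection probability c, matching the intuition that denser networks have a larger probability of containing loops.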
The Synthesis of Arbitrary Stable Dynamics in Non-linear Neural Networks II: Feedback and Universality
We wish to construct a realization theory of stable neural networks and use this theory to model the variety of stable dynamics apparent in natural data. Such a theory should have numerous applications to constructing specific artificial neural networks with desired dynamical behavior. The networks used in this theory should have well-understood dynamics yet be as diverse as possible to capture natural diversity.
In this article, I describe a parameterized family of higher-order, gradient-like neural networks which have known arbitrary equilibria with unstable manifolds of known, specified dimension. Moreover, any system with hyperbolic dynamics is conjugate to one of these systems in a neighborhood of the equilibrium points. Prior work on how to synthesize attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is either non-constructive, lacks generality, or has unspecified attracting equilibria.
More specifically, we construct a parameterized family of gradient-like neural networks with a simple feedback rule which will generate equilibrium points with a set of unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits.
Air Force Office of Scientific Research (90-0128)
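The core idea of prescribing the dimension of an unstable manifold can be illustrated with a toy gradient system. This is only a sketch of the principle, not the paper's higher-order construction: the quadratic potential and the diagonal form are assumptions.

```python
import numpy as np

# Sketch: gradient system dx/dt = -grad V(x) with V(x) = 0.5 x^T D x.
# Each negative diagonal entry of D makes the flow expanding along that
# axis, so the equilibrium at the origin has an unstable manifold whose
# dimension equals the number of negative entries. V decreases strictly
# along non-equilibrium trajectories: dV/dt = -|grad V|^2 < 0.
D = np.diag([1.0, 1.0, -1.0])   # one negative entry -> 1-D unstable manifold

def flow(x, dt=0.01, steps=1000):
    for _ in range(steps):
        x = x - dt * (D @ x)    # Euler step of dx/dt = -D x
    return x

x = flow(np.array([0.5, 0.5, 1e-6]))
print(x)   # the two stable coordinates decay; the third one grows
```

Replacing D by the Hessian of a richer potential with several critical points gives, in the same spirit, systems whose equilibria each carry an unstable manifold of chosen dimension.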