Effect of dilution in asymmetric recurrent neural networks
We use numerical simulation to study the possible limit behaviors of
synchronous discrete-time deterministic recurrent neural networks composed of N
binary neurons, as a function of the network's level of dilution and asymmetry.
The network dilution measures the fraction of neuron pairs that are
connected, and the network asymmetry measures to what extent the underlying
connectivity matrix is asymmetric. For each given neural network, we study the
dynamical evolution of all the different initial conditions, thus
characterizing the full dynamical landscape without imposing any learning rule.
Because of the deterministic dynamics, each trajectory converges to an
attractor, which can be either a fixed point or a limit cycle. These attractors
form the set of all the possible limit behaviors of the neural network. For
each network, we then determine the convergence times, the limit cycles'
length, the number of attractors, and the sizes of the attractors' basin. We
show that there are two network structures that maximize the number of possible
limit behaviors. The first optimal network structure is fully connected and
symmetric. By contrast, the second optimal network structure is highly
sparse and asymmetric. The latter optimum is similar to what is observed in
various biological neuronal circuits. These observations lead us to
hypothesize that, independently of any particular learning model, an efficient and
effective biological network that stores a number of limit behaviors close to its
maximum capacity tends to develop a connectivity structure similar to one of
the optimal networks we found.
Comment: 31 pages, 5 figures
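As an illustration of the kind of experiment the abstract describes, the sketch below simulates the synchronous deterministic dynamics of a small diluted, asymmetric binary network and measures the convergence time and the length of the attractor it reaches. All concrete choices (N, the dilution value, the ±1 coding, the tie-breaking rule) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_to_attractor(J, x0, max_steps=10_000):
    """Run the synchronous deterministic dynamics x(t+1) = sign(J x(t))
    until a previously visited state recurs; return (transient length,
    limit-cycle length). A cycle length of 1 is a fixed point."""
    seen = {}
    x = x0.copy()
    for t in range(max_steps):
        key = x.tobytes()
        if key in seen:
            return seen[key], t - seen[key]  # convergence time, cycle length
        seen[key] = t
        h = J @ x
        x = np.where(h >= 0, 1, -1)  # binary +/-1 neurons; ties broken to +1
    raise RuntimeError("no recurrence within max_steps")

rng = np.random.default_rng(0)
N, dilution = 8, 0.5                    # illustrative parameters
mask = rng.random((N, N)) < dilution    # fraction of connected pairs
J = rng.standard_normal((N, N)) * mask  # fully asymmetric: J_ij independent of J_ji
np.fill_diagonal(J, 0.0)

x0 = rng.choice([-1, 1], size=N)
transient, cycle = simulate_to_attractor(J, x0)
print(transient, cycle)
```

Enumerating all 2^N initial conditions with this routine yields the full dynamical landscape (number of attractors and their basin sizes) that the paper characterizes.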
Information driven self-organization of complex robotic behaviors
Information theory is a powerful tool to express principles to drive
autonomous systems because it is domain invariant and allows for an intuitive
interpretation. This paper studies the use of the predictive information (PI),
also called excess entropy or effective measure complexity, of the sensorimotor
process as a driving force to generate behavior. We study nonlinear and
nonstationary systems and introduce the time-local predictive information
(TiPI), which allows us to derive exact results together with explicit update
rules for the parameters of the controller in the dynamical systems framework.
In this way the information principle, formulated at the level of behavior, is
translated to the dynamics of the synapses. We underpin our results with a
number of case studies with high-dimensional robotic systems. We demonstrate
spontaneous cooperativity in a complex physical system with decentralized
control. Moreover, a jointly controlled humanoid robot develops a high
behavioral variety depending on its physics and the environment it is
dynamically embedded into. The behavior can be decomposed into a succession of
low-dimensional modes that increasingly explore the behavior space. This is a
promising way to avoid the curse of dimensionality, which prevents learning
systems from scaling well.
Comment: 29 pages, 12 figures
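The predictive information of a process is the mutual information between its past and its future. A minimal sketch of a one-step plug-in estimator for a discrete sequence (not the paper's TiPI, which is defined on a time-local window of the sensorimotor process) might look like:

```python
import numpy as np

def predictive_information(seq, k=1):
    """One-step predictive information I(past; future) of a discrete
    sequence, in bits, via the empirical (plug-in) joint distribution."""
    past, future = seq[:-k], seq[k:]
    states = np.unique(seq)
    idx = {s: i for i, s in enumerate(states)}
    joint = np.zeros((len(states), len(states)))
    for p, f in zip(past, future):
        joint[idx[p], idx[f]] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over the past
    py = joint.sum(axis=0, keepdims=True)   # marginal over the future
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
random_seq = rng.integers(0, 2, 10_000)  # i.i.d. bits: PI near 0
alternating = np.tile([0, 1], 5_000)     # deterministic 0101...: PI near 1 bit
print(predictive_information(random_seq), predictive_information(alternating))
```

Maximizing such a quantity over controller parameters is the driving principle the paper formalizes; the plug-in estimator above is only the simplest stand-in.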
Random Recurrent Neural Networks Dynamics
This paper reviews the study of large-size random recurrent
neural networks. The connection weights are selected according to a probability
law, and the network dynamics can be predicted at a macroscopic scale
using an averaging principle. After the introduction, section 1
reviews the various models from the points of view of the single neuron
dynamics and of the global network dynamics. A summary of notations is
presented, which is quite helpful for the sequel. In section 2, mean-field
dynamics is developed.
The probability distribution characterizing global dynamics is computed. In
section 3, some applications of mean-field theory to the prediction of chaotic
regime for Analog Formal Random Recurrent Neural Networks (AFRRNN) are
displayed. The case of AFRRNN with a homogeneous population of neurons is
studied in section 4. Then, a two-population model is studied in section 5. The
occurrence of a cyclo-stationary chaos is exhibited using the results of
\cite{Dauce01}. In section 6, an insight into the application of mean-field
theory to integrate-and-fire (IF) networks is given using the results of \cite{BrunelHakim99}.
Comment: Review paper, 36 pages, 5 figures
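A minimal sketch of the kind of model the review treats: a discrete-time analog network with i.i.d. Gaussian weights of variance g^2/N. The averaging principle implies that macroscopic observables, such as the population variance of the activity, are self-averaging across weight draws. The model details here (tanh transfer function, zero threshold) are illustrative, not the review's exact AFRRNN definition.

```python
import numpy as np

def simulate_random_rnn(N=500, g=2.0, steps=200, seed=0):
    """Iterate x(t+1) = tanh(J x(t)) with i.i.d. Gaussian weights
    J_ij ~ N(0, g^2/N); return the final activity vector."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((N, N)) * g / np.sqrt(N)
    x = rng.standard_normal(N)
    for _ in range(steps):
        x = np.tanh(J @ x)
    return x

# Self-averaging: the population variance of the activity should
# concentrate across independent weight draws for large N.
vars_ = [simulate_random_rnn(seed=s).var() for s in range(3)]
print(vars_)
```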
Robust synchronization of a class of coupled delayed networks with multiple stochastic disturbances: The continuous-time case
In this paper, the robust synchronization problem is investigated for a new class of continuous-time complex networks that involve parameter uncertainties, time-varying delays, constant and delayed couplings, as well as multiple stochastic
disturbances. The norm-bounded uncertainties exist in all the network parameters after decoupling, and the stochastic disturbances are assumed to be Brownian motions that act on the constant coupling term, the delayed coupling term, and the overall network dynamics. Such multiple stochastic disturbances reflect more realistically the dynamical behavior of a coupled complex network in a noisy environment. By combining the Lyapunov functional method, robust analysis tools, stochastic analysis techniques, and the properties of the Kronecker product, we derive several delay-dependent sufficient conditions that ensure that the coupled complex network is globally robustly synchronized in the mean square for all admissible parameter uncertainties. The criteria obtained in this paper are in the form of linear matrix inequalities (LMIs), whose solutions can be easily computed using standard numerical software. The main results are shown to be general enough to cover many existing ones reported in the literature. Simulation examples are presented to demonstrate the feasibility and applicability of the proposed results.
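The mean-square synchronization notion used here can be illustrated with a toy Euler-Maruyama simulation of diffusively coupled stochastic nodes. This is a scalar stand-in without delays or parameter uncertainties; the bistable node dynamics, noise level, and coupling strengths are all assumptions chosen for illustration, not the paper's model.

```python
import numpy as np

def coupled_network_msd(c, sigma=0.1, T=5000, dt=1e-3, n_nodes=3, seed=0):
    """Euler-Maruyama simulation of n scalar nodes with bistable local
    dynamics x - x^3, diffusive coupling of strength c over a complete
    graph, and independent additive Brownian disturbances. Returns the
    final mean-square deviation from the network average."""
    rng = np.random.default_rng(seed)
    # Laplacian-like coupling matrix: zero row sums, complete graph
    L = np.ones((n_nodes, n_nodes)) - n_nodes * np.eye(n_nodes)
    x = rng.standard_normal(n_nodes) * 2  # spread-out initial states
    for _ in range(T):
        dB = rng.standard_normal(n_nodes) * np.sqrt(dt)
        x = x + ((x - x**3) + c * (L @ x)) * dt + sigma * dB
    e = x - x.mean()
    return float(np.mean(e**2))

# Uncoupled nodes settle into different wells; strong coupling synchronizes
# them in the mean square despite the stochastic disturbances.
print(coupled_network_msd(0.0), coupled_network_msd(5.0))
```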
New synchronization criteria for an array of neural networks with hybrid coupling and time-varying delays
This paper is concerned with the global exponential synchronization of an array of hybrid coupled neural networks with time-varying leakage delay and discrete and distributed delays. Applying a novel Lyapunov functional and the properties of the outer coupling matrices of the neural networks, sufficient conditions are obtained for the global exponential synchronization of the system. The derived synchronization criteria are closely related to the time-varying delays and the coupling structure of the networks, and the maximal allowable upper bounds of the time-varying delays that guarantee global synchronization can be obtained. The method we adopt in this paper differs from the commonly used linear matrix inequality (LMI) technique, and our synchronization conditions are new and easy to check in comparison with the previously reported LMI-based ones. Some examples are given to show the effectiveness of the obtained theoretical results.
On the number of limit cycles in asymmetric neural networks
Understanding the mechanisms underlying the functioning of
complexly interconnected networks is one of the main goals of
neuroscience. In this work, we investigate how the structure of recurrent
connectivity influences the ability of a network to have storable patterns and
in particular limit cycles, by modeling a recurrent neural network with
McCulloch-Pitts neurons as a content-addressable memory system.
A key role in such models is played by the connectivity matrix, which, for
neural networks, corresponds to a schematic representation of the "connectome":
the set of chemical synapses and electrical junctions among neurons. The shape
of the recurrent connectivity matrix plays a crucial role in the process of
storing memories. This relation was already exposed by the work of Tanaka
and Edwards, who present a theoretical approach to evaluating the mean number
of fixed points in a fully connected model in the thermodynamic limit.
Interestingly, further studies on the same kind of model but with a finite
number of nodes have shown how the symmetry parameter influences the types of
attractors featured in the system. Our study extends the work of Tanaka and
Edwards by providing a theoretical evaluation of the mean number of attractors
of any given length for different degrees of symmetry in the connectivity
matrices.
Comment: 35 pages, 12 figures
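A degree of symmetry in a random connectivity matrix, as studied here, can be parameterized by mixing symmetric and antisymmetric Gaussian parts. The sketch below constructs matrices with a tunable symmetry parameter eta = E[J_ij J_ji] / E[J_ij^2]; the construction is a standard one, and the normalization here is an assumption, not necessarily the paper's.

```python
import numpy as np

def random_coupling(N, eta, rng):
    """Random Gaussian coupling matrix with symmetry parameter eta:
    eta = 1 gives a symmetric matrix, eta = 0 uncorrelated J_ij and J_ji,
    eta = -1 an antisymmetric matrix. Off-diagonal entries have unit
    variance; the diagonal is set to zero (no self-coupling)."""
    X = rng.standard_normal((N, N))
    S = (X + X.T) / np.sqrt(2)  # symmetric part, unit variance off-diagonal
    A = (X - X.T) / np.sqrt(2)  # antisymmetric part, unit variance off-diagonal
    J = np.sqrt((1 + eta) / 2) * S + np.sqrt((1 - eta) / 2) * A
    np.fill_diagonal(J, 0.0)
    return J

rng = np.random.default_rng(2)
for eta in (-1.0, 0.0, 0.5, 1.0):
    J = random_coupling(400, eta, rng)
    iu = np.triu_indices(400, k=1)
    corr = np.mean(J[iu] * J.T[iu]) / np.mean(J[iu] ** 2)
    print(eta, round(corr, 2))  # empirical correlation tracks eta
```

Sweeping eta in such a family is how one can probe, numerically, the dependence of the attractor statistics on the degree of symmetry.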
Collective stability of networks of winner-take-all circuits
The neocortex has a remarkably uniform neuronal organization, suggesting that
common principles of processing are employed throughout its extent. In
particular, the patterns of connectivity observed in the superficial layers of
the visual cortex are consistent with the recurrent excitation and inhibitory
feedback required for cooperative-competitive circuits such as the soft
winner-take-all (WTA). WTA circuits offer interesting computational properties
such as selective amplification, signal restoration, and decision making. However,
these properties depend on the signal gain derived from positive feedback, and
so there is a critical trade-off between providing feedback strong enough to
support sophisticated computations and maintaining overall circuit
stability. We consider the question of how to reason about stability in very
large distributed networks of such circuits. We approach this problem by
approximating the regular cortical architecture as many interconnected
cooperative-competitive modules. We demonstrate that by properly understanding
the behavior of this small computational module, one can reason over the
stability and convergence of very large networks composed of these modules. We
obtain parameter ranges in which the WTA circuit operates in a high-gain
regime, is stable, and can be aggregated arbitrarily to form large stable
networks. We use nonlinear Contraction Theory to establish conditions for
stability in the fully nonlinear case, and verify these solutions using
numerical simulations. The derived bounds allow modes of operation in which the
WTA network is multi-stable and exhibits state-dependent persistent activity.
Our approach is sufficiently general to reason systematically about the
stability of any network, biological or technological, composed of networks of
small modules that express competition through shared inhibition.
Comment: 7 figures
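A soft WTA of the kind described can be sketched as a rate model with self-excitation and shared global inhibition: with inhibition stronger than the excitatory gain, the unit receiving the largest input suppresses the others. The specific gains and the rectified-linear dynamics below are illustrative assumptions, not the paper's circuit or its Contraction Theory analysis.

```python
import numpy as np

def soft_wta(inputs, alpha=1.2, beta=2.0, steps=500, dt=0.05):
    """Rate-based soft winner-take-all: each unit excites itself with gain
    alpha, and all units receive a shared inhibitory signal proportional
    (gain beta) to the total activity. Rectification keeps rates >= 0."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        inhibition = beta * x.sum()
        drive = inputs + alpha * x - inhibition
        x = x + dt * (-x + np.maximum(drive, 0.0))
    return x

rates = soft_wta(np.array([1.0, 1.1, 0.9]))
print(rates)  # the unit with the largest input dominates; the others shut off
```

The trade-off the abstract describes is visible here: raising alpha sharpens selective amplification, but with alpha too large relative to 1 + beta, the winner's positive feedback destabilizes the circuit.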