Statistical Mechanics of Recurrent Neural Networks I. Statics
A lecture-notes-style review of the equilibrium statistical mechanics of
recurrent neural networks with discrete and continuous neurons (e.g., Ising
spins, coupled oscillators). To be published in the Handbook of Biological Physics
(North-Holland). Accompanied by a similar review (part II) dealing with the
dynamics.
Comment: 49 pages, LaTeX
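As a pointer to what "statics" means here, the sketch below (illustrative, not taken from the review itself) writes down the Ising-type energy function of a recurrent network with Hebbian couplings, the kind of equilibrium object such a review analyzes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hebbian couplings storing P random patterns in an N-neuron Ising network,
# a standard example of the models such a review covers (details here are
# illustrative, not taken from the text).
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def energy(s):
    """Equilibrium energy H(s) = -1/2 * sum_ij J_ij s_i s_j for Ising spins."""
    return -0.5 * s @ J @ s

# Stored patterns sit near local minima of H, which is what the equilibrium
# ("statics") analysis characterizes.
print(energy(patterns[0]))
```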
Random Recurrent Neural Networks Dynamics
This paper is a review of large-size random recurrent neural networks. The
connection weights are selected according to a probability law, and it is
possible to predict the network dynamics at a macroscopic scale
using an averaging principle. After a first introductory section, section 1
reviews the various models from the viewpoints of single-neuron dynamics and of
global network dynamics. A summary of the notation, helpful for the sequel, is
presented. In section 2, the mean-field dynamics is developed.
The probability distribution characterizing the global dynamics is computed. In
section 3, some applications of mean-field theory to the prediction of the
chaotic regime of Analog Formal Random Recurrent Neural Networks (AFRRNN) are
presented. The case of an AFRRNN with a homogeneous population of neurons is
studied in section 4. A two-population model is then studied in section 5, and
the occurrence of cyclo-stationary chaos is exhibited using the results of
\cite{Dauce01}. In section 6, an insight into the application of mean-field
theory to integrate-and-fire (IF) networks is given using the results of
\cite{BrunelHakim99}.
Comment: Review paper, 36 pages, 5 figures
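To make the averaging principle concrete, here is a minimal sketch (with illustrative parameters and the standard random-matrix scaling, not the review's specific models) of a random recurrent rate network whose macroscopic observables come out nearly identical across weight realizations, the self-averaging property that mean-field theory exploits:

```python
import numpy as np

def simulate(N=500, g=1.5, T=100.0, dt=0.1, seed=0):
    """Euler-integrate the classic random rate network x' = -x + J tanh(x),
    with i.i.d. weights J_ij ~ N(0, g^2/N).  Illustrative of the setting in
    which an averaging principle holds; not the paper's specific models."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.standard_normal(N)
    for _ in range(int(T / dt)):
        x += dt * (-x + J @ np.tanh(x))
    return np.mean(np.tanh(x) ** 2)  # a macroscopic observable

# Different weight realizations give nearly the same macroscopic value:
# the self-averaging behavior that mean-field predictions rely on.
print([round(simulate(seed=s), 3) for s in range(3)])
```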
Spike trains statistics in Integrate and Fire Models: exact results
We briefly review and highlight the consequences of rigorous and exact
results obtained in \cite{cessac:10}, characterizing the statistics of spike
trains in a network of leaky Integrate-and-Fire neurons, where time is discrete
and where neurons are subject to noise, without restriction on the connectivity
of the synaptic weights. The main result is that spike-train statistics are
characterized by a Gibbs distribution, whose potential is explicitly
computable. This establishes, on the one hand, a rigorous ground for the current
investigations attempting to characterize real spike-train data with Gibbs
distributions, such as the Ising-like distribution, using the maximal entropy
principle. However, it transpires from the present analysis that the Ising
model might be a rather weak approximation. Indeed, the Gibbs potential (the
formal "Hamiltonian") is the log of the so-called "conditional intensity" (the
probability that a neuron fires given the past of the whole network). But, in
the present example, this probability has an infinite memory, and the
corresponding process is non-Markovian (equivalently, the Gibbs potential has infinite
range). Moreover, causality implies that the conditional intensity does not
depend on the state of the neurons at the \textit{same time}, ruling out the
Ising model as a candidate for an exact characterization of spike-train
statistics. However, Markovian approximations can be proposed whose degree of
approximation can be rigorously controlled. In this setting, the Ising model
appears as the "next step" after the Bernoulli model (independent neurons)
since it introduces spatial pairwise correlations, but not time correlations.
The range of validity of this approximation is discussed, together with
possible approaches for introducing time correlations, and with algorithmic
extensions.
Comment: 6 pages, submitted to the NeuroComp2010 conference
(http://2010.neurocomp.fr/); Bruno Cessac,
http://www-sop.inria.fr/neuromathcomp
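As a rough illustration of the key object, the sketch below evaluates a conditional intensity for a single discrete-time LIF neuron with Gaussian noise; the functional form is a simplified stand-in, not the exact network potential of \cite{cessac:10}:

```python
from math import erf, sqrt

def conditional_intensity(v, theta=1.0, sigma=0.2):
    """P(spike | past) for a discrete-time LIF neuron whose deterministic
    membrane potential v is perturbed by Gaussian noise of s.d. sigma: the
    neuron fires when v + noise crosses the threshold theta.  Simplified
    stand-in; the exact potential in \cite{cessac:10} is richer."""
    return 0.5 * (1.0 - erf((theta - v) / (sigma * sqrt(2.0))))

# The Gibbs potential of the spike train is the log of this quantity,
# evaluated along the (in general unbounded) history of the whole network.
print(conditional_intensity(0.9))
```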
How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation
This paper addresses two questions in the context of neuronal networks
dynamics, using methods from dynamical systems theory and statistical physics:
(i) how to characterize the statistical properties of the sequences of action
potentials ("spike trains") produced by neuronal networks, and (ii) what are
the effects of synaptic plasticity on these statistics? We introduce a
framework in which spike trains are associated with a coding of
membrane-potential trajectories and, in important explicit examples (the
so-called gIF models), actually constitute a symbolic coding. On this basis, we
use the
thermodynamic formalism from ergodic theory to show how Gibbs distributions are
natural probability measures to describe the statistics of spike trains, given
the empirical averages of prescribed quantities. As a second result, we show
that Gibbs distributions naturally arise when considering "slow" synaptic
plasticity rules, in which the characteristic time for synapse adaptation is
much longer than the characteristic time of the neuron dynamics.
Comment: 39 pages, 3 figures
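The two-timescale regime can be sketched as follows (all model details are illustrative assumptions, not the gIF construction of the paper; only the timescale separation matters): fast neuron dynamics are iterated to estimate empirical averages between slow, Hebbian-like weight updates:

```python
import numpy as np

def slow_plasticity(epochs=50, steps=1000, eps=1e-3, N=20, seed=0):
    """Two-timescale loop: fast neuron dynamics are iterated to estimate
    empirical firing averages, then a slow Hebbian-like weight update is
    applied.  Illustrative stand-in for the 'slow adaptation' regime."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    for _ in range(epochs):                       # slow: synapse adaptation
        x = rng.standard_normal(N)
        rates = np.zeros(N)
        for _ in range(steps):                    # fast: neuron dynamics
            x = np.tanh(W @ x) + 0.1 * rng.standard_normal(N)
            rates += x > 0.5
        W += eps * np.outer(rates / steps, rates / steps)
    return W

print(slow_plasticity().mean())
```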
The complexity of dynamics in small neural circuits
Mean-field theory is a powerful tool for studying large neural networks.
However, when the system is composed of a few neurons, macroscopic differences
between the mean-field approximation and the real behavior of the network can
arise. Here we study the dynamics of a small firing-rate network
with excitatory and inhibitory populations, in terms of local and global
bifurcations of the neural activity. Our approach is analytically tractable in
many respects, and sheds new light on the finite-size effects of the system. In
particular, we focus on the formation of multiple branching solutions of the
neural equations through spontaneous symmetry breaking, since this phenomenon
considerably increases the complexity of the dynamical behavior of the network.
For these reasons, branching points may reveal important mechanisms through
which neurons interact and process information, which are not accounted for by
the mean-field approximation.
Comment: 34 pages, 11 figures. Supplementary materials added, colors of
figures 8 and 9 fixed, results unchanged
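For intuition, here is a minimal Wilson-Cowan-style stand-in (parameters are illustrative, not taken from the paper) for a two-population excitatory/inhibitory rate network; sweeping a coupling moves the equilibrium, and scanning initial conditions as well would expose the coexisting branches born at the local bifurcations the paper tracks:

```python
import numpy as np

def fixed_point(w_ee, w_ei=4.0, w_ie=6.0, w_ii=1.0, I_e=0.5, I_i=0.3,
                dt=0.01, steps=20000):
    """Relax a two-population (excitatory/inhibitory) firing-rate model,
    r' = -r + f(W r + I), to an equilibrium.  A Wilson-Cowan-style stand-in
    with illustrative parameters, not the paper's specific network."""
    f = lambda u: 1.0 / (1.0 + np.exp(-u))
    rE = rI = 0.1
    for _ in range(steps):
        rE += dt * (-rE + f(w_ee * rE - w_ei * rI + I_e))
        rI += dt * (-rI + f(w_ie * rE - w_ii * rI + I_i))
    return rE, rI

# One equilibrium per parameter value from this initial condition; repeating
# the relaxation from many initial conditions reveals coexisting branches.
for w_ee in (2.0, 6.0, 10.0):
    print(w_ee, fixed_point(w_ee))
```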
Neutral theory and scale-free neural dynamics
Avalanches of electrochemical activity in brain networks have been
empirically reported to obey scale-invariant behavior, characterized by
power-law distributions up to some upper cut-off, both in vitro and in vivo.
Elucidating whether such scaling laws stem from the underlying neural dynamics
operating at the edge of a phase transition is a fascinating possibility, as
systems poised at criticality have been argued to exhibit a number of important
functional advantages. Here we employ a well-known model for neural dynamics
with synaptic plasticity to elucidate an alternative scenario in which
neuronal avalanches can coexist, overlapping in time, but still remaining
scale-free. Remarkably, their scale invariance stems neither from underlying
criticality nor from self-organization at the edge of a continuous phase transition.
Instead, it emerges from the fact that perturbations to the system exhibit a
neutral drift, guided by demographic fluctuations, with respect to endogenous
spontaneous activity. Such neutral dynamics, similar to that of neutral
theories of population genetics, implies marginal propagation of activity,
characterized by power-law-distributed causal avalanches. Importantly, our
results underline the importance of considering causal information (which
neuron triggers the firing of which) to properly estimate the statistics of
avalanches of neural activity. We discuss the implications of these findings
both for modeling and for elucidating experimental observations, as well as
their possible consequences for dynamics and information processing in real
neural networks.
Comment: Main text: 8 pages, 3 figures. Supplementary information: 5 pages, 4
figures
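A toy illustration of causally-tracked avalanches (a plain branching process, not the paper's plasticity model): with mean offspring 1, propagation is marginal ("neutral") and avalanche sizes come out power-law distributed:

```python
import numpy as np

def causal_avalanche_sizes(n=5000, m=1.0, cap=10_000, seed=0):
    """Sizes of causally-tracked avalanches in a plain branching process:
    each active unit triggers Poisson(m) descendants.  At m = 1 propagation
    is marginal ('neutral') and sizes are power-law distributed.  A toy for
    the causal bookkeeping, not the paper's plasticity model."""
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(n):
        active, size = 1, 1
        while active and size < cap:
            active = rng.poisson(m * active)
            size += active
        sizes.append(size)
    return np.array(sizes)

sizes = causal_avalanche_sizes()
# Heavy tail at m = 1; time-binned estimates that merge overlapping
# avalanches would distort this statistic.
print(sizes.mean(), np.percentile(sizes, 99))
```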