Intrinsically-generated fluctuating activity in excitatory-inhibitory networks
Recurrent networks of non-linear units display a variety of dynamical regimes
depending on the structure of their synaptic connectivity. A particularly
remarkable phenomenon is the appearance of strongly fluctuating, chaotic
activity in networks of deterministic, but randomly connected, rate units. How
this type of intrinsically generated fluctuation appears in more realistic
networks of spiking neurons has been a long-standing question. To ease the
comparison between rate and spiking networks, recent works investigated the
dynamical regimes of randomly-connected rate networks with segregated
excitatory and inhibitory populations, and firing rates constrained to be
positive. These works derived general dynamical mean field (DMF) equations
describing the fluctuating dynamics, but solved these equations only in the
case of purely inhibitory networks. Using a simplified excitatory-inhibitory
architecture in which the DMF equations are more tractable, here we show
that the presence of excitation qualitatively modifies the fluctuating activity
compared to purely inhibitory networks. In the presence of excitation,
intrinsically generated fluctuations induce a strong increase in mean firing
rates, a phenomenon that is much weaker in purely inhibitory networks.
Excitation moreover induces two distinct fluctuating regimes: for moderate
overall coupling, recurrent inhibition is sufficient to stabilize fluctuations;
for strong coupling, firing rates are stabilized solely by the upper bound
imposed on activity, even if inhibition is stronger than excitation. These
results extend to more general network architectures, and to rate networks
receiving noisy inputs mimicking spiking activity. Finally, we show that
signatures of the second dynamical regime appear in networks of
integrate-and-fire neurons.
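The fluctuating regimes described above can be explored numerically. The sketch below simulates a randomly connected excitatory-inhibitory rate network with positive, bounded rates; all sizes, coupling statistics, and the transfer function are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network sizes and coupling statistics, chosen only for illustration.
N_E, N_I = 400, 100
N = N_E + N_I
g = 5.0  # overall coupling strength
J = np.empty((N, N))
J[:, :N_E] = rng.normal(1.0, 1.0, (N, N_E)) * g / np.sqrt(N)   # excitatory columns
J[:, N_E:] = rng.normal(-4.0, 1.0, (N, N_I)) * g / np.sqrt(N)  # inhibitory columns

def phi(x, r_max=2.0):
    """Positive, saturating transfer function: rates bounded in (0, r_max)."""
    return 0.5 * r_max * (1.0 + np.tanh(x))

# Euler integration of tau * dx/dt = -x + J @ phi(x)
tau, dt, T = 1.0, 0.05, 1500
x = rng.normal(0.0, 1.0, N)
rates = np.empty((T, N))
for t in range(T):
    x += (dt / tau) * (-x + J @ phi(x))
    rates[t] = phi(x)

mean_rate = rates[T // 2:].mean()                 # population-averaged firing rate
temporal_var = rates[T // 2:].var(axis=0).mean()  # mean temporal variance per unit
```

Sweeping `g` in such a toy model is one way to probe the transition from inhibition-stabilized fluctuations to the regime where only the rate bound `r_max` contains the activity.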
A constructive mean field analysis of multi-population neural networks with random synaptic weights and stochastic inputs
We deal with the problem of bridging the gap between two scales in neuronal
modeling. At the first (microscopic) scale, neurons are considered individually
and their behavior described by stochastic differential equations that govern
the time variations of their membrane potentials. They are coupled by synaptic
connections acting on their resulting activity, a nonlinear function of their
membrane potential. At the second (mesoscopic) scale, interacting populations
of neurons are described individually by similar equations. The equations
describing the dynamical and the stationary mean field behaviors are considered
as functional equations on a set of stochastic processes. Using this new point
of view allows us to prove that these equations are well-posed on any finite
time interval and to provide a constructive method for effectively computing
their unique solution. This method is proved to converge to the unique solution
and we characterize its complexity and convergence rate. We also provide
partial results for the stationary problem on infinite time intervals. These
results shed some new light on such neural mass models as the one of Jansen and
Rit \cite{jansen-rit:95}: their dynamics appears as a coarse approximation of
the much richer dynamics that emerges from our analysis. Our numerical
experiments confirm that the framework we propose and the numerical methods we
derive from it provide a new and powerful tool for the exploration of neural
behaviors at different scales.
Comment: 55 pages, 4 figures, to appear in "Frontiers in Neuroscience".
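The constructive method can be caricatured as a fixed-point (Picard-style) iteration on the mean activity. The one-population toy below, with a single scalar coupling, an assumed transfer function, and a plain Monte Carlo estimator, is far simpler than the paper's multi-population setting, but it shows the structure: feed a guess for the mean-field input into the stochastic dynamics, read out the resulting mean activity, and repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-population mean-field equation (illustrative assumption, not the
# paper's general system):
#   dV_t = (-V_t + J * m(t) + I) dt + sigma dW_t,   m(t) = E[phi(V_t)]
J, I, sigma = 0.8, 1.0, 0.5
dt, T, n_paths = 0.01, 500, 2000
phi = np.tanh

def mean_activity(m):
    """Monte Carlo estimate of t -> E[phi(V_t)] given the input trajectory m."""
    V = np.zeros(n_paths)
    out = np.empty(T)
    for t in range(T):
        V += dt * (-V + J * m[t] + I) + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        out[t] = phi(V).mean()
    return out

m = np.zeros(T)          # initial guess m^0(t) = 0
for _ in range(8):       # Picard-style fixed-point iteration
    m_new = mean_activity(m)
    gap = np.max(np.abs(m_new - m))  # sup-norm change between iterates
    m = m_new
```

After a few iterations the trajectory `m` stops changing up to Monte Carlo noise, mirroring the convergence-to-the-unique-solution property the paper proves for the general case.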
Linear response for spiking neuronal networks with unbounded memory
We establish a general linear response relation for spiking neuronal
networks, based on chains with unbounded memory. This relation allows us to
predict the influence of weak-amplitude, time-dependent external stimuli on
spatio-temporal spike correlations, from the spontaneous statistics (without
stimulus) in a general context where the memory in spike dynamics can extend
arbitrarily far in the past. Using this approach, we show how linear response
is explicitly related to neuronal dynamics with an example, the gIF model,
introduced by M. Rudolph and A. Destexhe. This example illustrates the
collective effect of the stimuli, intrinsic neuronal dynamics, and network
connectivity on spike statistics. We illustrate our results with numerical
simulations.
Comment: 60 pages, 8 figures.
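The core idea, predicting the response to a weak stimulus from spontaneous statistics alone, can be illustrated in a much simpler setting than the gIF model. In the linear AR(1) toy below (an assumption for illustration, not the paper's chains-with-unbounded-memory formalism), the lag-k response to a small kick is A^k, and the spontaneous lagged covariances satisfy C_k = A^k C_0, so the response kernel is recoverable from unstimulated recordings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear network x_{t+1} = A x_t + xi_t with white Gaussian noise xi_t.
n = 4
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # stable random coupling

# Record "spontaneous" activity (no stimulus).
T = 100000
x = np.zeros(n)
X = np.empty((T, n))
for t in range(T):
    x = A @ x + rng.standard_normal(n)
    X[t] = x
X -= X.mean(axis=0)

C0 = X.T @ X / T                # equal-time covariance
C1 = X[1:].T @ X[:-1] / (T - 1) # lag-1 covariance; in theory C1 = A @ C0

# Linear response estimate: the lag-1 susceptibility from spontaneous statistics.
A_est = C1 @ np.linalg.inv(C0)
err = np.max(np.abs(A_est - A))
```

The same estimate `A_est` predicts how a weak perturbation propagates through the network, which is the fluctuation-dissipation flavor of the linear response relation, here in its simplest Markovian form.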
Herding as a Learning System with Edge-of-Chaos Dynamics
Herding defines a deterministic dynamical system at the edge of chaos. It
generates a sequence of model states and parameters by alternating parameter
perturbations with state maximizations, where the sequence of states can be
interpreted as "samples" from an associated MRF model. Herding differs from
maximum likelihood estimation in that the sequence of parameters does not
converge to a fixed point and differs from an MCMC posterior sampling approach
in that the sequence of states is generated deterministically. Herding may be
interpreted as a "perturb and map" method where the parameter perturbations are
generated using a deterministic nonlinear dynamical system rather than randomly
from a Gumbel distribution. This chapter studies the distinct statistical
characteristics of the herding algorithm and shows that the fast convergence
rate of the controlled moments may be attributed to edge of chaos dynamics. The
herding algorithm can also be generalized to models with latent variables and
to a discriminative learning setting. The perceptron cycling theorem ensures
that the fast moment matching property is preserved in the more general
framework.
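The alternation of parameter perturbation and state maximization can be sketched in a few lines. Below, herding matches the first moments of independent binary variables (feature map phi(s) = s); the dimensions and target moments are hypothetical choices for illustration. The state update maximizes the inner product with the current weights, and the weight update nudges the parameters back toward the target moments, giving the fast O(1/T) moment matching rate mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: D binary variables with target first moments mu.
D = 5
mu = rng.uniform(0.1, 0.9, D)   # target moments E[phi(s)], phi(s) = s
w = mu.copy()                   # weight (parameter) vector

T = 10000
running_sum = np.zeros(D)
for _ in range(T):
    s = (w > 0).astype(float)   # state maximization: argmax_{s in {0,1}^D} <w, s>
    w += mu - s                 # deterministic parameter perturbation
    running_sum += s

emp = running_sum / T           # empirical moments of the generated "samples"
err = np.max(np.abs(emp - mu))  # telescopes to |w_T - w_0| / T, i.e. O(1/T)
```

Because the weight vector stays bounded, the moment error telescopes to |w_T - w_0| / T, which is the O(1/T) rate, compared with the O(1/sqrt(T)) rate of independent Monte Carlo sampling.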
A "Cellular Neuronal" Approach to Optimization Problems
The Hopfield-Tank (1985) recurrent neural network architecture for the
Traveling Salesman Problem is generalized to a fully interconnected "cellular"
neural network of regular oscillators. Tours are defined by synchronization
patterns, allowing the simultaneous representation of all cyclic permutations
of a given tour. The network converges to local optima, some of which correspond
to shortest-distance tours, as can be shown analytically in a stationary phase
approximation. Simulated annealing is required for global optimization, but the
stochastic element might be replaced by chaotic intermittency in a further
generalization of the architecture to a network of chaotic oscillators.
Comment: 2nd revised version submitted to Chaos (original version submitted 6/07).
Dynamical criticality in the collective activity of a population of retinal neurons
Recent experimental results based on multi-electrode and imaging techniques
have reinvigorated the idea that large neural networks operate near a critical
point, between order and disorder. However, evidence for criticality has relied
on the definition of arbitrary order parameters, or on models that do not
address the dynamical nature of network activity. Here we introduce a novel
approach to assess criticality that overcomes these limitations, while
encompassing and generalizing previous criteria. We find a simple model to
describe the global activity of large populations of ganglion cells in the rat
retina, and show that their statistics are poised near a critical point. Taking
into account the temporal dynamics of the activity greatly enhances the
evidence for criticality, revealing it where previous methods would not. The
approach is general and could be used in other biological networks.