2,871 research outputs found
Deterministic networks for probabilistic computing
Neural-network models of high-level brain functions such as memory recall and
reasoning often rely on the presence of stochasticity. The majority of these
models assume that each neuron in the functional network is equipped with its
own private source of randomness, often in the form of uncorrelated external
noise. However, both in vivo and in silico, the number of noise sources is
limited due to space and bandwidth constraints. Hence, neurons in large
networks usually need to share noise sources. Here, we show that the resulting
shared-noise correlations can significantly impair the performance of
stochastic network models. We demonstrate that this problem can be overcome by
using deterministic recurrent neural networks as sources of uncorrelated noise,
exploiting the decorrelating effect of inhibitory feedback. Consequently, even
a single recurrent network of a few hundred neurons can serve as a natural
noise source for large ensembles of functional networks, each comprising
thousands of units. We successfully apply the proposed framework to a diverse
set of binary-unit networks with different dimensionalities and entropies, as
well as to a network reproducing handwritten digits with distinct predefined
frequencies. Finally, we show that the same design transfers to functional
networks of spiking neurons.
Comment: 22 pages, 11 figures
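The core claim, that inhibitory feedback decorrelates units driven by a shared noise source, can be probed with a minimal rate-network sketch. All parameters below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # units in the noise-generating network
T = 5000         # simulation steps
tau = 10.0       # leak time constant, in steps

# Every unit receives the SAME external noise trace, mimicking a
# single shared noise source, plus an independent private component.
shared = rng.standard_normal(T)

# Purely inhibitory random recurrent coupling (illustrative scaling)
W = -rng.exponential(scale=0.5 / np.sqrt(N), size=(N, N))
np.fill_diagonal(W, 0.0)

def simulate(coupled):
    """Leaky rate units driven by shared + private noise."""
    x = np.zeros(N)
    traces = np.empty((T, N))
    for t in range(T):
        rec = W @ np.tanh(x) if coupled else 0.0
        x = x + (-x + rec + shared[t] + rng.standard_normal(N)) / tau
        traces[t] = x
    return traces

def mean_abs_corr(traces):
    """Mean absolute pairwise correlation across units."""
    c = np.corrcoef(traces.T)
    return float(np.abs(c[~np.eye(N, dtype=bool)]).mean())

corr_open = mean_abs_corr(simulate(coupled=False))  # no feedback
corr_fb = mean_abs_corr(simulate(coupled=True))     # inhibitory feedback
```

In this toy setting the recurrent inhibition actively cancels the common-mode fluctuation, so `corr_fb` comes out well below `corr_open`, illustrating why such a network can serve as a source of approximately uncorrelated noise.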
Intrinsically-generated fluctuating activity in excitatory-inhibitory networks
Recurrent networks of non-linear units display a variety of dynamical regimes
depending on the structure of their synaptic connectivity. A particularly
remarkable phenomenon is the appearance of strongly fluctuating, chaotic
activity in networks of deterministic, but randomly connected rate units. How
this type of intrinsically generated fluctuations appears in more realistic
networks of spiking neurons has been a long-standing question. To ease the
comparison between rate and spiking networks, recent works investigated the
dynamical regimes of randomly connected rate networks with segregated
excitatory and inhibitory populations, and firing rates constrained to be
positive. These works derived general dynamical mean field (DMF) equations
describing the fluctuating dynamics, but solved these equations only in the
case of purely inhibitory networks. Using a simplified excitatory-inhibitory
architecture in which DMF equations are more easily tractable, here we show
that the presence of excitation qualitatively modifies the fluctuating activity
compared to purely inhibitory networks. In the presence of excitation,
intrinsically generated fluctuations induce a strong increase in mean firing
rates, a phenomenon that is much weaker in purely inhibitory networks.
Excitation moreover induces two different fluctuating regimes: for moderate
overall coupling, recurrent inhibition is sufficient to stabilize fluctuations;
for strong coupling, firing rates are stabilized solely by the upper bound
imposed on activity, even if inhibition is stronger than excitation. These
results extend to more general network architectures, and to rate networks
receiving noisy inputs mimicking spiking activity. Finally, we show that
signatures of the second dynamical regime appear in networks of
integrate-and-fire neurons.
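For background, the classical transition to intrinsically generated chaos in a deterministic random rate network (the setup this abstract builds on, not the excitatory-inhibitory variant it analyzes) can be reproduced in a few lines. The gain values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 300
J = rng.standard_normal((N, N)) / np.sqrt(N)   # fixed random connectivity

def simulate(g, T=3000, dt=0.1):
    """Euler-integrate x' = -x + g J tanh(x); return one unit's trace."""
    x = rng.standard_normal(N)
    trace = np.empty(T)
    for t in range(T):
        x = x + dt * (-x + g * (J @ np.tanh(x)))
        trace[t] = x[0]
    return trace

quiet = simulate(g=0.5)    # below the critical gain g = 1
chaotic = simulate(g=1.5)  # above it
```

Below the critical gain the deterministic network decays to the zero fixed point; above it the very same network sustains irregular fluctuations indefinitely, with no noise source anywhere in the dynamics.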
Modeling networks of spiking neurons as interacting processes with memory of variable length
We consider a new class of non-Markovian processes with a countable number of
interacting components, both in discrete and continuous time. Each component is
represented by a point process indicating if it has a spike or not at a given
time. The system evolves as follows. For each component, the rate (in
continuous time) or the probability (in discrete time) of having a spike
depends on the entire time evolution of the system since the last spike time of
the component. In discrete time, this class of systems extends in a non-trivial
way both Spitzer's interacting particle systems, which are Markovian, and
Rissanen's stochastic chains with memory of variable length, which have a
finite state space. In continuous time, they can be seen as a kind of Rissanen's
variable length memory version of the class of self-exciting point processes
which are also called "Hawkes processes", but here with infinitely many
components. These features make this class a good candidate to describe the
time evolution of networks of spiking neurons. In this article we present a
critical reader's guide to recent papers dealing with this class of models,
both in discrete and in continuous time. We briefly sketch results concerning
perfect simulation and existence issues, decorrelation between successive
interspike intervals, the long-time behavior of finite non-excited systems, and
propagation of chaos in mean-field systems.
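A discrete-time toy version of such a system, in which each neuron's spike probability depends on presynaptic spikes accumulated only since its own last spike, can be sketched as follows. The weights and the rate function are illustrative, not taken from the papers under review:

```python
import numpy as np

rng = np.random.default_rng(2)

N, T = 20, 2000
W = rng.uniform(0.0, 0.5, size=(N, N)) / N   # illustrative excitatory weights
np.fill_diagonal(W, 0.0)

def spike_prob(u):
    """Monotone map from accumulated input to spike probability."""
    return 1.0 - np.exp(-(0.05 + u))          # small baseline rate

X = np.zeros((T, N), dtype=int)               # spike trains
last = np.zeros(N, dtype=int)                 # each neuron's last spike time

for t in range(1, T):
    u = np.empty(N)
    for i in range(N):
        # Variable-length memory: neuron i integrates presynaptic spikes
        # only over the window since ITS OWN last spike, then forgets.
        u[i] = (X[last[i]:t] @ W[i]).sum()
    spikes = rng.random(N) < spike_prob(u)
    X[t] = spikes
    last[spikes] = t                          # memory is reset at a spike

rate = X.mean()
```

The variable-length structure is visible in the update: the relevant history window `X[last[i]:t]` is different for every neuron and collapses to length zero whenever that neuron spikes, which is exactly the memory-of-variable-length mechanism the abstract describes.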
Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and an increase in reliability through use of the same
simulation code and the same network model specifications for both model
classes. While most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation.
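The key implementation idea, buffering time-continuous rate values so they can be delivered with a synaptic delay just as spike events are, reduces to a ring buffer. A minimal sketch, with an illustrative coupling matrix and delay rather than anything from the reference implementation:

```python
import numpy as np

# Euler integration of delayed interactions between rate units, using a
# ring buffer the way event-based simulators buffer delayed communication.
N = 3
dt = 0.1
delay_steps = 15                       # synaptic delay of 1.5 time units
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, 0.0, 0.0]])       # illustrative ring coupling

T = 2000
buf = np.zeros((delay_steps, N))       # circular history of past rates
x = np.array([1.0, 0.0, 0.0])
trace = np.empty((T, N))
for t in range(T):
    delayed = buf[t % delay_steps].copy()  # value written delay_steps ago
    x = x + dt * (-x + W @ np.tanh(delayed))
    buf[t % delay_steps] = x               # overwrite slot with current rate
    trace[t] = x
```

The slot read at step t was written at step t - delay_steps, so each unit sees its inputs' rates from exactly one delay earlier. Keeping this buffering separate from the rate dynamics is what lets rate-based and spiking models share a single connection and communication infrastructure.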
Noise-induced behaviors in neural mean field dynamics
The collective behavior of cortical neurons is strongly affected by the
presence of noise at the level of individual cells. In order to study these
phenomena in large-scale assemblies of neurons, we consider networks of
firing-rate neurons with linear intrinsic dynamics and nonlinear coupling,
belonging to a few types of cell populations and receiving noisy currents.
Asymptotic equations as the number of neurons tends to infinity (mean field
equations) are rigorously derived based on a probabilistic approach. These
equations are implicit in the probability distribution of the solutions, which
generally makes their direct analysis difficult. However, in our case, the
solutions are Gaussian, and their moments satisfy a closed system of nonlinear
ordinary differential equations (ODEs), which are much easier to study than the
original stochastic network equations, and the statistics of the empirical
process uniformly converge towards the solutions of these ODEs. Based on this
description, we analytically and numerically study the influence of noise on
the collective behaviors, and compare these asymptotic regimes to simulations
of the network. We observe that the mean field equations provide an accurate
description of the solutions of the network equations for network sizes as
small as a few hundred neurons. In particular, we observe that the level of
noise in the system qualitatively modifies its collective behavior, producing
for instance synchronized oscillations of the whole network, desynchronization
of oscillating regimes, and stabilization or destabilization of stationary
solutions. These results shed new light on the role of noise in shaping the
collective dynamics of neurons, and give us clues for understanding similar
phenomena observed in biological networks.
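The moment-closure idea, in which Gaussian solutions collapse the implicit mean-field equations into ODEs for the mean and variance, can be illustrated with a single population whose units have linear intrinsic dynamics and couple through the empirical mean of a sigmoid. This is a hypothetical stand-in with the same structure, not the paper's model:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

J, sigma, dt, steps = 1.5, 0.5, 0.02, 1000   # 20 time units

# Moment ODEs: for dx_i = (-x_i + J * mean_j Phi(x_j)) dt + sigma dW_i,
# the mean-field state stays Gaussian N(m, v), with
#   m' = -m + J * E[Phi(X)] = -m + J * Phi(m / sqrt(1 + v)),
#   v' = -2 v + sigma**2,
# using E[Phi(X)] = Phi(m / sqrt(1 + v)) for X ~ N(m, v).
m, v = 0.0, 0.0
for _ in range(steps):
    m += dt * (-m + J * Phi(m / sqrt(1.0 + v)))
    v += dt * (-2.0 * v + sigma ** 2)

# Finite-network Monte Carlo check against the closed moment equations
rng = np.random.default_rng(3)
N = 2000
nerf = np.vectorize(erf)
x = np.zeros(N)
for _ in range(steps):
    drive = J * np.mean(0.5 * (1.0 + nerf(x / sqrt(2.0))))
    x += dt * (-x + drive) + sigma * sqrt(dt) * rng.standard_normal(N)
```

The empirical mean and variance of the N-unit network track (m, v) already at moderate N, mirroring the paper's observation that the mean-field description is accurate for networks of a few hundred neurons.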
Metastability in a stochastic neural network modeled as a velocity jump Markov process
One of the major challenges in neuroscience is to determine how noise that is
present at the molecular and cellular levels affects dynamics and information
processing at the macroscopic level of synaptically coupled neuronal
populations. Often, noise is incorporated into deterministic network models using
extrinsic noise sources. An alternative approach is to assume that noise arises
intrinsically as a collective population effect, which has led to a master
equation formulation of stochastic neural networks. In this paper we extend the
master equation formulation by introducing a stochastic model of neural
population dynamics in the form of a velocity jump Markov process. The latter
has the advantage of keeping track of synaptic processing as well as spiking
activity, and reduces to the neural master equation in a particular limit. The
population synaptic variables evolve according to piecewise deterministic
dynamics, which depend on population spiking activity. The latter is
characterised by a set of discrete stochastic variables evolving according to a
jump Markov process, with transition rates that depend on the synaptic
variables. We consider the particular problem of rare transitions between
metastable states of a network operating in a bistable regime in the
deterministic limit. Assuming that the synaptic dynamics is much slower than
the transitions between discrete spiking states, we use a WKB approximation and
singular perturbation theory to determine the mean first passage time to cross
the separatrix between the two metastable states. Such an analysis can also be
applied to other velocity jump Markov processes, including stochastic
voltage-gated ion channels and stochastic gene networks.
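The hybrid structure described here, continuous synaptic drift punctuated by discrete jumps whose rates depend on the continuous state, can be sketched as a minimal one-population piecewise-deterministic process. The drift and rate functions below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

a = 2.0                                   # synaptic increment scale

def rate_up(u):                           # 0 -> 1: spiking turns on
    return 1.0 + 2.0 * u

def rate_down(u):                         # 1 -> 0: spiking turns off
    return 2.0

def simulate(t_end=200.0, dt=1e-3):
    """Velocity-jump sketch: u drifts as u' = -u + a*n between jumps of n."""
    u, n = 0.0, 0
    out = np.empty(int(t_end / dt))
    for k in range(out.size):
        u += dt * (-u + a * n)            # piecewise-deterministic drift
        r = rate_up(u) if n == 0 else rate_down(u)
        if rng.random() < r * dt:         # thinning approximation of the jump
            n = 1 - n
        out[k] = u
    return out

traj = simulate()
```

Between jumps, u follows the deterministic synaptic flow while the discrete state n plays the role of the population spiking variable; exact jump times could instead be drawn Gillespie-style by integrating the rate along the flow. Metastability appears in the multi-population, bistable version of this construction, which the WKB analysis in the paper addresses.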