Simulation of networks of spiking neurons: A review of tools and strategies
We review different aspects of the simulation of spiking neural networks. We
start by reviewing the different types of simulation strategies and algorithms
that are currently implemented. We next review the precision of those
simulation strategies, in particular in cases where plasticity depends on the
exact timing of the spikes. We then give an overview of the simulators and
simulation environments presently available (restricted to those that are
freely available, open-source, and documented). For each simulation tool, its advantages and pitfalls
are reviewed, with an aim to allow the reader to identify which simulator is
appropriate for a given task. Finally, we provide a series of benchmark
simulations of different types of networks of spiking neurons, including
Hodgkin-Huxley type, integrate-and-fire models, interacting with current-based
or conductance-based synapses, using clock-driven or event-driven integration
strategies. The same set of models is implemented on the different simulators,
and the codes are made available. The ultimate goal of this review is to
provide a resource to facilitate identifying the appropriate integration
strategy and simulation tool to use for a given modeling problem related to
spiking neural networks.

Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007).
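The clock-driven strategy reviewed above can be made concrete with a minimal fixed-time-step leaky integrate-and-fire simulation; the parameter values and units below are illustrative assumptions, not taken from the paper:

```python
def simulate_lif_clock_driven(I=1.5, dt=0.1, t_max=100.0,
                              tau_m=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Clock-driven (fixed time step) forward-Euler integration of a
    leaky integrate-and-fire neuron: tau_m dv/dt = (v_rest - v) + I."""
    n_steps = int(t_max / dt)
    v = v_rest
    spike_times = []
    for step in range(n_steps):
        # Forward-Euler update of the membrane equation.
        v += dt * ((v_rest - v) + I) / tau_m
        if v >= v_th:
            # The spike time is only known up to the time step dt.
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

spikes = simulate_lif_clock_driven()
```

In an event-driven scheme, by contrast, the next threshold crossing would be computed analytically and the state advanced directly from spike to spike, which is exact for simple models but harder to generalize.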
On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses
We present a mathematical analysis of networks of Integrate-and-Fire
neurons with adaptive conductances. Taking into account the realistic fact that
the spike time is only known within some \textit{finite} precision, we propose
a model where spikes are effective at times that are multiples of a characteristic time
scale δ, where δ can be \textit{arbitrarily} small (in particular,
well beyond the numerical precision). We make a complete mathematical
characterization of the model-dynamics and obtain the following results. The
asymptotic dynamics is composed of finitely many stable periodic orbits, whose
number and period can be arbitrarily large and can diverge in a region of the
synaptic weight space, traditionally called the "edge of chaos", a notion
mathematically well defined in the present paper. Furthermore, except at the
edge of chaos, there is a one-to-one correspondence between the membrane
potential trajectories and the raster plot. This shows that the neural code is
entirely "in the spikes" in this case. As a key tool, we introduce an order
parameter, easy to compute numerically, and closely related to a natural notion
of entropy, providing a relevant characterization of the computational
capabilities of the network. This allows us to compare the computational
capabilities of leaky Integrate-and-Fire models and conductance-based
models. The present study considers networks with constant input, and without
time-dependent plasticity, but the framework has been designed for both
extensions.

Comment: 36 pages, 9 figures.
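The discretization of spike times described in this abstract, where spikes are effective only at multiples of a characteristic time scale, can be sketched as follows; the round-up convention is an assumption for illustration:

```python
import math

def quantize_spike_time(t, delta):
    """Map a continuous spike time onto the next multiple of the
    characteristic time scale delta (illustrative convention: round up,
    so a spike becomes effective at the first grid point after it)."""
    return math.ceil(t / delta) * delta

# Example: with delta = 0.5, a spike at t = 1.23 is effective at 1.5.
effective = quantize_spike_time(1.23, 0.5)  # → 1.5
```

Because delta can be taken arbitrarily small, this grid can be made finer than any numerical precision while keeping the dynamics mathematically tractable.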
Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance
The subthreshold membrane voltage of a neuron in active cortical tissue is
a fluctuating quantity with a distribution that reflects the firing statistics
of the presynaptic population. It was recently found that conductance-based
synaptic drive can lead to distributions with a significant skew.
Here it is demonstrated that the underlying shot noise caused by Poissonian
spike arrival also skews the membrane distribution, but in the opposite
sense. Using a perturbative method, we analyze the effects of shot
noise on the distribution of synaptic conductances and calculate the consequent
voltage distribution is a Gaussian modulated by a prefactor that captures
the skew. The Gaussian component is identical to distributions derived
using current-based models with an effective membrane time constant.
The well-known effective-time-constant approximation can therefore be
identified as the leading-order solution to the full conductance-based
model. The higher-order modulatory prefactor containing the skew comprises
terms due to both shot noise and conductance fluctuations. The
diffusion approximation misses these shot-noise effects, implying that
analytical approaches such as the Fokker-Planck equation, or simulation
with filtered white noise, cannot be used to improve on the Gaussian approximation.
It is further demonstrated that quantities used for fitting
theory to experiment, such as the voltage mean and variance, are robust
against these non-Gaussian effects. The effective-time-constant approximation
is therefore relevant to experiment and provides a simple analytic
base on which other pertinent biological details may be added.
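The effective-time-constant approximation discussed above amounts to folding the mean synaptic conductances into the leak; a short calculation makes this concrete. The conductance values and reversal potentials below are illustrative assumptions, not those of the paper:

```python
def gaussian_voltage_stats(C=1.0, g_L=0.05, E_L=-70.0,
                           g_e0=0.01, E_e=0.0,
                           g_i0=0.06, E_i=-80.0):
    """Leading-order (Gaussian) voltage statistics: the mean excitatory
    and inhibitory conductances add to the leak, giving an effective
    membrane time constant tau_eff and a mean voltage v_mean."""
    g_tot = g_L + g_e0 + g_i0          # total mean conductance
    tau_eff = C / g_tot                # effective membrane time constant
    v_mean = (g_L * E_L + g_e0 * E_e + g_i0 * E_i) / g_tot
    return tau_eff, v_mean

tau_eff, v_mean = gaussian_voltage_stats()
```

Note that tau_eff is shorter than the passive time constant C/g_L, which is the familiar high-conductance-state speedup; the abstract's point is that corrections beyond this Gaussian level require the shot-noise structure, not just filtered white noise.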
A Markovian event-based framework for stochastic spiking neural networks
In spiking neural networks, information is conveyed by the spike times,
which depend on the intrinsic dynamics of each neuron, the input it receives,
and the connections between neurons. In this article we study the Markovian
nature of the sequence of spike times in stochastic neural networks, and in
particular the ability to deduce from a spike train the next spike time, and
therefore to produce a description of the network activity based only on the
spike times, regardless of the membrane potential process.
To study this question in a rigorous manner, we introduce and study an
event-based description of networks of noisy integrate-and-fire neurons, i.e.
that is based on the computation of the spike times. We show that the firing
times of the neurons in the networks constitute a Markov chain, whose
transition probability is related to the probability distribution of the
interspike interval of the neurons in the network. Where the Markovian model
can be developed, the transition probability is derived explicitly for
classical neural network models such as linear integrate-and-fire neurons
with excitatory and inhibitory interactions, for different types of synapses,
possibly featuring noisy synaptic integration, transmission delays, and
absolute and relative refractory periods. This covers most of the cases that
have been investigated in the event-based description of deterministic
spiking neural networks.
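The Markov-chain view, in which the next spike time depends on the train only through the most recent firing state, can be sketched with a toy interspike-interval distribution. The exponential-plus-refractory ISI below is an illustrative assumption, not the transition probability derived in the paper:

```python
import random

def next_spike_time(t_last, rate=10.0, refractory=2.0, rng=random):
    """Event-based Markov update: draw the next spike time given only
    the last one. Here the ISI is an absolute refractory period plus an
    exponentially distributed waiting time (toy distribution)."""
    return t_last + refractory + rng.expovariate(rate)

# Generate a short spike train by iterating the transition.
rng = random.Random(42)
t, train = 0.0, []
for _ in range(5):
    t = next_spike_time(t, rng=rng)
    train.append(t)
```

The point of the abstract is that, for the listed model classes, the analogous transition kernel can be computed exactly, so the membrane potential process never needs to be simulated.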
A mean-field model for conductance-based networks of adaptive exponential integrate-and-fire neurons
Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of
neocortical processing at mesoscopic scales. Since VSDi signals report the
average membrane potential, it seems natural to use a mean-field formalism to
model such signals. Here, we investigate a mean-field model of networks of
Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based
synaptic interactions. The AdEx model can capture the spiking response of
different cell types, such as regular-spiking (RS) excitatory neurons and
fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism,
together with a semi-analytic approach to the transfer function of AdEx
neurons. We compare the predictions of this mean-field model to simulated
networks of RS-FS cells, first at the level of the spontaneous activity of the
network, which is well predicted by the mean-field model. Second, we
investigate the response of the network to time-varying external input, and
show that the mean-field model accurately predicts the response time course of
the population. One notable exception was that the "tail" of the response at
long times was not well predicted, because the mean-field does not include
adaptation mechanisms. We conclude that the Master Equation formalism can yield
mean-field models that predict well the behavior of nonlinear networks with
conductance-based interactions and various electrophysiological properties, and
should be a good candidate to model VSDi signals where both excitatory and
inhibitory neurons contribute.

Comment: 21 pages, 7 figures.
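A single AdEx neuron of the kind used in these network simulations can be integrated with a simple forward-Euler scheme; the parameter values below are illustrative assumptions, not those of the paper:

```python
import math

def simulate_adex(I=1.2, dt=0.05, t_max=200.0,
                  C=1.0, g_L=0.05, E_L=-70.0, V_T=-50.0, delta_T=2.0,
                  a=0.001, b=0.05, tau_w=100.0, V_reset=-60.0, V_peak=0.0):
    """Adaptive Exponential (AdEx) integrate-and-fire neuron:
    C dV/dt = -g_L (V - E_L) + g_L delta_T exp((V - V_T)/delta_T) - w + I
    tau_w dw/dt = a (V - E_L) - w,  with reset and spike-triggered
    adaptation w -> w + b at each spike."""
    V, w = E_L, 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        dV = (-g_L * (V - E_L)
              + g_L * delta_T * math.exp((V - V_T) / delta_T)
              - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:           # upswing detected: register a spike
            spikes.append(step * dt)
            V = V_reset
            w += b                # spike-triggered adaptation
    return spikes

spikes = simulate_adex()
```

Choosing the subthreshold parameter a and the jump b differently for RS and FS cells is what lets the same equations cover both excitatory and inhibitory populations in the mean-field treatment.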
Adaptive self-organization in a realistic neural network model
Information processing in complex systems is often found to be maximally
efficient close to critical states associated with phase transitions. It is
therefore conceivable that also neural information processing operates close to
criticality. This is further supported by the observation of power-law
distributions, which are a hallmark of phase transitions. An important open
question is how neural networks could remain close to a critical point while
undergoing a continual change in the course of development, adaptation,
learning, and more. An influential contribution was made by Bornholdt and
Rohlf, introducing a generic mechanism of robust self-organized criticality in
adaptive networks. Here, we address the question of whether this mechanism is
relevant for real neural networks. We show in a realistic model that
spike-time-dependent synaptic plasticity can self-organize neural networks
robustly toward criticality. Our model reproduces several empirical
observations and makes testable predictions on the distribution of synaptic
strength, relating them to the critical state of the network. These results
suggest that the interplay between dynamics and topology may be essential for
neural information processing.

Comment: 6 pages, 4 figures.
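The spike-time-dependent plasticity invoked in this abstract is, in its simplest pair-based form, a pair of exponential learning windows; the amplitudes and time constants below are illustrative assumptions, not the rule used in the paper's model:

```python
import math

def stdp_weight_change(delta_t, A_plus=0.01, A_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window: potentiation when the presynaptic spike
    precedes the postsynaptic one (delta_t = t_post - t_pre > 0),
    depression otherwise, both decaying exponentially with |delta_t|."""
    if delta_t > 0:
        return A_plus * math.exp(-delta_t / tau_plus)
    return -A_minus * math.exp(delta_t / tau_minus)
```

Rules of this form, applied across a recurrent network, are the kind of local plasticity that the paper shows can drive the synaptic weight distribution toward a critical state.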
Statistics of spike trains in conductance-based neural networks: Rigorous results
We consider a conductance-based neural network inspired by the generalized
Integrate-and-Fire model introduced by Rudolph and Destexhe. We prove the
existence and uniqueness of a Gibbs distribution characterizing the spike
train statistics. The corresponding Gibbs potential is explicitly computed.
These results hold in presence of a time-dependent stimulus and apply therefore
to non-stationary dynamics.

Comment: 42 pages, 1 figure; to appear in Journal of Mathematical Neuroscience.