A comparative study of different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation
Stochastic integrate-and-fire (IF) neuron models have found widespread
applications in computational neuroscience. Here we present results on the
white-noise-driven perfect, leaky, and quadratic IF models, focusing on the
spectral statistics (power spectra, cross spectra, and coherence functions) in
different dynamical regimes (noise-induced and tonic firing regimes with low or
moderate noise). We make the models comparable by tuning parameters such that
the mean value and the coefficient of variation of the interspike interval
match for all of them. We find that, under these conditions, the power spectrum
under white-noise stimulation is often very similar while the response
characteristics, described by the cross spectrum between a fraction of the
input noise and the output spike train, can differ drastically. We also
investigate how the spike trains of two neurons of the same kind (e.g. two
leaky IF neurons) correlate if they share a common noise input. We show that,
depending on the dynamical regime, either two quadratic IF models or two leaky
IFs are more strongly correlated. Our results suggest that, when choosing among
simple IF models for network simulations, the details of the model have a
strong effect on correlation and regularity of the output.
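As a concrete illustration of the setting above, the following minimal sketch (not the authors' code; all parameter values are assumptions) simulates the three white-noise-driven models with the Euler-Maruyama scheme and reports the ISI mean and CV used for matching:

```python
# Minimal sketch: Euler-Maruyama simulation of the perfect (PIF), leaky (LIF),
# and quadratic (QIF) integrate-and-fire models driven by white noise,
# reporting the mean and CV of the interspike interval (ISI).
import numpy as np

def simulate_isi(model, mu=1.2, D=0.1, dt=1e-4, t_max=50.0,
                 v_reset=0.0, v_thresh=1.0, seed=0):
    """Interspike intervals of a 1-D IF model dv = f(v) dt + sqrt(2D) dW."""
    rng = np.random.default_rng(seed)
    drift = {
        "PIF": lambda v: mu,          # perfect integrator
        "LIF": lambda v: mu - v,      # leaky integrator
        "QIF": lambda v: mu + v * v,  # quadratic integrator
    }[model]
    v, t, last_spike, isis = v_reset, 0.0, 0.0, []
    noise_scale = np.sqrt(2.0 * D * dt)
    while t < t_max:
        v += drift(v) * dt + noise_scale * rng.standard_normal()
        t += dt
        if v >= v_thresh:             # spike: record the ISI and reset
            isis.append(t - last_spike)
            last_spike, v = t, v_reset
    return np.asarray(isis)

for model in ("PIF", "LIF", "QIF"):
    isi = simulate_isi(model)
    print(f"{model}: mean ISI = {isi.mean():.3f}, CV = {isi.std() / isi.mean():.3f}")
```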
Are the input parameters of white-noise-driven integrate-and-fire neurons uniquely determined by rate and CV?
Integrate-and-fire (IF) neurons have found widespread applications in
computational neuroscience. Particularly important are stochastic versions of
these models where the driving consists of a synaptic input modeled as white
Gaussian noise with mean $\mu$ and noise intensity $D$. Different IF models
have been proposed, the firing statistics of which depend nontrivially on the
input parameters $\mu$ and $D$. In order to compare these models with one
another, one must first specify the correspondence between their parameters. This
can be done by determining which set of parameters ($\mu$, $D$) of each model
is associated with a given set of basic firing statistics, for instance the
firing rate and the coefficient of variation (CV) of the interspike interval
(ISI). However, it is not clear {\em a priori} whether for a given firing rate
and CV there is only one unique choice of input parameters for each model. Here
we review the dependence of rate and CV on input parameters for the perfect,
leaky, and quadratic IF neuron models and show analytically that indeed in
these three models the firing rate and the CV uniquely determine the input
parameters.
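For the perfect IF model this uniqueness can be made explicit, since its ISI is inverse Gaussian with mean $L/\mu$ and ${\rm CV}^2 = 2D/(\mu L)$ for threshold distance $L$; a short sketch inverting a target (rate, CV) into ($\mu$, $D$) in closed form (the value $L = 1$ is an assumption):

```python
# Minimal sketch: closed-form inversion of (rate, CV) into (mu, D) for the
# perfect IF neuron, whose ISI is inverse Gaussian.
import numpy as np

def pif_params_from_rate_cv(rate, cv, L=1.0):
    """Unique (mu, D) of a PIF reproducing a given firing rate and ISI CV."""
    mu = rate * L               # mean ISI is L / mu = 1 / rate
    D = 0.5 * cv**2 * mu * L    # from CV^2 = 2 D / (mu * L)
    return mu, D

mu, D = pif_params_from_rate_cv(rate=10.0, cv=0.5)
print(f"mu = {mu:.3f}, D = {D:.3f}")
# consistency check against the forward formulas
print("rate =", mu / 1.0, " CV =", np.sqrt(2 * D / (mu * 1.0)))
```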
Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality
The mutual information between stimulus and spike-train response is commonly
used to monitor neural coding efficiency, but neuronal computation broadly
conceived requires more refined and targeted information measures of
input-output joint processes. A first step towards that larger goal is to
develop information measures for individual output processes, including
information generation (entropy rate), stored information (statistical
complexity), predictable information (excess entropy), and active information
accumulation (bound information rate). We calculate these for spike trains
generated by a variety of noise-driven integrate-and-fire neurons as a function
of time resolution and for alternating renewal processes. We show that their
time-resolution dependence reveals coarse-grained structural properties of
interspike interval statistics; e.g., $\tau$-entropy rates that diverge less
quickly than the firing rate indicate interspike interval correlations. We also
find evidence that the excess entropy and regularized statistical complexity of
different types of integrate-and-fire neurons are universal in the
continuous-time limit in the sense that they do not depend on mechanism
details. This suggests a surprising simplicity in the spike trains generated by
these model neurons. Interestingly, neurons with gamma-distributed ISIs and
neurons whose spike trains are alternating renewal processes do not fall into
the same universality class. These results lead to two conclusions. First, the
dependence of information measures on time resolution reveals mechanistic
details about spike train generation. Second, information measures can be used
as model selection tools for analyzing spike train processes.
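As a rough illustration of the resolution dependence, the following sketch (a naive plug-in estimator, not the authors' method; the gamma-renewal spike train and all parameters are assumptions) estimates the entropy rate of a binarized spike train at several time resolutions:

```python
# Minimal sketch: plug-in entropy-rate estimate of a binned spike train as a
# function of the time resolution tau, for a gamma-renewal process.
import numpy as np
from collections import Counter

def block_entropy(bits, k):
    """Shannon entropy (bits) of overlapping length-k words."""
    words = [tuple(bits[i:i + k]) for i in range(len(bits) - k + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
# gamma-distributed ISIs with mean 1, i.e. unit firing rate
spike_times = np.cumsum(rng.gamma(shape=4.0, scale=0.25, size=20000))
for tau in (0.05, 0.1, 0.2, 0.5):
    edges = np.arange(0.0, spike_times[-1], tau)
    bits = (np.histogram(spike_times, bins=edges)[0] > 0).astype(int)
    k = 8
    h = block_entropy(bits, k) - block_entropy(bits, k - 1)  # conditional entropy
    print(f"tau = {tau:.2f}: ~{h:.3f} bits/bin, ~{h / tau:.3f} bits per unit time")
```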
One-Dimensional Population Density Approaches to Recurrently Coupled Networks of Neurons with Noise
Mean-field systems have been previously derived for networks of coupled,
two-dimensional integrate-and-fire neurons such as the Izhikevich, adaptive
exponential (AdEx), and quartic integrate-and-fire (QIF) models, among others.
Unfortunately, the mean-field systems have a degree of frequency error and the
networks analyzed often do not include noise when there is adaptation. Here, we
derive a one-dimensional partial differential equation (PDE) approximation for
the marginal voltage density under a first-order moment closure for coupled
networks of integrate-and-fire neurons with white noise inputs. The PDE has
substantially less frequency error than the mean-field system, and provides a
great deal more information, at the cost of analytical tractability. The
convergence properties of the mean-field system in the low noise limit are
elucidated. A novel method for the analysis of the stability of the
asynchronous tonic firing solution is also presented and implemented. Unlike
previous attempts at stability analysis with these network types, information
about the marginal densities of the adaptation variables is used. This method
can in principle be applied to other systems with nonlinear partial
differential equations.
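For concreteness, here is a crude, illustrative finite-difference sketch of a one-dimensional voltage-density equation of the kind discussed, for an uncoupled white-noise-driven LIF with the threshold flux re-injected at the reset potential; it is not the paper's scheme, and the grid, parameters, and boundary handling are assumptions:

```python
# Minimal sketch: explicit finite-difference integration of the voltage-density
# equation  dp/dt = -d/dv[(mu - v) p] + D d^2p/dv^2  with an absorbing
# threshold whose outgoing mass is re-injected at the reset potential.
import numpy as np

mu, D = 1.5, 0.2                        # drift toward mu, noise intensity
v = np.linspace(-1.0, 1.0, 401)         # grid: threshold v = 1, reset v = 0
dv = v[1] - v[0]
dt = 0.2 * dv**2 / D                    # conservative explicit time step
p = np.exp(-(v - 0.2) ** 2 / 0.02)      # arbitrary initial density
p /= p.sum() * dv
i_reset = int(np.argmin(np.abs(v)))     # grid index of the reset potential

for _ in range(100_000):
    flux_div = np.gradient((mu - v) * p, dv)         # d/dv[(mu - v) p]
    diffusion = np.gradient(np.gradient(p, dv), dv)  # d^2 p / dv^2
    q = p + dt * (-flux_div + D * diffusion)
    out = max(q[-1] * dv, 0.0)          # probability mass crossing threshold
    q[-1] = 0.0                         # absorbing boundary at threshold
    q[i_reset] += out / dv              # re-inject the mass at the reset
    p = np.clip(q, 0.0, None)
    p /= p.sum() * dv                   # keep the density normalized

print("firing-rate proxy (threshold flux per unit time):", out / dt)
```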
Inferring Synaptic Structure in the Presence of Neural Interaction Time Scales
Biological networks display a variety of activity patterns reflecting a web
of interactions that is complex both in space and time. Yet inference methods
have mainly focused on reconstructing, from the network's activity, the spatial
structure, by assuming equilibrium conditions or, more recently, a
probabilistic dynamics with a single arbitrary time-step. Here we show that,
under this latter assumption, the inference procedure fails to reconstruct the
synaptic matrix of a network of integrate-and-fire neurons when the chosen time
scale of interaction does not closely match the synaptic delay or when no
single time scale for the interaction can be identified; such failure,
moreover, exposes a distinctive bias of the inference method, which can cause
excitatory synapses with interaction time scales longer than the model's
time-step to be inferred as inhibitory. We therefore introduce a new two-step
method, which first infers the delay structure of the network through
cross-correlation profiles and then reconstructs the synaptic matrix, and we
successfully test it on networks with different topologies and in different
activity regimes. Although step one accurately recovers the delay structure of
the network, thus
getting rid of any \textit{a priori} guess about the time scales of the
interaction, the inference method introduces nonetheless an arbitrary time
scale, the time-bin $\Delta t$ used to binarize the spike trains. We therefore
analytically and numerically study how the choice of $\Delta t$ affects the
inference in our network model, finding that the relationship between the
inferred couplings and the real synaptic efficacies, albeit quadratic in both
cases, depends critically on $\Delta t$ for the excitatory synapses only, whilst
being basically independent of it for the inhibitory ones.
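Step one of the method can be illustrated with a toy version (synthetic Poisson trains, an assumed 60% transmission probability, and all other parameters are illustrative): the peak of the cross-correlation profile between pre- and postsynaptic spike trains recovers the delay:

```python
# Minimal sketch: recovering an interaction delay from the cross-correlation
# profile of two binned spike trains.
import numpy as np

rng = np.random.default_rng(2)
T, dt, delay = 200.0, 0.001, 0.015           # true synaptic delay: 15 ms
n = int(T / dt)
pre = rng.random(n) < 5.0 * dt               # 5 Hz Poisson presynaptic train
post = rng.random(n) < 2.0 * dt              # 2 Hz postsynaptic background
shift = int(delay / dt)
# each presynaptic spike evokes a postsynaptic spike with probability 0.6
post[shift:] |= (pre[:-shift] & (rng.random(n - shift) < 0.6))

max_lag = int(0.05 / dt)                     # scan lags up to 50 ms
lags = np.arange(1, max_lag)
cc = np.array([np.sum(pre[:-k] & post[k:]) for k in lags])
print("inferred delay: %.3f s" % (lags[np.argmax(cc)] * dt))
```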
A Markovian event-based framework for stochastic spiking neural networks
In spiking neural networks, information is conveyed by the spike times,
which depend on the intrinsic dynamics of each neuron, on the input it receives,
and on the connections between neurons. In this article we study the Markovian
nature of the sequence of spike times in stochastic neural networks and, in
particular, the ability to deduce the next spike time from past spikes, and
therefore to describe the network activity solely in terms of spike times,
regardless of the membrane potential process.
To study this question rigorously, we introduce and study an event-based
description of networks of noisy integrate-and-fire neurons, i.e., one based
on the computation of the spike times. We show that the firing times of the
neurons in the network constitute a Markov chain whose transition probability
is related to the probability distribution of the interspike interval of the
neurons in the network. Where the Markovian model can be developed, the
transition probability is explicitly derived for such classical cases as
linear integrate-and-fire neuron models with excitatory and inhibitory
interactions and different types of synapses, possibly featuring noisy
synaptic integration, transmission delays, and absolute and relative
refractory periods. This covers most of the cases that have been investigated
in the event-based description of deterministic spiking neural networks.
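The event-based idea can be sketched for the simplest case: if the next spike time of each neuron is drawn from an ISI distribution conditioned only on the current event, the sequence of firing times is a Markov chain and no membrane potential needs to be tracked. The inverse-Gaussian ISI below (exact for a perfect IF with drift $\mu$ and noise intensity $D$) and the uncoupled three-neuron setting are illustrative assumptions:

```python
# Minimal sketch of an event-based simulation: the network state is the set
# of pending spike times, advanced event by event as a Markov chain.
import numpy as np

rng = np.random.default_rng(3)

def draw_isi(mu, D, L=1.0):
    """Inverse-Gaussian first-passage time of a drifted Brownian motion."""
    return rng.wald(mean=L / mu, scale=L**2 / (2.0 * D))

n_neurons, mu, D = 3, 1.0, 0.2
next_spike = np.array([draw_isi(mu, D) for _ in range(n_neurons)])
for _ in range(10):
    i = int(np.argmin(next_spike))        # next network event
    t = next_spike[i]
    print(f"t = {t:.3f}: neuron {i} fires")
    # Markov step: the new spike time depends only on the current event
    next_spike[i] = t + draw_isi(mu, D)
```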
Intrinsic gain modulation and adaptive neural coding
In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. Assuming that the underlying
system is fixed, we derive relationships between the change of the gain with
respect to both mean and variance and the receptive fields obtained from
reverse correlation on a white noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity.
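The background-noise effect on the f-I curve can be reproduced in a few lines; the sketch below (an LIF rather than the conductance-based models of the paper; all parameters are assumptions) estimates the slope of the f-I curve for several noise levels:

```python
# Minimal sketch: gain (slope) of an LIF f-I curve under different levels of
# background noise, estimated from two nearby input currents.
import numpy as np

def firing_rate(I, sigma, dt=1e-4, t_max=20.0, seed=0):
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += (I - v) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= 1.0:                      # threshold crossing: spike and reset
            v, spikes = 0.0, spikes + 1
    return spikes / t_max

for sigma in (0.2, 0.5, 1.0):
    r = [firing_rate(I, sigma) for I in (0.9, 1.1)]
    print(f"sigma = {sigma}: gain ~ {(r[1] - r[0]) / 0.2:.1f} spikes/s per unit current")
```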
Analysis of Nonlinear Noisy Integrate & Fire Neuron Models: blow-up and steady states
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for networks of neurons
can be written as Fokker-Planck-Kolmogorov equations on the probability density
of neurons, the main parameters in the model being the connectivity of the
network and the noise. We analyse several aspects of the NNLIF model: the
number of steady states, a priori estimates, blow-up issues and convergence
toward equilibrium in the linear case. In particular, for excitatory networks,
blow-up always occurs for initial data concentrated close to the firing
potential. These results show how critically the behavior of the network
depends on the balance between the noise and the excitatory/inhibitory
interactions set by the connectivity parameter.
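In the linear case the stationary state referred to above can be written down and evaluated directly: for a density equation $\partial_t p = \partial_v[(v-b)p] + a\,\partial_{vv} p$ with reset at $V_R$ and firing potential $V_F$, integrating the constant-flux condition gives $p_\infty(v) \propto e^{-(v-b)^2/2a}\int_{\max(v,V_R)}^{V_F} e^{(w-b)^2/2a}\,dw$. A quadrature sketch (the constants are illustrative assumptions):

```python
# Minimal sketch: stationary density of the linear noisy leaky IF equation,
# evaluated by simple trapezoidal quadrature.
import numpy as np

a, b, V_R, V_F = 1.0, 0.5, 0.0, 2.0    # noise, drift, reset and firing potentials
v = np.linspace(-4.0, V_F, 2000)

def inner_integral(x):
    w = np.linspace(max(x, V_R), V_F, 400)
    return np.trapz(np.exp((w - b) ** 2 / (2 * a)), w)

p = np.exp(-(v - b) ** 2 / (2 * a)) * np.array([inner_integral(x) for x in v])
p /= np.trapz(p, v)                     # normalize to a probability density
print("mass below reset:", np.trapz(p[v <= V_R], v[v <= V_R]))
```

Note that the profile vanishes at $V_F$ by construction, consistent with the absorbing boundary at the firing potential.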
On the dynamics of random neuronal networks
We study the mean-field limit and stationary distributions of a pulse-coupled
network modeling the dynamics of large neuronal assemblies. Our model explicitly
takes into account the intrinsic randomness of firing times, in contrast with
the classical integrate-and-fire model. The ergodicity properties of the
Markov process associated with finite networks are investigated. We derive the
limit in distribution of the sample path of a neuron's state as the network
size grows large. The invariant distributions of this limiting stochastic
process are analyzed, as well as their stability properties. We show that the
system undergoes transitions as a function of the averaged connectivity
parameter, and can support trivial states (where the network activity dies
out, which in some cases is also the unique stationary state of finite
networks) and self-sustained activity when the connectivity level is
sufficiently large, both of which can be stable.
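A toy version of this model class (firing intensity $\beta(x) = x$, kick size $J/N$, and all parameters are illustrative assumptions, not the paper's) shows how the level of network activity depends on the connectivity parameter $J$:

```python
# Minimal sketch: neurons fire at random times with a state-dependent
# intensity (intrinsic randomness of firing times) rather than at threshold
# crossings, and each spike kicks every other neuron.
import numpy as np

def run_network(J, N=500, dt=1e-3, t_max=30.0, seed=5):
    rng = np.random.default_rng(seed)
    x = rng.random(N) + 0.5                # initial states
    total_spikes = 0
    for _ in range(int(t_max / dt)):
        fired = rng.random(N) < np.maximum(x, 0.0) * dt  # random firing times
        x += -x * dt                       # leak between spikes
        x[fired] = 0.0                     # reset after a spike
        x += J * fired.sum() / N           # pulse coupling to the whole network
        total_spikes += fired.sum()
    return total_spikes / (N * t_max)      # mean rate per neuron

for J in (0.5, 2.0):
    print(f"J = {J}: mean rate = {run_network(J):.3f} spikes per unit time")
```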