Spike train statistics and Gibbs distributions
This paper is based on a lecture given at the LACONEU summer school,
Valparaiso, January 2012. We introduce Gibbs distributions in a general setting,
including non-stationary dynamics, and then present three examples of such
Gibbs distributions in the context of neural network spike train statistics:
(i) the maximum entropy model with spatio-temporal constraints; (ii) Generalized
Linear Models; (iii) a conductance-based Integrate-and-Fire model with chemical
synapses and gap junctions.
Comment: 23 pages, submitted
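As a concrete illustration of case (i), the sketch below builds an Ising-type maximum-entropy Gibbs distribution over binary spike words for a handful of neurons. The biases h and couplings J are random placeholders rather than parameters fitted to data, and the spatio-temporal (time-lagged) constraints discussed in the paper are omitted for brevity.

```python
# Sketch of a maximum-entropy (Ising-type) Gibbs distribution over spike words.
# h and J are illustrative random placeholders, not parameters fitted to data.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 4                                   # neurons; small enough to enumerate 2^N words
h = rng.normal(0.0, 1.0, size=N)        # single-neuron terms (firing-rate constraints)
J = np.triu(rng.normal(0.0, 0.5, size=(N, N)), k=1)  # pairwise couplings, i < j

def potential(s):
    """Energy H(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j of a spike word s."""
    return h @ s + s @ J @ s

words = [np.array(w) for w in itertools.product([0, 1], repeat=N)]
weights = np.array([np.exp(potential(s)) for s in words])
Z = weights.sum()                       # partition function, tractable only for small N
probs = weights / Z                     # Gibbs probability of each spike word

assert np.isclose(probs.sum(), 1.0)     # sanity check: a proper distribution
print("most probable spike word:", words[int(np.argmax(probs))])
```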
Estimation in discretely observed diffusions killed at a threshold
Parameter estimation in diffusion processes from discrete observations up to
a first-hitting time is clearly of practical relevance, but does not seem to
have been studied so far. In neuroscience, many models for the membrane
potential evolution involve the presence of an upper threshold. Data are
modeled as discretely observed diffusions which are killed when the threshold
is reached. Statistical inference is often based on the misspecified likelihood
ignoring the presence of the threshold causing severe bias, e.g. the bias
incurred in the drift parameters of the Ornstein-Uhlenbeck model for biological
relevant parameters can be up to 25-100%. We calculate or approximate the
likelihood function of the killed process. When estimating from a single
trajectory, considerable bias may still be present, and the distribution of the
estimates can be heavily skewed and have a large variance. Parametric bootstrap
is effective in correcting the bias. Standard asymptotic results do not apply,
but consistency and asymptotic normality may be recovered when multiple
trajectories are observed, if the mean first-passage time through the threshold
is finite. Numerical examples illustrate the results, and an experimental data
set of intracellular recordings of the membrane potential of a motoneuron is
analyzed.
Comment: 29 pages, 5 figures
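The bias mechanism can be reproduced in a few lines: simulate an Ornstein-Uhlenbeck membrane potential killed at a threshold and fit it with the standard (unkilled) OU likelihood, here via its equivalent AR(1) regression. All constants below are illustrative, not taken from the paper.

```python
# Sketch of the bias source: an Ornstein-Uhlenbeck membrane potential killed at
# a threshold, estimated with the standard OU likelihood that ignores the killing.
import numpy as np

rng = np.random.default_rng(1)
mu, tau, sigma = 1.2, 1.0, 0.5          # true drift mean, time constant, noise
V_thresh, dt, n_max = 1.0, 0.001, 200_000

def simulate_killed_ou():
    """Euler-Maruyama path from V=0, stopped at the first threshold crossing."""
    V = [0.0]
    for _ in range(n_max):
        v_next = V[-1] + (mu - V[-1]) / tau * dt + sigma * np.sqrt(dt) * rng.normal()
        if v_next >= V_thresh:
            break
        V.append(v_next)
    return np.array(V)

V = simulate_killed_ou()
# Naive (misspecified) estimator: AR(1) regression V_{k+1} ~ a + b V_k is exact
# for an unkilled OU but biased here, because the observed subthreshold samples
# are conditioned on never having crossed the boundary.
x, y = V[:-1], V[1:]
b, a = np.polyfit(x, y, 1)
tau_hat = -dt / np.log(b)
mu_hat = a / (1.0 - b)
print(f"naive estimates: mu={mu_hat:.3f} (true {mu}), tau={tau_hat:.3f} (true {tau})")
```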
Dynamics and spike train statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses
We investigate the effect of electric synapses (gap junctions) on collective
neuronal dynamics and spike statistics in a conductance-based
Integrate-and-Fire neural network driven by Brownian noise, where
conductances depend upon spike history. We compute explicitly the time
evolution operator and show that, given the spike-history of the network and
the membrane potentials at a given time, the further dynamical evolution can be
written in a closed form. We show that spike train statistics is described by a
Gibbs distribution whose potential can be approximated with an explicit
formula when the noise is weak. This potential form encompasses existing
models for spike train statistics analysis, such as maximum entropy models or
Generalized Linear Models (GLM). We also discuss the different types of
correlations: those induced by a shared stimulus and those induced by neuron
interactions.
Comment: 42 pages, 1 figure, submitted
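A toy version of the electric coupling is sketched below: two leaky Integrate-and-Fire neurons joined by a gap-junction current proportional to their voltage difference and driven by Brownian noise. This omits the chemical synapses and spike-history-dependent conductances of the full model; every constant is illustrative.

```python
# Minimal sketch of two leaky Integrate-and-Fire neurons coupled by a gap
# junction (electric synapse) and driven by Brownian noise. Toy discretization,
# not the paper's full conductance-based model.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.001, 5.0
g_leak, g_gap, sigma = 1.0, 0.3, 0.4    # leak, gap-junction, noise strengths
V_thresh, V_reset, I_ext = 1.0, 0.0, 1.1

V = np.zeros(2)
spikes = [[], []]
for k in range(int(T / dt)):
    # The gap-junction current is proportional to the instantaneous voltage
    # difference, so it couples the neurons' subthreshold dynamics directly.
    I_gap = g_gap * (V[::-1] - V)
    V += (-g_leak * V + I_ext + I_gap) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
    for i in range(2):
        if V[i] >= V_thresh:
            V[i] = V_reset
            spikes[i].append(k * dt)

print("spike counts:", [len(s) for s in spikes])
```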
Analysis of Nonlinear Noisy Integrate & Fire Neuron Models: blow-up and steady states
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for networks of neurons
can be written as Fokker-Planck-Kolmogorov equations on the probability density
of neurons, the main parameters in the model being the connectivity of the
network and the noise. We analyse several aspects of the NNLIF model: the
number of steady states, a priori estimates, blow-up issues and convergence
toward equilibrium in the linear case. In particular, for excitatory networks,
blow-up always occurs for initial data concentrated close to the firing
potential. These results show how critically the behavior of the network
depends, through the connectivity parameter, on the balance between noise and
excitatory/inhibitory interactions.
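For reference, a commonly used form of the NNLIF system is the following Fokker-Planck equation; the notation (density p, firing rate N(t), connectivity b, noise intensity a, reset potential V_R, firing potential V_F) follows standard conventions in this literature and may differ in detail from the paper's.

```latex
% A common statement of the NNLIF system (conventions assumed, not the paper's
% verbatim): p(v,t) is the density of neurons at potential v, N(t) the firing
% rate, b the connectivity (b > 0 excitatory), a > 0 the noise intensity.
\begin{align}
  \partial_t p(v,t) + \partial_v\!\bigl[(-v + b\,N(t))\,p(v,t)\bigr]
    - a\,\partial_{vv}\, p(v,t) &= \delta(v - V_R)\,N(t), \qquad v \le V_F, \\
  N(t) &= -\,a\,\partial_v p(V_F,t) \;\ge\; 0, \\
  p(V_F,t) &= 0, \qquad p(-\infty,t) = 0.
\end{align}
```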
Deterministic networks for probabilistic computing
Neural-network models of high-level brain functions such as memory recall and
reasoning often rely on the presence of stochasticity. Most of these models
assume that each neuron in the functional network is equipped with its
own private source of randomness, often in the form of uncorrelated external
noise. However, both in vivo and in silico, the number of noise sources is
limited due to space and bandwidth constraints. Hence, neurons in large
networks usually need to share noise sources. Here, we show that the resulting
shared-noise correlations can significantly impair the performance of
stochastic network models. We demonstrate that this problem can be overcome by
using deterministic recurrent neural networks as sources of uncorrelated noise,
exploiting the decorrelating effect of inhibitory feedback. Consequently, even
a single recurrent network of a few hundred neurons can serve as a natural
noise source for large ensembles of functional networks, each comprising
thousands of units. We successfully apply the proposed framework to a diverse
set of binary-unit networks with different dimensionalities and entropies, as
well as to a network reproducing handwritten digits with distinct predefined
frequencies. Finally, we show that the same design transfers to functional
networks of spiking neurons.
Comment: 22 pages, 11 figures
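The decorrelation mechanism can be illustrated with a small asynchronous binary network with purely inhibitory recurrence: its units settle into weakly correlated on/off activity that can stand in for near-independent noise sources. Network size, in-degree, and threshold below are illustrative choices, not the paper's.

```python
# Sketch: an inhibition-dominated recurrent binary network generates weakly
# correlated activity usable as near-independent noise. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
N, K, sweeps = 200, 20, 2000            # units, inhibitory in-degree, update sweeps
w, theta = -0.5, -2.5                   # inhibitory weight; a unit fires if input > theta

conn = np.zeros((N, N))                 # random inhibitory connectivity, no autapses
for i in range(N):
    targets = rng.choice(np.delete(np.arange(N), i), size=K, replace=False)
    conn[i, targets] = w

s = rng.integers(0, 2, size=N).astype(float)
states = np.empty((sweeps, N))
for t in range(sweeps):
    for _ in range(N):                  # one sweep of asynchronous single-unit updates
        i = rng.integers(N)
        s[i] = float(conn[i] @ s > theta)
    states[t] = s

# Inhibitory feedback keeps pairwise correlations small: that is what makes the
# units usable as a shared pool of (almost) private noise sources.
c = np.corrcoef(states[sweeps // 2:].T)
off_diag = c[~np.eye(N, dtype=bool)]
print(f"mean |pairwise correlation|: {np.nanmean(np.abs(off_diag)):.4f}")
```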
Mean-field description and propagation of chaos in recurrent multipopulation networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons
We derive the mean-field equations arising as the limit of a network of
interacting spiking neurons, as the number of neurons goes to infinity. The
neurons belong to a fixed number of populations and are represented either by
the Hodgkin-Huxley model or by one of its simplified versions, the
FitzHugh-Nagumo model. The synapses between neurons are either electrical or
chemical. The network is assumed to be fully connected. The maximum
conductances vary randomly. Under the condition that all neurons' initial
conditions are drawn independently from the same law, which depends only on the
population they belong to, we prove that a propagation-of-chaos phenomenon
takes place, namely that in the mean-field limit, any finite number of neurons
become independent and, within each population, have the same probability
distribution. This probability distribution is a solution of a set of implicit
equations, either nonlinear stochastic differential equations resembling the
McKean-Vlasov equations, or non-local partial differential equations resembling
the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of
these equations, i.e. the existence and uniqueness of a solution. We also show
the results of some preliminary numerical experiments that indicate that the
mean-field equations are a good representation of the mean activity of a finite
size network, even for modest sizes. These experiments also indicate that the
McKean-Vlasov-Fokker-Planck equations may be a good way to understand the
mean-field dynamics through, e.g., a bifurcation analysis.
Comment: 55 pages, 9 figures
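The finite-size picture behind the mean-field limit can be sketched as a particle system: FitzHugh-Nagumo neurons, all-to-all coupled through the empirical mean voltage, integrated with Euler-Maruyama. A single population and purely electrical coupling are assumed here, and all constants are illustrative.

```python
# Sketch of the finite-N particle system behind the mean-field limit:
# FitzHugh-Nagumo neurons coupled all-to-all through the empirical mean voltage
# (electrical coupling), integrated with Euler-Maruyama. Constants illustrative.
import numpy as np

rng = np.random.default_rng(4)
N, dt, T = 1000, 0.01, 50.0
a_fn, b_fn, eps = 0.7, 0.8, 0.08        # FitzHugh-Nagumo recovery parameters
J, sigma, I_ext = 0.5, 0.3, 0.5         # coupling strength, noise, external drive

# i.i.d. initial conditions drawn from one law, as the chaos hypothesis requires.
v = rng.normal(0.0, 0.5, size=N)
w = rng.normal(0.0, 0.5, size=N)
for _ in range(int(T / dt)):
    v_bar = v.mean()                    # empirical mean field; -> E[v] as N -> infinity
    dv = (v - v**3 / 3.0 - w + I_ext + J * (v_bar - v)) * dt \
         + sigma * np.sqrt(dt) * rng.normal(size=N)
    dw = eps * (v + a_fn - b_fn * w) * dt
    v, w = v + dv, w + dw

# In the mean-field limit each neuron follows the same McKean-Vlasov law;
# the population summary below approximates that law's mean and spread.
print(f"mean voltage: {v.mean():.3f}, across-neuron std: {v.std():.3f}")
```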