Fast Inference of Interactions in Assemblies of Stochastic Integrate-and-Fire Neurons from Spike Recordings
We present two Bayesian procedures to infer the interactions and external
currents in an assembly of stochastic integrate-and-fire neurons from the
recording of their spiking activity. The first procedure is based on the exact
calculation of the most likely time courses of the neuron membrane potentials
conditioned by the recorded spikes, and is exact for a vanishing noise variance
and for an instantaneous synaptic integration. The second procedure takes into
account the presence of fluctuations around the most likely time courses of the
potentials, and can deal with moderate noise levels. The running time of both
procedures is proportional to the number S of spikes multiplied by the squared
number N of neurons. The algorithms are validated on synthetic data generated
by networks with known couplings and currents. We also reanalyze previously
published recordings of salamander retina activity (32 to 40 neurons and
65,000 to 170,000 spikes). We study the dependence of the inferred
interactions on the membrane leak time; the differences and similarities
with the classical cross-correlation analysis are discussed. Comment: Accepted for publication in J. Comput. Neurosci. (Dec 2010).
Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance
The subthreshold membrane voltage of a neuron in active cortical tissue is
a fluctuating quantity with a distribution that reflects the firing statistics
of the presynaptic population. It was recently found that conductance-based
synaptic drive can lead to distributions with a significant skew.
Here it is demonstrated that the underlying shot noise caused by Poissonian
spike arrival also skews the membrane distribution, but in the opposite
sense. Using a perturbative method, we analyze the effects of shot
noise on the distribution of synaptic conductances and calculate the consequent
voltage distribution. To first order in the perturbation theory, the
voltage distribution is a Gaussian modulated by a prefactor that captures
the skew. The Gaussian component is identical to distributions derived
using current-based models with an effective membrane time constant.
The well-known effective-time-constant approximation can therefore be
identified as the leading-order solution to the full conductance-based
model. The higher-order modulatory prefactor containing the skew comprises
terms due to both shot noise and conductance fluctuations. The
diffusion approximation misses these shot-noise effects, implying that
analytical approaches such as the Fokker-Planck equation, or simulation
with filtered white noise, cannot be used to improve on the Gaussian approximation.
It is further demonstrated that quantities used for fitting
theory to experiment, such as the voltage mean and variance, are robust
against these non-Gaussian effects. The effective-time-constant approximation
is therefore relevant to experiment and provides a simple analytic
base on which other pertinent biological details may be added.
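The leading-order statistics described above can be computed in a few lines. The sketch below evaluates the effective time constant and the Gaussian mean voltage for a passive membrane under mean excitatory and inhibitory conductances; all parameter values are illustrative choices of ours, not taken from the paper:

```python
# Leading-order (Gaussian) voltage statistics under the
# effective-time-constant approximation.
C    = 1.0    # membrane capacitance (nF), illustrative
g_L  = 0.05   # leak conductance (uS)
g_e0 = 0.01   # mean excitatory conductance (uS)
g_i0 = 0.06   # mean inhibitory conductance (uS)
E_L, E_e, E_i = -80.0, 0.0, -75.0   # reversal potentials (mV)

g_tot   = g_L + g_e0 + g_i0
tau_eff = C / g_tot          # effective time constant: shorter than...
tau_m   = C / g_L            # ...the passive membrane time constant
# Mean of the Gaussian component: conductance-weighted reversal average.
v_mean  = (g_L * E_L + g_e0 * E_e + g_i0 * E_i) / g_tot
```

The shortening of the time constant by synaptic conductance (here from 20 ms to about 8.3 ms) is the defining feature of the approximation; the shot-noise and conductance-fluctuation corrections discussed above modulate this Gaussian at higher order.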
Revealing networks from dynamics: an introduction
What can we learn from the collective dynamics of a complex network about its
interaction topology? Taking the perspective from nonlinear dynamics, we
briefly review recent progress on how to infer structural connectivity (direct
interactions) from accessing the dynamics of the units. Potential applications
range from interaction networks in physics, to chemical and metabolic
reactions, protein and gene regulatory networks as well as neural circuits in
biology and electric power grids or wireless sensor networks in engineering.
Moreover, we briefly mention some standard ways of inferring effective or
functional connectivity. Comment: Topical review, 48 pages, 7 figures.
Training deep neural density estimators to identify mechanistic models of neural dynamics
Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
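The simulate-then-infer loop underlying this approach can be illustrated with a much simpler stand-in. The paper trains neural density estimators on simulations; the toy below replaces that with plain rejection sampling against a hypothetical Poisson simulator (our invention, for illustration only), keeping only the shared idea that parameters are accepted when their simulated data resemble the observation:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(rate, n=200):
    """Hypothetical simulator: mean of n Poisson spike counts at a
    given rate (a one-number summary statistic)."""
    return rng.poisson(rate, size=n).mean()

true_rate = 5.0
x_obs = simulate(true_rate)          # "recorded" data summary

# Draw parameters from a broad prior; keep those whose simulated
# summary lands close to the observation.
prior = rng.uniform(0.0, 20.0, size=5000)
accepted = np.array([r for r in prior if abs(simulate(r) - x_obs) < 0.3])
posterior_mean = accepted.mean()     # concentrates near true_rate
```

Neural density estimators replace the hard accept/reject threshold with a learned conditional density over parameters, which is what makes the method scale to many parameters and data features.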
Spike train statistics and Gibbs distributions
This paper is based on a lecture given in the LACONEU summer school,
Valparaiso, January 2012. We introduce Gibbs distributions in a general
setting, including non-stationary dynamics, and then present three examples
of such Gibbs distributions in the context of neural network spike train
statistics: (i) maximum entropy models with spatio-temporal constraints;
(ii) Generalized Linear Models; (iii) conductance-based Integrate-and-Fire
models with chemical synapses and gap junctions. Comment: 23 pages, submitted.
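For small networks, a Gibbs distribution of the maximum-entropy type can be written out exactly by enumerating all spike words. The sketch below does this for a pairwise model with illustrative fields and couplings of our own choosing (the lecture's setting is more general, covering non-stationary and spatio-temporal cases):

```python
import itertools
import numpy as np

# Pairwise maximum-entropy (Gibbs) distribution over binary spike
# words s in {0,1}^N, with fields h_i and couplings J_ij.
N = 3
h = np.array([-1.0, -1.0, -1.0])               # biases toward silence
J = np.zeros((N, N))
J[0, 1] = J[1, 0] = 2.0                        # neurons 0 and 1 coupled

def energy(s):
    return -(h @ s) - 0.5 * s @ J @ s

words = [np.array(w) for w in itertools.product([0, 1], repeat=N)]
weights = np.array([np.exp(-energy(w)) for w in words])
Z = weights.sum()        # partition function (tractable only for small N)
p = weights / Z          # Gibbs probabilities, summing to one
```

With the positive coupling, the joint word (1,1,0) is more probable than (1,0,0): correlations beyond independent firing are exactly what the pairwise terms encode.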
Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method
Understanding the dynamics of neural networks is a major challenge in
experimental neuroscience. For that purpose, a modelling of the recorded
activity that reproduces the main statistics of the data is required. In a
first part, we present a review on recent results dealing with spike train
statistics analysis using maximum entropy models (MaxEnt). Most of these
studies have focused on modelling synchronous spike patterns, leaving
aside the temporal dynamics of the neural activity. However, the maximum
entropy principle can be generalized to the temporal case, leading to Markovian
models where memory effects and time correlations in the dynamics are properly
taken into account. In a second part, we present a new method based on
Monte-Carlo sampling which is suited for the fitting of large-scale
spatio-temporal MaxEnt models. The formalism and the tools presented here will
be essential to fit MaxEnt spatio-temporal models to large neural ensembles. Comment: 41 pages, 10 figures.
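The core of Monte-Carlo fitting is sampling spike patterns from a MaxEnt model whose partition function is intractable. The sketch below runs a Metropolis chain over a purely spatial pairwise model with parameters we invented for illustration; the paper's contribution is extending such sampling to spatio-temporal (Markovian) MaxEnt models at scale:

```python
import numpy as np

# Metropolis sampling from a pairwise MaxEnt model: single-bit flips
# accepted with probability min(1, exp(-dE)).
rng = np.random.default_rng(2)
N = 20
h = np.full(N, -2.0)                    # fields favouring silence
J = 0.1 * (rng.random((N, N)) < 0.1)    # sparse, weak couplings
J = np.triu(J, 1)
J = J + J.T                             # symmetric, zero diagonal

def energy(s):
    return -(h @ s) - 0.5 * s @ J @ s

s = rng.integers(0, 2, N)
samples = []
for step in range(20000):
    i = rng.integers(N)
    s_new = s.copy()
    s_new[i] = 1 - s_new[i]             # propose a single-neuron flip
    dE = energy(s_new) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-dE):
        s = s_new
    if step >= 5000 and step % 10 == 0: # burn-in, then thinned samples
        samples.append(s.copy())

rates = np.mean(samples, axis=0)        # model firing probabilities
```

Model statistics estimated this way (rates, correlations) are then compared to the empirical ones, and the parameters h and J are adjusted until they match.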
Linear response for spiking neuronal networks with unbounded memory
We establish a general linear response relation for spiking neuronal
networks, based on chains with unbounded memory. This relation allows us to
predict the influence of a weak-amplitude, time-dependent external stimulus on
spatio-temporal spike correlations, from the spontaneous statistics (without
stimulus) in a general context where the memory in spike dynamics can extend
arbitrarily far in the past. Using this approach, we show how linear response
is explicitly related to neuronal dynamics with an example, the gIF model,
introduced by M. Rudolph and A. Destexhe. This example illustrates the
collective effect of the stimuli, intrinsic neuronal dynamics, and network
connectivity on spike statistics. We illustrate our results with numerical
simulations. Comment: 60 pages, 8 figures.
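The structure of a linear response prediction can be shown in miniature: to first order, the change in a spike statistic under a weak stimulus s(t) is a convolution of s with a response kernel determined by the spontaneous statistics. The exponential kernel and all values below are our illustrative assumptions, not the gIF-model kernel derived in the paper:

```python
import numpy as np

# First-order (linear response) rate prediction: r(t) = r0 + (K * s)(t).
dt = 1.0                                    # time step (ms)
t = np.arange(0, 200, dt)
K = 0.05 * np.exp(-t / 20.0)                # assumed response kernel
s = 0.1 * np.sin(2 * np.pi * t / 50.0)      # weak periodic stimulus

r0 = 10.0                                   # spontaneous rate (Hz)
delta_r = np.convolve(s, K)[:len(t)] * dt   # linear-response correction
r = r0 + delta_r                            # predicted stimulated rate
```

The paper's point is that the analogue of K exists even when the spike dynamics carry unbounded memory, and that it links the stimulus response explicitly to the neuronal dynamics and connectivity.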
Characterizing synaptic conductance fluctuations in cortical neurons and their influence on spike generation
Cortical neurons are subject to sustained and irregular synaptic activity
which causes important fluctuations of the membrane potential (Vm). We review
here different methods to characterize this activity and its impact on spike
generation. The simplified, fluctuating point-conductance model of synaptic
activity provides the starting point of a variety of methods for the analysis
of intracellular Vm recordings. In this model, the synaptic excitatory and
inhibitory conductances are described by Gaussian-distributed stochastic
variables, or colored conductance noise. The matching of experimentally
recorded Vm distributions to an invertible theoretical expression derived from
the model allows the extraction of parameters characterizing the synaptic
conductance distributions. This analysis can be complemented by the matching of
experimental Vm power spectral densities (PSDs) to a theoretical template, even
though the unexpected scaling properties of experimental PSDs limit the
precision of this latter approach. Building on this stochastic characterization
of synaptic activity, we also propose methods to qualitatively and
quantitatively evaluate spike-triggered averages of synaptic time-courses
preceding spikes. This analysis points to an essential role for synaptic
conductance variance in determining spike times. The presented methods are
evaluated using controlled conductance injection in cortical neurons in vitro
with the dynamic-clamp technique. We review their applications to the analysis
of in vivo intracellular recordings in cat association cortex, which suggest a
predominant role for inhibition in determining both sub- and supra-threshold
dynamics of cortical neurons embedded in active networks. Comment: 9 figures, Journal of Neuroscience Methods (in press, 2008).
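The point-conductance picture reviewed here can be simulated directly: excitatory and inhibitory conductances follow Ornstein-Uhlenbeck processes and drive a passive membrane. The Euler-Maruyama sketch below uses parameter values that are illustrative placeholders in the spirit of the model, not fitted values from the recordings:

```python
import numpy as np

# Fluctuating point-conductance model: OU conductances g_e, g_i
# driving a passive membrane (no spike mechanism in this sketch).
rng = np.random.default_rng(3)
dt = 0.05                                   # time step (ms)
T = int(2000 / dt)                          # 2 s of simulated time

C, g_L, E_L = 1.0, 0.05, -80.0              # membrane (nF, uS, mV)
E_e, E_i = 0.0, -75.0                       # synaptic reversals (mV)

# OU parameters: mean, SD, correlation time (illustrative)
ge0, se, te = 0.012, 0.003, 2.7
gi0, si, ti = 0.057, 0.0066, 10.5

v, ge, gi = E_L, ge0, gi0
vs = np.empty(T)
for k in range(T):
    # OU updates: relaxation to the mean plus scaled white noise
    ge += dt * (ge0 - ge) / te + se * np.sqrt(2 * dt / te) * rng.standard_normal()
    gi += dt * (gi0 - gi) / ti + si * np.sqrt(2 * dt / ti) * rng.standard_normal()
    # Passive membrane driven by leak and the two conductances
    dv = (-g_L * (v - E_L) - max(ge, 0) * (v - E_e)
          - max(gi, 0) * (v - E_i)) / C
    v += dt * dv
    vs[k] = v

v_mean = vs[2000:].mean()                   # discard 100 ms transient
v_std = vs[2000:].std()
```

Matching the resulting Vm distribution (mean, variance, and its deviations from Gaussianity) to recorded traces is the basis of the parameter-extraction methods described in the abstract.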