Channel noise induced stochastic facilitation in an auditory brainstem neuron model
Neuronal membrane potentials fluctuate stochastically due to conductance
changes caused by random transitions between the open and closed states of ion
channels. Although it has previously been shown that channel noise can
nontrivially affect neuronal dynamics, it is unknown whether ion-channel noise
is strong enough to act as a noise source for hypothesised noise-enhanced
information processing in real neuronal systems, i.e. 'stochastic
facilitation.' Here, we demonstrate that biophysical models of channel noise
can give rise to two kinds of recently discovered stochastic facilitation
effects in a Hodgkin-Huxley-like model of auditory brainstem neurons. The
first, known as slope-based stochastic resonance (SBSR), enables phasic neurons
to emit action potentials that can encode the slope of inputs that vary slowly
relative to key time-constants in the model. The second, known as inverse
stochastic resonance (ISR), occurs in tonically firing neurons when small
levels of noise inhibit tonic firing and replace it with burst-like dynamics.
Consistent with previous work, we conclude that channel noise can provide
significant variability in firing dynamics, even for large numbers of channels.
Moreover, our results show that possible associated computational benefits may
occur due to channel noise in neurons of the auditory brainstem. This holds
whether the firing dynamics in the model are phasic (SBSR can occur due to
channel noise) or tonic (ISR can occur due to channel noise).
Comment: Published in Physical Review E, November 2013 (this version 17 pages
total: 10 text, 1 refs, 6 figures/tables); associated MATLAB code is
available online in the ModelDB repository at
http://senselab.med.yale.edu/ModelDB/ShowModel.asp?model=15148
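The mechanism the abstract describes, conductance fluctuations driven by random channel gating whose relative size shrinks with the number of channels, can be sketched with a toy two-state channel population. All rates and counts below are illustrative choices, not parameters from the paper's model:

```python
import numpy as np

def simulate_open_fraction(n_channels, alpha=0.1, beta=0.05,
                           dt=0.01, steps=20000, seed=0):
    """Track the open fraction of a population of two-state
    (open/closed) channels with first-order gating kinetics."""
    rng = np.random.default_rng(seed)
    n_open = n_channels // 2
    trace = np.empty(steps)
    for t in range(steps):
        # each closed channel opens with prob. alpha*dt, each open
        # channel closes with prob. beta*dt, drawn binomially
        opening = rng.binomial(n_channels - n_open, alpha * dt)
        closing = rng.binomial(n_open, beta * dt)
        n_open += opening - closing
        trace[t] = n_open / n_channels
    return trace

small = simulate_open_fraction(100)
large = simulate_open_fraction(10000)
```

Both traces settle around the same mean open fraction, alpha/(alpha+beta) = 2/3, but the fluctuations around it shrink roughly as 1/sqrt(n_channels), so whether channel noise matters depends on how strongly the neuron's dynamics amplify it.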
Mean-field description of, and propagation of chaos in, recurrent multipopulation networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons
We derive the mean-field equations arising as the limit of a network of
interacting spiking neurons, as the number of neurons goes to infinity. The
neurons belong to a fixed number of populations and are represented either by
the Hodgkin-Huxley model or by one of its simplified versions, the
FitzHugh-Nagumo model. The synapses between neurons are either electrical or
chemical. The network is assumed to be fully connected. The maximum
conductances vary randomly. Under the condition that all neurons' initial
conditions are drawn independently from the same law that depends only on the
population they belong to, we prove that a propagation of chaos phenomenon
takes place, namely that in the mean-field limit, any finite number of neurons
become independent and, within each population, have the same probability
distribution. This probability distribution is the solution of a set of implicit
equations, either nonlinear stochastic differential equations resembling the
McKean-Vlasov equations, or non-local partial differential equations resembling
the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of
these equations, i.e. the existence and uniqueness of a solution. We also show
the results of some preliminary numerical experiments that indicate that the
mean-field equations are a good representation of the mean activity of a finite
size network, even for modest sizes. These experiments also indicate that the
McKean-Vlasov-Fokker-Planck equations may be a good way to understand the
mean-field dynamics through, e.g., a bifurcation analysis.
Comment: 55 pages, 9 figures
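The finite networks whose limit is analysed above can be sketched as a particle system. Below is a generic Euler-Maruyama simulation of a fully connected, single-population FitzHugh-Nagumo network with electrical (gap-junction) coupling to the population mean; all parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def fhn_network_mean(n_neurons, t_end=50.0, dt=0.01, coupling=0.5,
                     sigma=0.1, seed=1):
    """Euler-Maruyama simulation of a fully connected FitzHugh-Nagumo
    network; returns the time course of the population-mean voltage."""
    rng = np.random.default_rng(seed)
    a, b, eps, I = 0.7, 0.8, 0.08, 0.0      # excitable (resting) regime
    v = rng.normal(-1.2, 0.1, n_neurons)    # start near the fixed point
    w = rng.normal(-0.6, 0.1, n_neurons)
    steps = int(t_end / dt)
    mean_v = np.empty(steps)
    for t in range(steps):
        vbar = v.mean()
        # electrical coupling pulls each neuron toward the mean field
        dv = v - v**3 / 3 - w + I + coupling * (vbar - v)
        dw = eps * (v + a - b * w)
        v = v + dt * dv + sigma * np.sqrt(dt) * rng.normal(size=n_neurons)
        w = w + dt * dw
        mean_v[t] = vbar
    return mean_v

m_small = fhn_network_mean(100)
m_large = fhn_network_mean(800)
```

Consistent with the mean-field picture, the empirical mean trajectory depends only weakly on network size once the network is moderately large; finite-size fluctuations around the limit shrink like 1/sqrt(N).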
Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model
We study the stochastic FitzHugh-Nagumo equations, modelling the dynamics of
neuronal action potentials, in parameter regimes characterised by mixed-mode
oscillations. The interspike time interval is related to the random number of
small-amplitude oscillations separating consecutive spikes. We prove that this
number has an asymptotically geometric distribution, whose parameter is related
to the principal eigenvalue of a substochastic Markov chain. We provide
rigorous bounds on this eigenvalue in the small-noise regime, and derive an
approximation of its dependence on the system's parameters for a large range of
noise intensities. This yields a precise description of the probability
distribution of observed mixed-mode patterns and interspike intervals.
Comment: 36 pages
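The shape of the result, a count of small-amplitude loops that is asymptotically geometric with parameter given by the principal eigenvalue of a substochastic Markov chain, can be illustrated with a toy two-state substochastic matrix (the entries below are arbitrary illustrative numbers, not derived from the FitzHugh-Nagumo model):

```python
import numpy as np

# Toy substochastic chain: each row sums to less than one, and the
# missing mass is the probability of escaping (firing a spike) on
# that pass through the small-oscillation region.
P = np.array([[0.6, 0.3],
              [0.2, 0.5]])
lam = np.max(np.abs(np.linalg.eigvals(P)))   # principal eigenvalue (0.8)

def loops_before_escape(rng):
    """Count small-amplitude oscillations until the chain escapes."""
    state, n = 0, 0
    while True:
        u = rng.random()
        if u < P[state, 0]:
            state = 0
        elif u < P[state, 0] + P[state, 1]:
            state = 1
        else:
            return n                         # escape: the spike fires
        n += 1

rng = np.random.default_rng(0)
counts = np.array([loops_before_escape(rng) for _ in range(20000)])
# asymptotically geometric tail: P(N >= n+1) / P(N >= n) -> lam
tail_ratio = (counts >= 6).mean() / (counts >= 5).mean()
```

The empirical tail ratio matches the principal eigenvalue closely even for modest loop counts, because the subdominant eigenvalue decays much faster.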
How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation
This paper addresses two questions in the context of neuronal networks
dynamics, using methods from dynamical systems theory and statistical physics:
(i) how to characterize the statistical properties of sequences of action
potentials ("spike trains") produced by neuronal networks? and (ii) what are
the effects of synaptic plasticity on these statistics? We introduce a
framework in which spike trains are associated to a coding of membrane
potential trajectories, and actually constitute a symbolic coding in important
explicit examples (the so-called gIF models). On this basis, we use the
thermodynamic formalism from ergodic theory to show how Gibbs distributions are
natural probability measures to describe the statistics of spike trains, given
the empirical averages of prescribed quantities. As a second result, we show
that Gibbs distributions naturally arise when considering "slow" synaptic
plasticity rules, where the characteristic time for synapse adaptation is much
longer than the characteristic time for the neurons' dynamics.
Comment: 39 pages, 3 figures
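The first result, Gibbs distributions as the natural measures matching prescribed empirical averages, is a maximum-entropy statement. As a minimal, memoryless illustration (the paper's Gibbs measures live on spike trains with temporal dependence, which this toy ignores), one can fit fields h so that the Gibbs distribution p(σ) ∝ exp(h·σ) over binary spike patterns reproduces given firing rates; the target rates below are made up:

```python
import numpy as np
from itertools import product

targets = np.array([0.2, 0.5, 0.7])    # prescribed empirical firing rates
patterns = np.array(list(product([0, 1], repeat=3)), dtype=float)

h = np.zeros(3)                        # fields of the Gibbs potential
for _ in range(2000):
    logp = patterns @ h
    p = np.exp(logp - logp.max())
    p /= p.sum()                       # Gibbs distribution p ∝ exp(h·σ)
    h += 0.5 * (targets - p @ patterns)  # ascent on the max-entropy dual

logp = patterns @ h
p = np.exp(logp - logp.max())
p /= p.sum()
model_means = p @ patterns             # fitted firing rates
```

The fitted distribution reproduces the prescribed averages; with pairwise or history-dependent constraints added to the potential, the same fitting scheme yields richer Gibbs measures of the kind the paper considers.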
Measuring edge importance: a quantitative analysis of the stochastic shielding approximation for random processes on graphs
Mathematical models of cellular physiological mechanisms often involve random
walks on graphs representing transitions within networks of functional states.
Schmandt and Galán recently introduced a novel stochastic shielding
approximation as a fast, accurate method for generating approximate sample
paths from a finite state Markov process in which only a subset of states are
observable. For example, in ion channel models, such as the Hodgkin-Huxley or
other conductance-based neural models, a nerve cell has a population of ion
channels whose states comprise the nodes of a graph, only some of which allow a
transmembrane current to pass. The stochastic shielding approximation consists
of neglecting fluctuations in the dynamics associated with edges in the graph
not directly affecting the observable states. We consider the problem of
finding the optimal complexity-reducing mapping from a stochastic process on a
graph to an approximate process on a smaller sample space, as determined by the
choice of a particular linear measurement functional on the graph. The
partitioning of ion channel states into conducting versus nonconducting states
provides a case in point. In addition to establishing that Schmandt and
Galán's approximation is in fact optimal in a specific sense, we use recent
results from random matrix theory to provide heuristic error estimates for the
accuracy of the stochastic shielding approximation for an ensemble of random
graphs. Moreover, we provide a novel quantitative measure of the contribution
of individual transitions within the reaction graph to the accuracy of the
approximate process.
Comment: Added one reference, typos corrected in Equation 6 and Appendix C,
added the assumption that the graph is irreducible to the main theorem
(results unchanged)
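The stochastic shielding idea can be sketched on the smallest interesting example: a three-state channel chain C1 ↔ C2 ↔ O in which only O conducts. The code below uses a linear-noise (Langevin) approximation with per-edge noise terms and fixed stationary fluxes; the rates and channel count are illustrative, not taken from the paper:

```python
import numpy as np

def three_state_open_fraction(shield, n_channels=1000, t_end=1000.0,
                              dt=0.01, seed=0):
    """Linear-noise simulation of a C1 <-> C2 <-> O channel chain in
    which only the open state O is observable.  With shield=True the
    noise on the hidden C1 <-> C2 edge is dropped (stochastic
    shielding); the deterministic drift is kept in full either way."""
    rng = np.random.default_rng(seed)
    k12 = k21 = 1.0                      # hidden-edge rates
    k23 = k32 = 0.5                      # observable-edge rates
    x = np.array([1/3, 1/3, 1/3])        # stationary state fractions
    # per-edge noise amplitudes from the stationary fluxes
    s_a = np.sqrt((k12 + k21) / 3 / n_channels)   # C1 <-> C2
    s_b = np.sqrt((k23 + k32) / 3 / n_channels)   # C2 <-> O
    steps = int(t_end / dt)
    open_frac = np.empty(steps)
    for t in range(steps):
        drift = np.array([
            k21 * x[1] - k12 * x[0],
            k12 * x[0] - k21 * x[1] + k32 * x[2] - k23 * x[1],
            k23 * x[1] - k32 * x[2]])
        xi_a = 0.0 if shield else rng.normal()    # hidden-edge noise
        xi_b = rng.normal()                       # observable-edge noise
        x = x + dt * drift + np.sqrt(dt) * np.array(
            [-xi_a * s_a, xi_a * s_a - xi_b * s_b, xi_b * s_b])
        open_frac[t] = x[2]
    return open_frac

full = three_state_open_fraction(shield=False, seed=1)
shielded = three_state_open_fraction(shield=True, seed=2)
```

Dropping the hidden-edge noise leaves the mean open fraction unchanged and barely changes its variance, because the hidden edge's fluctuations reach the observable state only after being filtered through the deterministic relaxation; that is the sense in which the approximation is accurate.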
Sequential Neural Posterior and Likelihood Approximation
We introduce the sequential neural posterior and likelihood approximation
(SNPLA) algorithm. SNPLA is a normalizing flows-based algorithm for inference
in implicit models, and therefore is a simulation-based inference method that
only requires simulations from a generative model. SNPLA avoids Markov chain
Monte Carlo sampling and the correction steps for the parameter proposal
function that are introduced in similar methods but can be numerically unstable
or restrictive. By utilizing the reverse KL divergence, SNPLA manages to learn
both the likelihood and the posterior in a sequential manner. Over four
experiments, we show that SNPLA performs competitively when utilizing the same
number of model simulations as used in other methods, even though the inference
problem for SNPLA is more complex due to the joint learning of posterior and
likelihood function. Because it uses normalizing flows, SNPLA generates
posterior draws much faster (by four orders of magnitude) than MCMC-based
methods.
Comment: 28 pages, 8 tables, 14 figures. The supplementary material is
attached to the main paper.
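SNPLA trains normalizing flows on simulator output, which is too much machinery for a snippet, but its reverse-KL ingredient can be shown in stripped-down form. Here the "flow" is a single affine map z = m + s·ε (so q = N(m, s²)) and the target posterior is a known 1-D Gaussian, making the reverse KL and its gradients available in closed form; every quantity below is an illustrative stand-in:

```python
mu_star, sigma_star = 2.0, 0.5   # toy target posterior N(mu*, sigma*^2)
m, s = 0.0, 1.0                  # affine-"flow" parameters: z = m + s*eps
lr = 0.05

for _ in range(2000):
    # closed-form gradients of KL(q || p) for two 1-D Gaussians:
    # KL = log(sigma*/s) + (s^2 + (m - mu*)^2) / (2 sigma*^2) - 1/2
    grad_m = (m - mu_star) / sigma_star**2
    grad_s = -1.0 / s + s / sigma_star**2
    m -= lr * grad_m
    s -= lr * grad_s
```

Minimizing the reverse KL drives q onto the target (m → μ*, s → σ*); SNPLA does the analogous thing with a flow q_φ and a learned likelihood standing in for the unnormalized target.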