Extracting non-linear integrate-and-fire models from experimental data using dynamic I–V curves
The dynamic I–V curve method was recently introduced for the efficient experimental generation of reduced neuron models. The method extracts the response properties of a neuron while it is subject to a naturalistic stimulus that mimics in vivo-like fluctuating synaptic drive. The resulting history-dependent transmembrane current is then projected onto a one-dimensional current–voltage relation that provides the basis for a tractable non-linear integrate-and-fire model. An attractive feature of the method is that it can be used in spike-triggered mode to quantify the distinct patterns of post-spike refractoriness seen in different classes of cortical neuron. The method is first illustrated using a conductance-based model and is then applied experimentally to generate reduced models of cortical layer-5 pyramidal cells and interneurons, in injected-current and injected-conductance protocols. The resulting low-dimensional neuron models, of the refractory exponential integrate-and-fire type, provide highly accurate predictions for spike times. The method therefore provides a useful tool for the construction of tractable models and rapid experimental classification of cortical neurons.
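The reduced model class the abstract arrives at can be illustrated with a minimal (non-refractory) exponential integrate-and-fire simulation under an in vivo-like fluctuating drive. This is a sketch only: the parameter values and the Ornstein-Uhlenbeck-filtered input current are illustrative assumptions, not quantities extracted from the paper's data.

```python
import numpy as np

def simulate_eif(I, dt=0.1, tau=20.0, E_L=-70.0, V_T=-50.0, delta_T=2.0,
                 V_reset=-65.0, V_spike=-30.0, R=100.0):
    """Euler integration of tau*dV/dt = -(V-E_L) + delta_T*exp((V-V_T)/delta_T) + R*I.

    Units (assumed): mV, ms, MOhm, nA. Returns spike times and the voltage trace.
    """
    V = E_L
    spikes = []
    trace = np.empty(len(I))
    for k, Ik in enumerate(I):
        dV = (-(V - E_L) + delta_T * np.exp((V - V_T) / delta_T) + R * Ik) / tau
        V = V + dt * dV
        if V >= V_spike:          # threshold crossing: register spike, then reset
            spikes.append(k * dt)
            V = V_reset
        trace[k] = V
    return np.array(spikes), trace

rng = np.random.default_rng(0)
# fluctuating, in-vivo-like drive: Ornstein-Uhlenbeck-filtered noise (nA)
dt, n, tau_s = 0.1, 20000, 5.0
I = np.empty(n)
I[0] = 0.0
for k in range(1, n):
    I[k] = I[k-1] - dt * I[k-1] / tau_s + np.sqrt(dt) * 0.15 * rng.standard_normal()
I += 0.25  # mean drive keeps the neuron in a fluctuation-plus-drift firing regime

spike_times, V_trace = simulate_eif(I, dt=dt)
```

The predicted spike times of such a fitted model are what the paper compares against the recorded cell.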
Motif Statistics and Spike Correlations in Neuronal Networks
Motifs are patterns of subgraphs of complex networks. We studied the impact
of such patterns of connectivity on the level of correlated, or synchronized,
spiking activity among pairs of cells in a recurrent network model of integrate
and fire neurons. For a range of network architectures, we find that the
pairwise correlation coefficients, averaged across the network, can be closely
approximated using only three statistics of network connectivity. These are the
overall network connection probability and the frequencies of two second-order
motifs: diverging motifs, in which one cell provides input to two others, and
chain motifs, in which two cells are connected via a third intermediary cell.
Specifically, the prevalence of diverging and chain motifs tends to increase
correlation. Our method is based on linear response theory, which enables us to
express spiking statistics using linear algebra, and a resumming technique,
which extrapolates from second order motifs to predict the overall effect of
coupling on network correlation. Our motif-based results seek to isolate the
effect of network architecture perturbatively from a known network state.
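The three connectivity statistics named above can be measured directly on an adjacency matrix. The sketch below does so for an Erdős–Rényi random graph; the convention that W[i, j] = 1 denotes a connection from j to i is our assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p_target = 200, 0.1
W = (rng.random((N, N)) < p_target).astype(float)
np.fill_diagonal(W, 0.0)                   # no self-connections

n_pairs = N * (N - 1)                      # ordered pairs of distinct cells
p_hat = W.sum() / n_pairs                  # overall connection probability

# diverging motif: one source k projects to two distinct targets i and j.
# (W @ W.T)[i, j] counts the common sources of i and j.
common_sources = W @ W.T
q_div = (common_sources.sum() - np.trace(common_sources)) / (n_pairs * (N - 2))

# chain motif: j -> k -> i via an intermediary cell k.
# (W @ W)[i, j] counts the length-2 paths from j to i.
two_paths = W @ W
q_chain = (two_paths.sum() - np.trace(two_paths)) / (n_pairs * (N - 2))
```

For an Erdős–Rényi graph both motif frequencies reduce to p², so deviations of q_div and q_chain from p_hat² quantify exactly the second-order structure the abstract says drives correlations.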
A comparative study of different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation
Stochastic integrate-and-fire (IF) neuron models have found widespread
applications in computational neuroscience. Here we present results on the
white-noise-driven perfect, leaky, and quadratic IF models, focusing on the
spectral statistics (power spectra, cross spectra, and coherence functions) in
different dynamical regimes (noise-induced and tonic firing regimes with low or
moderate noise). We make the models comparable by tuning parameters such that
the mean value and the coefficient of variation of the interspike interval
match for all of them. We find that, under these conditions, the power spectrum
under white-noise stimulation is often very similar while the response
characteristics, described by the cross spectrum between a fraction of the
input noise and the output spike train, can differ drastically. We also
investigate how the spike trains of two neurons of the same kind (e.g. two
leaky IF neurons) correlate if they share a common noise input. We show that,
depending on the dynamical regime, either two quadratic IF models or two leaky
IFs are more strongly correlated. Our results suggest that, when choosing among
simple IF models for network simulations, the details of the model have a
strong effect on correlation and regularity of the output.
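The common-noise setup described above can be sketched with two leaky IF neurons sharing a fraction c of their input noise, measuring the resulting spike-count correlation. All parameter values are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def lif_pair(c=0.5, n=100_000, dt=0.05, tau=10.0, mu=1.2, sigma=0.5,
             v_th=1.0, v_reset=0.0, seed=2):
    """Two LIF neurons driven by noise with common fraction c (dimensionless units)."""
    rng = np.random.default_rng(seed)
    xi_c = rng.standard_normal(n)          # common noise
    xi_1 = rng.standard_normal(n)          # private noise, neuron 1
    xi_2 = rng.standard_normal(n)          # private noise, neuron 2
    n1 = np.sqrt(c) * xi_c + np.sqrt(1 - c) * xi_1
    n2 = np.sqrt(c) * xi_c + np.sqrt(1 - c) * xi_2
    amp = sigma * np.sqrt(2 * dt / tau)
    v1 = v2 = 0.0
    s1 = np.zeros(n, dtype=bool)
    s2 = np.zeros(n, dtype=bool)
    for k in range(n):
        v1 += dt * (mu - v1) / tau + amp * n1[k]
        v2 += dt * (mu - v2) / tau + amp * n2[k]
        if v1 >= v_th:
            v1, s1[k] = v_reset, True
        if v2 >= v_th:
            v2, s2[k] = v_reset, True
    return s1, s2

s1, s2 = lif_pair()
# spike-count correlation coefficient in 50 ms windows
win = int(50 / 0.05)
c1 = s1[:len(s1) // win * win].reshape(-1, win).sum(1)
c2 = s2[:len(s2) // win * win].reshape(-1, win).sum(1)
rho = np.corrcoef(c1, c2)[0, 1]
```

Swapping the subthreshold dynamics (e.g. a quadratic term in place of the leak) while matching the interspike-interval mean and CV is the comparison the study performs.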
Population density equations for stochastic processes with memory kernels
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. It uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, cleanly separating the deterministic and stochastic processes. The deterministic model and the model for the stochastic process can be varied independently, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by the generalized Montroll–Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. Under the assumption that all inputs are generated by one renewal process, we are able to model the jump responses of both models accurately for both excitatory and inhibitory input.
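The kind of non-Markov input the abstract treats can be illustrated by direct Monte Carlo simulation: a single renewal process with gamma-distributed interspike intervals driving a leaky integrate-and-fire neuron by finite jumps. This is a brute-force reference, not the population density method itself, and every parameter value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# renewal input: gamma(shape=2) ISIs with mean 2 ms -> input rate 0.5 per ms
isis = rng.gamma(shape=2.0, scale=1.0, size=5000)     # ms
cv = isis.std() / isis.mean()                         # gamma(2): CV = 1/sqrt(2)
in_times = np.cumsum(isis)

dt = 0.05
T = in_times[-1]
n_steps = int(T / dt)
jump, tau, v_th = 0.15, 10.0, 1.0                     # dimensionless membrane units

# bin the renewal input spikes onto the simulation grid
in_counts = np.bincount((in_times / dt).astype(int), minlength=n_steps)[:n_steps]

v, out_spikes = 0.0, 0
for k in range(n_steps):
    v += -v * dt / tau + jump * in_counts[k]          # leak plus finite jumps
    if v >= v_th:                                     # fire-and-reset
        v = 0.0
        out_spikes += 1
rate_out = out_spikes / (T / 1000.0)                  # output rate in Hz
```

Replacing the gamma sampler with an exponential one recovers the Poisson (Markov) case that the usual master-equation formulation already covers.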
Numerical Solution of Differential Equations by the Parker-Sochacki Method
A tutorial is presented which demonstrates the theory and usage of the
Parker-Sochacki method of numerically solving systems of differential
equations. Solutions are demonstrated for the case of projectile motion in air,
and for the classical Newtonian N-body problem with mutual gravitational
attraction. (Author's note, July 2010: this tutorial has been available on a
university web site since 1998 and has since been cited in refereed journals;
see "Spiking neural network simulation: numerical integration with the
Parker-Sochacki method", Robert D. Stewart & Wyeth Bair, J. Comput. Neurosci.,
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2717378.)
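The core of the Parker-Sochacki method is to build the Maclaurin series of the solution order by order, using only sums and products of coefficients already found. The toy problem below, y' = y², y(0) = 1, with exact solution y(t) = 1/(1 - t), is our choice for a compact demonstration and is not taken from the tutorial itself.

```python
import numpy as np

def parker_sochacki_y2(y0, order):
    """Maclaurin coefficients of the solution of y' = y^2, y(0) = y0."""
    a = [y0]                                   # a[n] is the nth series coefficient
    for n in range(order):
        # coefficient of t^n in y^2 is the Cauchy product of the series with itself
        cauchy = sum(a[i] * a[n - i] for i in range(n + 1))
        a.append(cauchy / (n + 1))             # from (n+1)*a[n+1] = [t^n] y^2
    return np.array(a)

a = parker_sochacki_y2(1.0, order=30)
t = 0.5
y_series = np.polyval(a[::-1], t)              # evaluate the truncated series at t
y_exact = 1.0 / (1.0 - t)
```

For y(0) = 1 every coefficient equals 1, so the truncation error at t = 0.5 is below 10⁻⁹ already at order 30; the same recurrence-on-coefficients idea extends to the projectile and N-body systems treated in the tutorial.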
A Fokker-Planck formalism for diffusion with finite increments and absorbing boundaries
Gaussian white noise is frequently used to model fluctuations in physical
systems. In Fokker-Planck theory, this leads to a vanishing probability density
near the absorbing boundary of threshold models. Here we derive the boundary
condition for the stationary density of a first-order stochastic differential
equation for additive finite-grained Poisson noise and show that the response
properties of threshold units are qualitatively altered. Applied to the
integrate-and-fire neuron model, the response turns out to be instantaneous
rather than exhibiting low-pass characteristics, highly non-linear, and
asymmetric for excitation and inhibition. The novel mechanism is also
exhibited at the network level and is a generic property of pulse-coupled
systems of threshold units.
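The central claim, that finite increments leave a non-vanishing stationary density at the absorbing threshold, can be checked by simulation. The sketch below drives a leaky integrator with excitatory Poisson shot noise; the exponentially distributed jump amplitudes and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, tau, v_th, v_reset = 0.01, 10.0, 1.0, 0.0   # ms, ms, dimensionless
rate_in = 16.0                                  # input spikes per ms
n_steps = 500_000

v = 0.0
samples = np.empty(n_steps)
n_in = rng.poisson(rate_in * dt, size=n_steps)  # Poisson arrivals per time step
for k in range(n_steps):
    if n_in[k]:
        # finite excitatory jumps, exponentially distributed with mean 0.05
        v += rng.exponential(0.05, size=n_in[k]).sum()
    v += -v * dt / tau                          # leak
    if v >= v_th:                               # absorbing boundary: reset
        v = v_reset
    samples[k] = v

hist, _ = np.histogram(samples, bins=50, range=(0.0, v_th), density=True)
density_at_threshold = hist[-1]                 # density in the bin just below v_th
```

In the Gaussian-white-noise (Fokker-Planck) limit this last bin would empty out; with finite jumps it stays populated, which is what makes the response to inputs effectively instantaneous.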
The location of the axon initial segment affects the bandwidth of spike initiation dynamics
The dynamics and the sharp onset of action potential (AP) generation have recently been the subject of intense experimental and theoretical investigation. According to the resistive coupling theory, an electrotonic interplay between the site of AP initiation in the axon and the somato-dendritic load determines the AP waveform. This phenomenon not only alters the shape of the AP recorded at the soma, but also determines the dynamics of excitability across a variety of time scales. Supporting this statement, here we generalize a previous numerical study and extend it to the quantification of the input-output gain of the neuronal dynamical response. We consider three classes of multicompartmental mathematical models, ranging from ball-and-stick simplified descriptions of neuronal excitability to 3D-reconstructed biophysical models of excitatory neurons of rodent and human cortical tissue. For each model, we demonstrate that increasing the distance between the axonal site of AP initiation and the soma markedly increases the bandwidth of neuronal response properties. We finally consider the Liquid State Machine paradigm, exploring the impact of altering the site of AP initiation at the level of a neuronal population, and demonstrate that an optimal distance exists to boost the computational performance of the network in a simple classification task.
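The resistive coupling intuition can be caricatured with a two-compartment circuit: an axonal initiation site coupled to a large somatic load through an axial resistance R_a that grows with distance from the soma. The sketch below computes the -3 dB bandwidth of the axonal input impedance and shows it rising as the site moves away; this hypothetical circuit and all its parameters are our own assumptions, not the paper's multicompartmental models.

```python
import numpy as np

def axon_bandwidth(R_a, C_s=250e-12, g_s=25e-9, C_a=5e-12, g_a=2e-9):
    """-3 dB cutoff (Hz) of the axonal input impedance |V_a / I_a| (SI units)."""
    f = np.logspace(0, 5, 2000)              # 1 Hz .. 100 kHz
    w = 2j * np.pi * f
    Y_s = g_s + w * C_s                      # somatic admittance (the big load)
    Y_load = 1.0 / (R_a + 1.0 / Y_s)         # soma seen through the axial resistance
    Z = 1.0 / (g_a + w * C_a + Y_load)       # input impedance at the axonal node
    H = np.abs(Z)
    return f[np.argmax(H <= H[0] / np.sqrt(2))]

bw_near = axon_bandwidth(R_a=1e6)    # initiation site close to the soma
bw_far = axon_bandwidth(R_a=50e6)    # site further down the axon
```

With small R_a the axonal node inherits the slow somatic time constant; increasing R_a electrotonically decouples the small, fast axonal compartment from the load, raising its bandwidth, in line with the trend the abstract reports.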
From Spiking Neuron Models to Linear-Nonlinear Models
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons over most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive-timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
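The LN cascade itself is two lines of computation: a convolution followed by a pointwise nonlinearity. The sketch below uses an exponential filter and a threshold-linear nonlinearity as illustrative stand-ins; the paper instead derives both stages analytically, in parameter-free form, from the spiking models.

```python
import numpy as np

dt = 1.0                                   # ms
t = np.arange(0, 2000, dt)
stimulus = np.sin(2 * np.pi * t / 250.0)   # slow sinusoidal input signal

# linear stage: causal exponential filter with a 20 ms timescale, unit area
tau_f = 20.0
t_f = np.arange(0, 200, dt)
D = np.exp(-t_f / tau_f)
D /= D.sum() * dt                          # normalize so the filter passes DC with gain 1

# discrete approximation of the convolution integral (hence the factor dt)
filtered = np.convolve(stimulus, D, mode="full")[:len(stimulus)] * dt

def F(x, r0=20.0, gain=30.0):
    """Static nonlinearity: threshold-linear map onto a firing rate in Hz."""
    return np.maximum(r0 + gain * x, 0.0)

rate = F(filtered)                         # estimated instantaneous firing rate
```

The adaptive-timescale variant described at the end of the abstract would make tau_f itself a function of the instantaneous rate rather than a constant.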
Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process
Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms of spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein–Uhlenbeck process in particular is a popular description of the stochastic fluctuations in the membrane potential of a neuron, but other models, such as the square-root model or models with a non-linear drift, are also applied. Data described by such models have to be stationary, so these simple models can only be applied over short time windows. Experimental data, however, show varying time constants, state-dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state-dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein–Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity to which the neuron is exposed can reasonably be estimated as a thresholded version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
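The model class in question can be sketched with an Euler-Maruyama simulation of a square-root (Cox-Ingersoll-Ross-type) diffusion with Poisson-arriving synaptic jumps: the state-dependent noise term sigma*sqrt(x) vanishes at x = 0, unlike the constant-coefficient Ornstein–Uhlenbeck case. All parameter values are illustrative assumptions, not estimates from the turtle data.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps = 0.01, 100_000                 # ms; 1 s of simulated time
tau, mu, sigma = 10.0, 0.5, 0.2             # relaxation time, baseline, noise scale
jump_rate, jump_size = 0.5, 0.15            # synaptic jumps per ms, jump amplitude

x = np.empty(n_steps)
x[0] = mu
n_jumps = rng.poisson(jump_rate * dt, size=n_steps)
dW = rng.standard_normal(n_steps) * np.sqrt(dt)
for k in range(1, n_steps):
    drift = -(x[k-1] - mu) / tau                          # mean reversion
    diffusion = sigma * np.sqrt(max(x[k-1], 0.0)) * dW[k]  # state-dependent noise
    # clip at zero: the square-root diffusion lives on the non-negative half-line
    x[k] = max(x[k-1] + drift * dt + diffusion + jump_size * n_jumps[k], 0.0)
```

A time-inhomogeneous version, as fitted in the paper, would let mu, sigma and the jump intensity depend on time and on the state; a state-dependent firing intensity (e.g. exponential in x) then turns this into a complete spiking model.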
Representation of Dynamical Stimuli in Populations of Threshold Neurons
Many sensory or cognitive events are associated with dynamic current modulations in cortical neurons. This creates a pressing demand for tractable model approaches addressing the merits and limits of potential encoding strategies. Yet current theoretical approaches addressing the response to mean- and variance-encoded stimuli rarely provide complete response functions for both modes of encoding in the presence of correlated noise. Here we investigate the neuronal population response to dynamical modifications of the mean or variance of the synaptic bombardment using an alternative threshold-model framework. For both the mean and variance channels, we provide explicit expressions for the linear and non-linear frequency response functions in the presence of correlated noise and use them to derive the population rate response to step-like stimuli. For mean-encoded signals, we find that the complete response function depends only on the temporal width of the input correlation function, and not on other functional specifics. Furthermore, we show that both mean- and variance-encoded signals can relay high-frequency inputs, and in both schemes step-like changes can be detected instantaneously. Finally, we obtain the pairwise spike correlation function and the spike-triggered average from the linear mean-evoked response function. These results provide a maximally tractable limiting case that complements and extends previous results obtained in the integrate-and-fire framework.
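The instantaneous-detection property of mean-encoded steps has a simple caricature in a static threshold model: if each unit fires when a Gaussian internal variable exceeds a threshold, the population rate tracks the Gaussian tail probability of the current mean without any lag. This toy construction is our illustration of the idea, not the paper's exact framework.

```python
import numpy as np
from math import erf

def phi(z):
    """Standard normal CDF: P(Z <= z)."""
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

theta, sigma = 1.0, 0.4                    # threshold and noise amplitude (assumed)
t = np.arange(0.0, 100.0, 0.1)             # ms
mu = np.where(t < 50.0, 0.6, 0.9)          # step in the mean input at t = 50 ms

# population firing probability: fraction of units above threshold at each instant
rate = np.array([phi((m - theta) / sigma) for m in mu])

# the step is reflected in the rate immediately, with no filtering delay
r_pre = rate[t < 50.0][-1]
r_post = rate[t >= 50.0][0]
```

In the full treatment the same threshold picture, with temporally correlated noise, yields the complete linear and non-linear frequency response functions quoted in the abstract.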