375 research outputs found

    Are the input parameters of white-noise-driven integrate-and-fire neurons uniquely determined by rate and CV?

    Integrate-and-fire (IF) neurons have found widespread applications in computational neuroscience. Particularly important are stochastic versions of these models in which the driving consists of a synaptic input modeled as white Gaussian noise with mean μ and noise intensity D. Different IF models have been proposed, the firing statistics of which depend nontrivially on the input parameters μ and D. In order to compare these models with each other, one must first specify the correspondence between their parameters. This can be done by determining which set of parameters (μ, D) of each model is associated with a given set of basic firing statistics, for instance the firing rate and the coefficient of variation (CV) of the interspike interval (ISI). However, it is not clear a priori whether for a given firing rate and CV there is only one unique choice of input parameters for each model. Here we review the dependence of rate and CV on the input parameters for the perfect, leaky, and quadratic IF neuron models and show analytically that in these three models the firing rate and the CV indeed uniquely determine the input parameters.
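
    As a concrete illustration of the quantities involved, the following minimal sketch simulates a leaky IF neuron driven by white Gaussian noise with the Euler–Maruyama scheme and estimates its firing rate and ISI CV numerically. The membrane time constant, threshold, and reset values are illustrative assumptions, and the simulation is only a numerical counterpart to the analytical results described in the abstract.

        import numpy as np

        def lif_rate_cv(mu, D, tau_m=1.0, v_th=1.0, v_reset=0.0,
                        dt=1e-3, t_max=1000.0, seed=0):
            """Euler-Maruyama simulation of a leaky IF neuron,
            dV/dt = -V/tau_m + mu + sqrt(2*D)*xi(t), with threshold-and-reset.
            Returns the estimated firing rate and ISI coefficient of variation."""
            rng = np.random.default_rng(seed)
            v, last_spike, isis = v_reset, 0.0, []
            for i in range(1, int(t_max / dt) + 1):
                t = i * dt
                v += (-v / tau_m + mu) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
                if v >= v_th:          # threshold crossing: record spike, reset
                    isis.append(t - last_spike)
                    last_spike, v = t, v_reset
            isis = np.asarray(isis)
            return 1.0 / isis.mean(), isis.std() / isis.mean()

        # Two (mu, D) pairs map to two different (rate, CV) points, consistent
        # with the uniqueness result reviewed in the abstract (illustrative values).
        print(lif_rate_cv(mu=0.8, D=0.05))
        print(lif_rate_cv(mu=1.2, D=0.05))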

    Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality

    The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes. Comment: 20 pages, 6 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.ht
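
    To make the notion of a time-resolution-dependent entropy rate concrete, here is a rough sketch that discretizes a spike train at resolution τ and applies a plug-in block-entropy estimator. The function names, word length, and Poisson example are assumptions for illustration, not the estimators used in the paper.

        import numpy as np
        from collections import Counter

        def binarize(spike_times, tau, t_max):
            """Discretize a spike train at time resolution tau into a binary sequence."""
            counts, _ = np.histogram(spike_times, bins=np.arange(0.0, t_max + tau, tau))
            return (counts > 0).astype(np.uint8)

        def block_entropy(symbols, L):
            """Plug-in Shannon entropy (in bits) of length-L words."""
            words = Counter(tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1))
            p = np.array(list(words.values()), dtype=float)
            p /= p.sum()
            return float(-(p * np.log2(p)).sum())

        def tau_entropy_rate(spike_times, tau, t_max, L=8):
            """Crude estimate of the tau-entropy rate, h(tau) = H(L) - H(L-1)."""
            s = binarize(spike_times, tau, t_max)
            return block_entropy(s, L) - block_entropy(s, L - 1)

        # Illustrative data: a Poisson spike train (a renewal process with CV = 1).
        rng = np.random.default_rng(1)
        t_max, rate = 2000.0, 5.0
        spikes = np.cumsum(rng.exponential(1.0 / rate, int(2 * rate * t_max)))
        spikes = spikes[spikes < t_max]
        for tau in (0.05, 0.02, 0.01):
            print(tau, tau_entropy_rate(spikes, tau, t_max))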

    Fokker–Planck and Fortet Equation-Based Parameter Estimation for a Leaky Integrate-and-Fire Model with Sinusoidal and Stochastic Forcing

    Analysis of sinusoidal noisy leaky integrate-and-fire models and comparison with experimental data are important to understand the neural code and neural synchronization and rhythms. In this paper, we propose two methods to estimate input parameters using interspike interval data only. One is based on numerical solutions of the Fokker–Planck equation, and the other is based on an integral equation, which is fulfilled by the interspike interval probability density. This generalizes previous methods tailored to stationary data to the case of time-dependent input. The main contributions are a binning method to circumvent the problems of nonstationarity and an easy-to-implement initializer for the numerical procedures. The methods are compared on simulated data. List of abbreviations: LIF, leaky integrate-and-fire; ISI, interspike interval; SDE, stochastic differential equation; PDE, partial differential equation.
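
    The binning idea can be illustrated roughly as follows: assign each ISI to a phase bin of the sinusoidal stimulus so that the data within a bin are approximately stationary. The sketch below is one illustrative reading of that step (the function name, phase convention, and toy data are assumptions), not the paper's Fokker–Planck or Fortet-equation estimator itself.

        import numpy as np

        def bin_isis_by_phase(spike_times, f_stim, n_bins=8):
            """Group ISIs by the phase of a sinusoidal stimulus of frequency f_stim,
            evaluated at the spike that opens each interval, so that the data in each
            bin can be treated as approximately stationary."""
            spike_times = np.asarray(spike_times, dtype=float)
            isis = np.diff(spike_times)
            phases = (spike_times[:-1] * f_stim) % 1.0          # phase in cycles, [0, 1)
            edges = np.linspace(0.0, 1.0, n_bins + 1)
            return [isis[(phases >= lo) & (phases < hi)]
                    for lo, hi in zip(edges[:-1], edges[1:])]

        # Toy spike times; per-bin ISI statistics could then seed a Fokker-Planck
        # or Fortet-equation based fit of the kind described in the abstract.
        rng = np.random.default_rng(2)
        toy_spikes = np.cumsum(rng.exponential(0.1, 500))
        for k, isi_bin in enumerate(bin_isis_by_phase(toy_spikes, f_stim=2.0)):
            if len(isi_bin):
                print(k, len(isi_bin), round(float(isi_bin.mean()), 4))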

    Exact firing time statistics of neurons driven by discrete inhibitory noise

    Neurons in the intact brain receive a continuous and irregular synaptic bombardment from excitatory and inhibitory pre-synaptic neurons, which determines the firing activity of the stimulated neuron. In order to investigate the influence of inhibitory stimulation on the firing time statistics, we consider Leaky Integrate-and-Fire neurons subject to inhibitory instantaneous post-synaptic potentials. In particular, we report exact results for the firing rate, the coefficient of variation and the spike train spectrum for various synaptic weight distributions. Our results are not limited to stimulations of infinitesimal amplitude, but they apply as well to finite amplitude post-synaptic potentials, thus being able to capture the effect of rare and large spikes. The developed methods are also able to reproduce the average firing properties of heterogeneous neuronal populations. Comment: 20 pages, 8 Figures, submitted to Scientific Reports
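
    For intuition, the following sketch simulates an LIF neuron receiving a constant suprathreshold drive together with instantaneous inhibitory kicks (Poisson arrival times, exponentially distributed amplitudes) and estimates the firing rate and ISI CV by Monte Carlo. All parameter values and the amplitude distribution are illustrative assumptions; the paper derives these statistics exactly rather than by simulation.

        import numpy as np

        def lif_inhibitory_shot_noise(mu=1.5, tau_m=1.0, v_th=1.0, v_reset=0.0,
                                      rate_inh=2.0, mean_ipsp=0.2,
                                      dt=1e-3, t_max=500.0, seed=3):
            """Monte Carlo estimate of firing rate and ISI CV for an LIF neuron with a
            constant suprathreshold drive mu and instantaneous inhibitory kicks
            (Poisson arrivals, exponentially distributed finite amplitudes)."""
            rng = np.random.default_rng(seed)
            v, last_spike, isis = v_reset, 0.0, []
            for i in range(1, int(t_max / dt) + 1):
                t = i * dt
                v += (-v / tau_m + mu) * dt
                for _ in range(rng.poisson(rate_inh * dt)):   # inhibitory arrivals in this step
                    v -= rng.exponential(mean_ipsp)           # finite-amplitude IPSP
                if v >= v_th:
                    isis.append(t - last_spike)
                    last_spike, v = t, v_reset
            isis = np.asarray(isis)
            return 1.0 / isis.mean(), isis.std() / isis.mean()

        print(lif_inhibitory_shot_noise())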

    Responses of Leaky Integrate-and-Fire Neurons to a Plurality of Stimuli in Their Receptive Fields

    A fundamental question concerning the way the visual world is represented in our brain is how a cortical cell responds when its classical receptive field contains a plurality of stimuli. Two opposing models have been proposed. In the response-averaging model, the neuron responds with a weighted average of all individual stimuli. By contrast, in the probability-mixing model, the cell responds to a plurality of stimuli as if only one of the stimuli were present. Here we apply the probability-mixing and the response-averaging model to leaky integrate-and-fire neurons, to describe neuronal behavior based on observed spike trains. We first estimate the parameters of either model using numerical methods, and then test which model is most likely to have generated the observed data. Results show that the parameters can be successfully estimated and the two models are distinguishable using model selection.
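
    The difference between the two models can be sketched in a few lines: under response averaging the effective drive is a weighted mean of all stimuli, whereas under probability mixing a single stimulus is selected at random on each trial. The parameterization below (selection probabilities equal to the weights, scalar stimuli) is an assumption for illustration, not the paper's full LIF formulation.

        import numpy as np

        def effective_drive(stimuli, weights, model, rng):
            """Effective input for one trial when several stimuli share the receptive field:
            'averaging' takes the weighted mean of all stimuli, 'mixing' picks a single
            stimulus with probability proportional to its weight."""
            stimuli = np.asarray(stimuli, dtype=float)
            weights = np.asarray(weights, dtype=float)
            weights = weights / weights.sum()
            if model == "averaging":
                return float(weights @ stimuli)
            if model == "mixing":
                return float(stimuli[rng.choice(len(stimuli), p=weights)])
            raise ValueError("model must be 'averaging' or 'mixing'")

        rng = np.random.default_rng(4)
        stimuli, weights = [0.5, 1.5], [0.5, 0.5]
        avg = [effective_drive(stimuli, weights, "averaging", rng) for _ in range(1000)]
        mix = [effective_drive(stimuli, weights, "mixing", rng) for _ in range(1000)]
        # Averaging yields the same drive on every trial; mixing yields a bimodal
        # distribution, which is what makes the two models distinguishable.
        print(np.mean(avg), np.std(avg))
        print(np.mean(mix), np.std(mix))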

    A unified approach to linking experimental, statistical and computational analysis of spike train data

    A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach – linking statistical, computational, and experimental neuroscience – provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data. Published version
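
    As a rough sketch of the sequential Monte Carlo machinery, the following generic bootstrap particle filter propagates particles through a state transition, weights them by an observation likelihood, and resamples. The toy AR(1) example and all function names are assumptions; this does not reproduce the biophysical state space model fitted in the paper.

        import numpy as np

        def bootstrap_particle_filter(observations, n_particles,
                                      transition, likelihood, init, seed=5):
            """Generic bootstrap particle filter: propagate particles through the state
            transition, weight them by the observation likelihood, and resample.
            Returns the sequence of posterior-mean state estimates."""
            rng = np.random.default_rng(seed)
            particles = init(n_particles, rng)
            means = []
            for y in observations:
                particles = transition(particles, rng)                 # propagate
                w = likelihood(y, particles)                           # weight
                w = w / w.sum()
                particles = particles[rng.choice(n_particles, size=n_particles, p=w)]  # resample
                means.append(particles.mean())
            return np.array(means)

        # Toy example: latent AR(1) state observed through Gaussian noise.
        rng = np.random.default_rng(6)
        x, xs, ys = 0.0, [], []
        for _ in range(200):
            x = 0.95 * x + 0.3 * rng.standard_normal()
            xs.append(x)
            ys.append(x + 0.5 * rng.standard_normal())
        est = bootstrap_particle_filter(
            np.array(ys), 500,
            transition=lambda p, r: 0.95 * p + 0.3 * r.standard_normal(p.shape),
            likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.5) ** 2) + 1e-300,
            init=lambda n, r: r.standard_normal(n))
        print(np.corrcoef(est, np.array(xs))[0, 1])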