
    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the changes in gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. Comment: 24 pages, 4 figures, 1 supporting information.
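
    The variance-dependent gain control described above can be illustrated with a toy simulation. The sketch below is not code from the paper: the model, the function lif_rate, and all parameter values are illustrative assumptions. It estimates the f-I curve of a leaky integrate-and-fire neuron at two background-noise levels and compares the resulting slopes.

    import numpy as np

    def lif_rate(mu, sigma, T=20.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0, seed=0):
        """Estimate the firing rate (spikes/s) of a leaky integrate-and-fire neuron."""
        rng = np.random.default_rng(seed)
        v, spikes = 0.0, 0
        for _ in range(int(T / dt)):
            v += dt * (-v / tau + mu) + sigma * np.sqrt(dt) * rng.standard_normal()
            if v >= v_th:            # threshold crossing: spike and reset
                v = v_reset
                spikes += 1
        return spikes / T

    for sigma in (0.5, 2.0):                      # low vs. high background-noise variance
        rates = [lif_rate(mu, sigma) for mu in (40.0, 60.0, 80.0)]
        gain = (rates[-1] - rates[0]) / 40.0      # crude slope of the f-I curve
        print(f"sigma={sigma}: rates {np.round(rates, 1)} Hz, f-I slope ~ {gain:.2f} Hz per unit input")

    With these illustrative settings, the slope of the f-I curve differs between the two noise levels, the same qualitative effect the abstract relates to adaptive linear/nonlinear coding.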

    Are the input parameters of white-noise-driven integrate-and-fire neurons uniquely determined by rate and CV?

    Integrate-and-fire (IF) neurons have found widespread applications in computational neuroscience. Particularly important are stochastic versions of these models in which the driving consists of a synaptic input modeled as white Gaussian noise with mean μ and noise intensity D. Different IF models have been proposed, and their firing statistics depend nontrivially on the input parameters μ and D. In order to compare these models with one another, one must first specify the correspondence between their parameters. This can be done by determining which set of parameters (μ, D) of each model is associated with a given set of basic firing statistics such as, for instance, the firing rate and the coefficient of variation (CV) of the interspike interval (ISI). However, it is not clear a priori whether for a given firing rate and CV there is only one unique choice of input parameters for each model. Here we review the dependence of rate and CV on the input parameters for the perfect, leaky, and quadratic IF neuron models and show analytically that in all three models the firing rate and the CV indeed uniquely determine the input parameters.
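
    To make the mapping from (μ, D) to (rate, CV) concrete, here is a minimal simulation sketch for the leaky IF variant. It is not the paper's calculation; the function rate_and_cv and all parameter values are illustrative assumptions, and time is measured in membrane time constants.

    import numpy as np

    def rate_and_cv(mu, D, T=500.0, dt=1e-3, v_th=1.0, v_reset=0.0, seed=1):
        """Firing rate and ISI coefficient of variation of a leaky IF neuron,
        dv/dt = mu - v + sqrt(2 D) xi(t), in units of the membrane time constant."""
        rng = np.random.default_rng(seed)
        v, t_last, isis = 0.0, 0.0, []
        for k in range(int(T / dt)):
            v += dt * (mu - v) + np.sqrt(2.0 * D * dt) * rng.standard_normal()
            if v >= v_th:                         # spike and reset, record the ISI
                t = (k + 1) * dt
                isis.append(t - t_last)
                t_last, v = t, v_reset
        isis = np.array(isis)
        return 1.0 / isis.mean(), isis.std() / isis.mean()

    for mu, D in [(0.8, 0.1), (1.5, 0.1), (1.5, 0.02)]:
        r, cv = rate_and_cv(mu, D)
        print(f"mu={mu}, D={D}: rate = {r:.3f} per membrane time constant, CV = {cv:.2f}")

    The perfect and quadratic IF models would only change the drift term in the update equation; the rate/CV readout stays the same.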

    A comparative study of different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation

    Stochastic integrate-and-fire (IF) neuron models have found widespread applications in computational neuroscience. Here we present results on the white-noise-driven perfect, leaky, and quadratic IF models, focusing on the spectral statistics (power spectra, cross spectra, and coherence functions) in different dynamical regimes (noise-induced and tonic firing regimes with low or moderate noise). We make the models comparable by tuning parameters such that the mean value and the coefficient of variation of the interspike interval match for all of them. We find that, under these conditions, the power spectrum under white-noise stimulation is often very similar, while the response characteristics, described by the cross spectrum between a fraction of the input noise and the output spike train, can differ drastically. We also investigate how the spike trains of two neurons of the same kind (e.g., two leaky IF neurons) correlate if they share a common noise input. We show that, depending on the dynamical regime, either two quadratic IF models or two leaky IF models are more strongly correlated. Our results suggest that, when choosing among simple IF models for network simulations, the details of the model have a strong effect on correlation and regularity of the output. Comment: 12 pages.
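
    One of the spectral measures compared above, the input-output coherence, can be estimated from a short simulation. The following sketch is illustrative only (not the paper's code or parameters): for simplicity the neuron receives the full noise stimulus rather than a fraction of it, and it relies on scipy.signal.coherence.

    import numpy as np
    from scipy.signal import coherence

    dt, T = 1e-3, 200.0
    n = int(T / dt)
    rng = np.random.default_rng(2)
    stim = rng.standard_normal(n)                 # discretized white-noise stimulus

    v, spikes = 0.0, np.zeros(n)
    mu, sigma = 1.2, 0.5
    for k in range(n):                            # leaky IF neuron driven by the stimulus
        v += dt * (mu - v) + sigma * np.sqrt(dt) * stim[k]
        if v >= 1.0:                              # spike and reset
            v, spikes[k] = 0.0, 1.0 / dt          # unit-area spike in this time bin

    f, C = coherence(stim, spikes, fs=1.0 / dt, nperseg=4096)
    low = C[(f > 0) & (f < 2.0)].mean()           # f in units of inverse membrane time constants
    print(f"coherence at low frequencies (f < 2): {low:.3f}")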

    Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality

    The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes. Comment: 20 pages, 6 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.ht
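
    A crude way to see the time-resolution dependence is to bin a spike train at several resolutions τ and form a plug-in block-entropy estimate of the τ-entropy rate. The sketch below is an illustrative assumption, not the paper's estimator: it ignores bias corrections and the other measures (statistical complexity, excess entropy), and uses gamma-distributed ISIs as in one of the examples discussed.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(3)
    isis = rng.gamma(shape=2.0, scale=0.05, size=20000)   # gamma-distributed ISIs (mean 0.1 s)
    spike_times = np.cumsum(isis)

    def entropy_rate(spike_times, tau, L=6):
        """Plug-in estimate h(tau) = H(L) - H(L-1), in bits per bin of width tau."""
        edges = np.arange(0.0, spike_times[-1], tau)
        x = (np.histogram(spike_times, edges)[0] > 0).astype(np.int8)   # binary symbol per bin
        def block_entropy(length):
            counts = Counter(tuple(x[i:i + length]) for i in range(len(x) - length))
            p = np.array(list(counts.values()), dtype=float)
            p /= p.sum()
            return -(p * np.log2(p)).sum()
        return block_entropy(L) - block_entropy(L - 1)

    for tau in (0.005, 0.02, 0.08):
        h = entropy_rate(spike_times, tau)
        print(f"tau = {tau:5.3f} s: h ~ {h:.3f} bits/bin = {h / tau:6.1f} bits/s")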

    Timescales of spike-train correlation for neural oscillators with common drive

    We examine the effect of the phase-resetting curve (PRC) on the transfer of correlated input signals into correlated output spikes in a class of neural models receiving noisy, super-threshold stimulation. We use linear response theory to approximate the spike correlation coefficient in terms of moments of the associated exit time problem, and contrast the results for Type I vs. Type II models and across the different timescales over which spike correlations can be assessed. We find that, on long timescales, Type I oscillators transfer correlations much more efficiently than Type II oscillators. On short timescales this trend reverses, with the relative efficiency switching at a timescale that depends on the mean and standard deviation of input currents. This switch occurs over timescales that could be exploited by downstream circuits.
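
    The correlation-transfer setup can be mimicked numerically. The sketch below is illustrative only (not the paper's linear-response calculation; the function count_corr and all parameters are assumptions): two phase oscillators share a fraction c of their noisy drive, once with a Type I PRC (1 - cos φ) and once with a Type II PRC (sin φ), and their spike counts in long windows are correlated.

    import numpy as np

    def count_corr(prc, c=0.5, omega=2 * np.pi, sigma=0.8, T=500.0, dt=1e-3,
                   window=2.0, seed=4):
        """Spike-count correlation of two phase oscillators sharing a noise fraction c."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        common = rng.standard_normal(n)
        counts = []
        for _ in range(2):                               # two oscillators, same PRC
            private = rng.standard_normal(n)
            xi = np.sqrt(c) * common + np.sqrt(1.0 - c) * private
            phi, spikes = 0.0, np.zeros(n)
            for k in range(n):
                phi += dt * omega + prc(phi) * sigma * np.sqrt(dt) * xi[k]
                if phi >= 2 * np.pi:                     # completed a cycle -> spike
                    phi -= 2 * np.pi
                    spikes[k] = 1.0
            w = int(window / dt)
            counts.append(spikes[: (n // w) * w].reshape(-1, w).sum(axis=1))
        return np.corrcoef(counts[0], counts[1])[0, 1]

    type_i = lambda phi: 1.0 - np.cos(phi)               # Type I phase-resetting curve
    type_ii = lambda phi: np.sin(phi)                    # Type II phase-resetting curve
    print("long-window count correlation, Type I :", round(count_corr(type_i), 3))
    print("long-window count correlation, Type II:", round(count_corr(type_ii), 3))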

    Effective long-time phase dynamics of limit-cycle oscillators driven by weak colored noise

    An effective white-noise Langevin equation is derived that describes the long-time phase dynamics of a limit-cycle oscillator subjected to weak stationary colored noise. Effective drift and diffusion coefficients are given in terms of the phase sensitivity of the oscillator and the correlation function of the noise, and are explicitly calculated for oscillators with sinusoidal phase sensitivity functions driven by two typical colored Gaussian processes. The results are verified by numerical simulations using several types of stochastic or chaotic noise. The drift and diffusion coefficients of oscillators driven by chaotic noise exhibit anomalous dependence on the oscillator frequency, reflecting the peculiar power spectrum of the chaotic noise. Comment: 16 pages, 6 figures.
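
    The long-time drift and diffusion coefficients can be estimated directly from simulated phase trajectories, which is how analytical results of this kind are typically checked. The sketch below is illustrative (my own parameters and discretization, not the paper's): a sinusoidal phase sensitivity Z(φ) = sin φ driven by unit-variance Ornstein-Uhlenbeck noise with correlation time tau_c.

    import numpy as np

    omega, tau_c, eps = 2.0 * np.pi, 0.5, 0.3      # frequency, noise correlation time, noise strength
    dt, T, n_trials = 1e-3, 100.0, 500
    rng = np.random.default_rng(5)

    phi = np.zeros(n_trials)                       # unwrapped phases, one per trial
    eta = np.zeros(n_trials)                       # OU noise, one realization per trial
    for _ in range(int(T / dt)):
        eta += dt * (-eta / tau_c) + np.sqrt(2.0 * dt / tau_c) * rng.standard_normal(n_trials)
        phi += dt * (omega + eps * np.sin(phi) * eta)

    drift = phi.mean() / T                         # effective long-time drift (frequency)
    diff = phi.var() / (2.0 * T)                   # effective long-time phase diffusion coefficient
    print(f"effective drift     ~ {drift:.4f}  (bare omega = {omega:.4f})")
    print(f"effective diffusion ~ {diff:.3e}")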

    Motif Statistics and Spike Correlations in Neuronal Networks

    Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network model of integrate-and-fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second-order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second-order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state.
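
    The three connectivity statistics named above are easy to measure on an adjacency matrix. The sketch below is illustrative only and does not use the paper's normalization conventions: it counts, per ordered pair of neurons, the connection probability and the raw frequencies of diverging (shared-input) and chain (two-step) motifs in a random directed graph.

    import numpy as np

    rng = np.random.default_rng(6)
    N, p0 = 200, 0.1
    W = (rng.random((N, N)) < p0).astype(float)   # W[i, j] = 1 if cell j projects to cell i
    np.fill_diagonal(W, 0.0)

    p = W.sum() / (N * (N - 1))                   # overall connection probability

    common_inputs = W @ W.T                       # (i, j): number of presynaptic cells shared by i and j
    np.fill_diagonal(common_inputs, 0.0)
    q_div = common_inputs.sum() / (N * (N - 1))   # diverging motifs per ordered pair

    two_step = W @ W                              # (i, k): number of chains k -> j -> i
    np.fill_diagonal(two_step, 0.0)
    q_ch = two_step.sum() / (N * (N - 1))         # chain motifs per ordered pair

    print(f"p = {p:.3f}, diverging motifs/pair = {q_div:.3f}, chain motifs/pair = {q_ch:.3f}")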

    Low-dimensional firing-rate dynamics for populations of renewal-type spiking neurons

    The macroscopic dynamics of large populations of neurons can be mathematically analyzed using low-dimensional firing-rate or neural-mass models. However, these models fail to capture spike synchronization effects of stochastic spiking neurons, such as the non-stationary population response to rapidly changing stimuli. Here, we derive low-dimensional firing-rate models for homogeneous populations of general renewal-type neurons, including integrate-and-fire models driven by white noise. Renewal models account for neuronal refractoriness and spike synchronization dynamics. The derivation is based on an eigenmode expansion of the associated refractory density equation, which generalizes previous spectral methods for Fokker-Planck equations to arbitrary renewal models. We find a simple relation between the eigenvalues, which determine the characteristic time scales of the firing-rate dynamics, and the Laplace transform of the interspike interval density or the survival function of the renewal process. Analytical expressions for the Laplace transforms are readily available for many renewal models, including the leaky integrate-and-fire model. Retaining only the first eigenmode already yields an adequate low-dimensional approximation of the firing-rate dynamics that captures spike synchronization effects and fast transient dynamics at stimulus onset. We explicitly demonstrate the validity of our model for a large homogeneous population of Poisson neurons with absolute refractoriness, and for other renewal models that admit an explicit analytical calculation of the eigenvalues. The eigenmode expansion presented here provides a systematic framework for deriving novel firing-rate models in computational neuroscience based on spiking neuron dynamics with refractoriness. Comment: 24 pages, 7 figures.
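
    The Laplace transform of the ISI density that enters the stated eigenvalue relation is easy to evaluate for the simplest example mentioned, a Poisson neuron with absolute refractoriness. The sketch below (illustrative parameters; the eigenvalue relation itself is given in the paper and not reproduced here) checks the analytical transform against a Monte Carlo estimate from simulated ISIs.

    import numpy as np

    r, Delta, s = 20.0, 0.005, 8.0 + 30.0j                   # rate (Hz), refractory period (s), test point s
    rng = np.random.default_rng(8)
    isis = Delta + rng.exponential(1.0 / r, size=200_000)    # ISI = refractory period + exponential wait

    laplace_mc = np.mean(np.exp(-s * isis))                  # Monte Carlo estimate of E[exp(-s * ISI)]
    laplace_exact = np.exp(-s * Delta) * r / (r + s)         # analytical Laplace transform of the ISI density
    print("Monte Carlo :", np.round(laplace_mc, 4))
    print("Analytical  :", np.round(laplace_exact, 4))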

    Feature selection in simple neurons: how coding depends on spiking dynamics

    The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes, and a nonlinear function that relates stimulus to firing probability. In many sensory systems, these two components of the coding strategy are found to adapt to changes in the statistics of the inputs, in such a way as to improve information transmission. Here, we show for two simple neuron models how feature selectivity as captured by the spike-triggered average depends both on the parameters of the model and on the statistical characteristics of the input. Comment: 23 pages, LaTeX + 4 figures. v2 is substantially expanded and revised. v3 corrects minor errors in Sec. 3.
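
    To make the spike-triggered average concrete, here is a minimal sketch (illustrative model and parameters, not the paper's) that recovers the STA of a leaky integrate-and-fire neuron driven by a Gaussian white-noise stimulus.

    import numpy as np

    dt, T, lag = 1e-3, 500.0, 1.0                 # time step, duration, STA window (membrane time constants)
    n, m = int(T / dt), int(lag / dt)
    rng = np.random.default_rng(7)
    stim = rng.standard_normal(n)                 # Gaussian white-noise stimulus

    v, spike_idx = 0.0, []
    for k in range(n):                            # leaky integrate-and-fire neuron driven by the stimulus
        v += dt * (1.2 - v) + 0.6 * np.sqrt(dt) * stim[k]
        if v >= 1.0:                              # spike and reset
            v = 0.0
            spike_idx.append(k)

    windows = [stim[k - m:k] for k in spike_idx if k >= m]
    sta = np.mean(windows, axis=0)                # average stimulus segment preceding a spike
    peak_lag = (m - np.argmax(sta)) * dt
    print(f"{len(windows)} spikes; STA peaks {peak_lag:.3f} membrane time constants before the spike")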