    Theoretical connections between mathematical neuronal models corresponding to different expressions of noise

    Identifying the right tools to express the stochastic aspects of neural activity has proven to be one of the biggest challenges in computational neuroscience. Even if there is no definitive answer to this issue, the most common procedure to express this randomness is the use of stochastic models. In accordance with the origin of variability, the sources of randomness are classified as intrinsic or extrinsic and give rise to distinct mathematical frameworks to track the dynamics of the cell. While external variability is generally treated by the use of a Wiener process in models such as the Integrate-and-Fire model, internal variability is mostly expressed via a random firing process. In this paper, we investigate how these distinct expressions of variability can be related. To do so, we examine the probability density functions of the corresponding stochastic models and investigate in what way they can be mapped one to another via integral transforms. Our theoretical findings offer new insight into these categories of variability and confirm that, despite their contrasting nature, the mathematical formalizations of internal and external variability are strikingly similar.
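    The two noise conventions contrasted above can be made concrete in a few lines. Below is a minimal, illustrative sketch (all parameters are invented, not taken from the paper): extrinsic noise enters as a Wiener process in the membrane equation, while intrinsic noise enters as a state-dependent random firing (escape-rate) mechanism on top of deterministic dynamics.

```python
import math
import random

random.seed(0)

def lif_wiener_isi(mu=1.5, sigma=0.5, theta=1.0, dt=1e-3):
    """One interspike interval of a leaky integrate-and-fire neuron with
    extrinsic noise: dV = (mu - V) dt + sigma dW (Euler-Maruyama scheme)."""
    v, t = 0.0, 0.0
    while v < theta:
        v += (mu - v) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return t

def escape_rate_isi(beta=5.0, mu=1.5, theta=1.0, dt=1e-3):
    """One interspike interval of a deterministic LIF with intrinsic noise:
    spikes are emitted randomly with hazard exp(beta * (V - theta))."""
    v, t = 0.0, 0.0
    while True:
        v += (mu - v) * dt
        t += dt
        hazard = math.exp(beta * (v - theta))  # instantaneous firing rate
        if random.random() < 1.0 - math.exp(-hazard * dt):
            return t

# Both mechanisms yield broadly similar interspike-interval statistics.
isis_ext = [lif_wiener_isi() for _ in range(200)]
isis_int = [escape_rate_isi() for _ in range(200)]
print(sum(isis_ext) / 200, sum(isis_int) / 200)
```

    The paper's actual contribution, relating the resulting interspike-interval densities via integral transforms, is not reproduced here; the sketch only shows where each kind of randomness enters the model.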

    Response variability in balanced cortical networks

    We study the spike statistics of neurons in a network with dynamically balanced excitation and inhibition. Our model, intended to represent a generic cortical column, comprises randomly connected excitatory and inhibitory leaky integrate-and-fire neurons, driven by excitatory input from an external population. The high connectivity permits a mean-field description in which synaptic currents can be treated as Gaussian noise, the mean and autocorrelation function of which are calculated self-consistently from the firing statistics of single model neurons. Within this description, we find that the irregularity of spike trains is controlled mainly by the strength of the synapses relative to the difference between the firing threshold and the post-firing reset level of the membrane potential. For moderately strong synapses we find spike statistics very similar to those observed in primary visual cortex. Comment: 22 pages, 7 figures, submitted to Neural Computation.
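    The diffusion approximation described above, in which synaptic currents are replaced by Gaussian noise, can be illustrated with a single model neuron. The sketch below (parameters are arbitrary, not fitted to the paper) shows how the interspike-interval coefficient of variation rises as the noise amplitude grows relative to the threshold-to-reset distance.

```python
import math
import random

def isi_cv(mu, sigma, theta=1.0, v_reset=0.0, n=400, dt=1e-3):
    """Coefficient of variation of interspike intervals for an LIF neuron
    whose synaptic input is replaced by Gaussian white noise:
    dV = (mu - V) dt + sigma dW, with reset to v_reset on each spike."""
    random.seed(1)
    isis, v, t = [], v_reset, 0.0
    while len(isis) < n:
        v += (mu - v) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
        if v >= theta:
            isis.append(t)
            v, t = v_reset, 0.0
    m = sum(isis) / n
    var = sum((x - m) ** 2 for x in isis) / n
    return math.sqrt(var) / m

# Weak noise relative to the threshold-reset gap: near-regular firing.
# Strong noise: irregular firing with a much higher CV.
cv_weak = isi_cv(mu=1.5, sigma=0.1)
cv_strong = isi_cv(mu=1.5, sigma=1.0)
print(cv_weak, cv_strong)
```

    In the paper the noise statistics are determined self-consistently from the network itself; here they are simply imposed.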

    On the dynamics of random neuronal networks

    We study the mean-field limit and stationary distributions of a pulse-coupled network modeling the dynamics of large neuronal assemblies. Our model takes into account explicitly the intrinsic randomness of firing times, in contrast with the classical integrate-and-fire model. The ergodicity properties of the Markov process associated with finite networks are investigated. We derive the limit in distribution of the sample path of the state of a neuron of the network as its size gets large. The invariant distributions of this limiting stochastic process are analyzed, as well as their stability properties. We show that the system undergoes transitions as a function of the averaged connectivity parameter, and can support trivial states (where the network activity dies out, which is also the unique stationary state of finite networks in some cases) and self-sustained activity when the connectivity level is sufficiently large, both being possibly stable. Comment: 37 pages, 3 figures.
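    The transition described above, from dying activity to self-sustained activity as the connectivity parameter grows, can be mimicked with a toy pulse-coupled network in which firing times are intrinsically random. All rates, sizes, and couplings below are invented for illustration; this is not the paper's model.

```python
import random

def network_activity(J, n=200, steps=2000, dt=0.05, seed=2):
    """Toy pulse-coupled network with random firing times: each neuron fires
    in [t, t+dt) with probability rate(v)*dt, where the rate grows with its
    state v; every spike increments all neurons by J/n and resets the sender.
    Returns the spike count in the final observation window."""
    random.seed(seed)
    v = [random.random() for _ in range(n)]
    spikes_last = 0
    for step in range(steps):
        fired = [i for i in range(n) if random.random() < max(0.0, v[i]) * dt]
        kick = J * len(fired) / n           # recurrent drive from this step
        for i in range(n):
            v[i] += -0.5 * v[i] * dt + kick  # leak plus network input
        for i in fired:
            v[i] = 0.0                       # post-spike reset
        if step >= steps - 200:
            spikes_last += len(fired)
    return spikes_last

quiet = network_activity(0.0)   # no coupling: activity dies out
active = network_activity(5.0)  # strong coupling: self-sustained firing
print(quiet, active)
```

    With zero coupling each neuron fires at most once and the network goes silent; with strong coupling the recurrent kicks keep the population firing indefinitely, mirroring the two regimes in the abstract.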

    Cortical cells should fire regularly, but do not

    When a typical nerve cell is injected with enough current, it fires a regular stream of action potentials. But cortical cells in vivo usually fire irregularly, reflecting synaptic input from presynaptic cells as well as intrinsic biophysical properties. We have applied the theory of stochastic processes to spike trains recorded from cortical neurons (Tuckwell 1989) and find a fundamental contradiction between the large interspike variability observed and the much lower values predicted by well-accepted biophysical models of single cells.
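    The mismatch reported here is usually quantified with the coefficient of variation (CV) of interspike intervals. A minimal sketch of the measurement follows; the spike trains are synthetic stand-ins, not the recorded data.

```python
import math
import random

def spike_train_cv(spike_times):
    """Coefficient of variation of interspike intervals: near 0 for a
    clock-like regular train, near 1 for a Poisson process."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean

regular = [0.01 * k for k in range(1, 101)]   # perfectly periodic train
random.seed(3)
t, poisson = 0.0, []
for _ in range(1000):
    t += random.expovariate(50.0)             # exponential ISIs at 50 Hz
    poisson.append(t)
print(spike_train_cv(regular), spike_train_cv(poisson))
```

    Current-injected cells resemble the first train (CV near 0), while in vivo cortical spike trains resemble the second (CV near 1), which is the contradiction the abstract highlights.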

    Variability Measures of Positive Random Variables

    During the stationary part of the neuronal spiking response, the stimulus can be encoded in the firing rate, but also in the statistical structure of the interspike intervals. We propose and discuss two information-based measures of statistical dispersion of the interspike interval distribution: the entropy-based dispersion and the Fisher information-based dispersion. The measures are compared with the frequently used concept of standard deviation. It is shown that the standard deviation is not well suited to quantify some aspects of dispersion that are often expected intuitively, such as the degree of randomness. The proposed dispersion measures are not entirely independent, although each describes the interspike intervals from a different point of view. The new methods are applied to common models of neuronal firing and to both simulated and experimental data.
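    The contrast between the standard deviation and an entropy-based dispersion can be demonstrated numerically. The sketch below uses exp(h), with h a histogram estimate of differential entropy, as a generic entropy-based scale; this is an illustrative stand-in, not the paper's exact definitions, and the distributions and parameters are invented. Two samples with nearly identical standard deviation receive very different entropy-based dispersions because one is far less random than the other.

```python
import math
import random

def sd(xs):
    """Ordinary standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def entropy_dispersion(xs, bins=30):
    """exp(h), with h a histogram plug-in estimate of differential entropy.
    Like the standard deviation it has the units of the data, but it reflects
    how spread out the probability mass is, not distance from the mean."""
    lo, hi = min(xs), max(xs)
    w = (hi - lo) / bins
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / w), bins - 1)] += 1
    n = len(xs)
    h = -sum(c / n * math.log(c / (n * w)) for c in counts if c)
    return math.exp(h)

random.seed(4)
# Exponential ISIs: maximum entropy among positive variables of fixed mean.
expo = [random.expovariate(10.0) for _ in range(20000)]
# Two tight clusters: nearly the same SD, but far more predictable.
bimodal = [random.gauss(0.2 if random.random() < 0.5 else 0.0, 0.005)
           for _ in range(20000)]
print(sd(expo), sd(bimodal))
print(entropy_dispersion(expo), entropy_dispersion(bimodal))
```

    The standard deviations nearly coincide, yet the entropy-based measure assigns the clustered sample a much smaller dispersion, capturing the intuition about randomness that the abstract says the standard deviation misses.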

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying sources of the apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the same experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience. The nature of such trial-to-trial dynamics, however, remains largely elusive to current data analysis approaches. A range of strategies have been proposed in modalities such as electroencephalography, but gaining a fundamental insight into latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation). This canonical system plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in the empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets which have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of the rat's frontal cortex ensemble dynamics during a decision-making task. Results suggest that, in line with recent hypotheses, rather than internal noise, it is the deterministic trend that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed within this study potentially allows us to infer the source of variability in in vivo neural recordings.
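    The Duffing equation used as the test-bed above is easy to reproduce. Below is a minimal semi-implicit Euler integration; the coefficients are chosen for illustration and are not taken from the study.

```python
import math

def duffing_trajectory(gamma, delta=0.3, alpha=-1.0, beta=1.0, omega=1.2,
                       dt=0.01, steps=5000, x0=0.1, v0=0.0):
    """Semi-implicit Euler integration of the forced Duffing oscillator
    x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t),
    a canonical non-autonomous (explicitly time-dependent) system."""
    x, v, t = x0, v0, 0.0
    traj = []
    for _ in range(steps):
        a = gamma * math.cos(omega * t) - delta * v - alpha * x - beta * x ** 3
        v += a * dt          # update velocity first ...
        x += v * dt          # ... then position with the new velocity
        t += dt
        traj.append(x)
    return traj

traj = duffing_trajectory(0.3)
print(min(traj), max(traj))
```

    The explicit cos(omega*t) forcing is what makes the system non-autonomous: trajectories generated at different trial onsets differ deterministically, which is exactly the kind of trial-to-trial structure the trajectory-coherence statistic is meant to detect.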

    Stochastic firing rate models

    We review a recent approach to the mean-field limits in neural networks that takes into account the stochastic nature of the input current and the uncertainty in synaptic coupling. This approach was proved to be a rigorous limit of the network equations in a general setting, and we express here the results in a more customary and simpler framework. We propose a heuristic argument to derive these equations, providing a more intuitive understanding of their origin. The equations are characterized by a strong coupling between the different moments of the solutions. We analyse the equations, present an algorithm to simulate their solutions, and investigate them numerically. In particular, we build a bridge between these equations and the approach of Sompolinsky and collaborators (1988, 1990), and show how the coupling between the mean and the covariance function deviates from customary approaches.
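    The random rate network in the style of Sompolinsky, Crisanti and Sommers (1988) that the review connects to can be sketched directly; network size, gain, and integration step below are arbitrary choices, and this is a finite-network simulation, not the mean-field equations derived in the paper. For coupling gain g < 1 the quiescent state is stable, while for g > 1 activity is self-sustained.

```python
import math
import random

def rate_network(g, n=60, dt=0.05, steps=400, seed=5):
    """Random rate network: dx_i/dt = -x_i + sum_j J_ij * tanh(x_j),
    with J_ij ~ N(0, g^2/n). Returns the mean squared activity at the
    end of the simulation (near 0 if the network has gone quiescent)."""
    random.seed(seed)
    J = [[random.gauss(0.0, g / math.sqrt(n)) for _ in range(n)]
         for _ in range(n)]
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        phi = [math.tanh(xi) for xi in x]               # firing rates
        x = [xi + dt * (-xi + sum(J[i][j] * phi[j] for j in range(n)))
             for i, xi in enumerate(x)]
    return sum(xi * xi for xi in x) / n

quiescent = rate_network(0.5)   # subcritical gain: activity decays to zero
chaotic = rate_network(1.8)     # supercritical gain: self-sustained activity
print(quiescent, chaotic)
```

    The moment coupling emphasized in the abstract shows up here implicitly: the fluctuating recurrent input each unit receives is itself shaped by the network's own activity statistics.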