
    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to describe the network activity based only on the spike times, regardless of the membrane-potential process. To study this question rigorously, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. In the cases where this Markovian model can be developed, the transition probability is derived explicitly for classical neural-network models such as linear integrate-and-fire neurons with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of deterministic spiking neural networks.
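    The Markov-chain description above hinges on the interspike-interval (first-passage-time) distribution of a noisy integrate-and-fire neuron. As a minimal sketch of that ingredient (not the paper's event-based framework), the following snippet estimates the interspike-interval distribution of a single leaky integrate-and-fire neuron driven by white noise; all parameter names and values are illustrative assumptions.

```python
# Minimal sketch: estimate the interspike-interval (ISI) distribution of a
# noisy leaky integrate-and-fire neuron by Euler-Maruyama simulation of
# dV = (mu - V)/tau dt + (sigma/sqrt(tau)) dW, from reset to threshold.
# All parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

tau = 20.0      # membrane time constant (ms)
mu = 1.2        # constant drive (suprathreshold here, so the neuron fires)
sigma = 0.3     # noise amplitude
v_reset = 0.0   # reset potential
v_th = 1.0      # firing threshold
dt = 0.05       # integration step (ms)

def sample_isi():
    """Draw one ISI: integrate the membrane potential from reset until
    the first threshold crossing and return the elapsed time."""
    v, t = v_reset, 0.0
    while v < v_th:
        v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        t += dt
    return t

# The empirical ISI distribution is exactly what the Markov-chain transition
# probability of the spike-time sequence is built from.
isis = np.array([sample_isi() for _ in range(500)])
print(f"mean ISI {isis.mean():.1f} ms, CV {isis.std() / isis.mean():.2f}")
```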

    Spike trains statistics in Integrate and Fire Models: exact results

    We briefly review and highlight the consequences of rigorous and exact results obtained in Cessac (2010), characterizing the statistics of spike trains in a network of leaky integrate-and-fire neurons, where time is discrete and neurons are subject to noise, without restriction on the synaptic-weight connectivity. The main result is that spike-train statistics are characterized by a Gibbs distribution whose potential is explicitly computable. This establishes, on the one hand, a rigorous ground for current investigations attempting to characterize real spike-train data with Gibbs distributions, such as the Ising-like distribution, using the maximum entropy principle. However, the present analysis suggests that the Ising model may be a rather weak approximation. Indeed, the Gibbs potential (the formal "Hamiltonian") is the log of the so-called conditional intensity (the probability that a neuron fires given the past of the whole network). But in the present example this probability has infinite memory, so the corresponding process is non-Markovian (equivalently, the Gibbs potential has infinite range). Moreover, causality implies that the conditional intensity does not depend on the state of the neurons at the same time, ruling out the Ising model as a candidate for an exact characterization of spike-train statistics. However, Markovian approximations can be proposed whose degree of approximation can be rigorously controlled. In this setting, the Ising model appears as the "next step" after the Bernoulli model (independent neurons), since it introduces spatial pairwise correlations but no time correlations. The range of validity of this approximation is discussed, together with possible approaches that allow time correlations to be introduced, along with algorithmic extensions.
    Comment: 6 pages, submitted to conference NeuroComp2010 http://2010.neurocomp.fr/; Bruno Cessac http://www-sop.inria.fr/neuromathcomp
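    To make the "Gibbs potential as log conditional intensity" statement concrete, here is a toy sketch of a range-1 Markovian approximation: a network whose conditional intensity is a logistic function of the previous time step's spike pattern, so the potential is the log-probability of the current pattern given the previous one. The logistic form, the weights, and the bias are assumptions for illustration, not the model analysed in the paper.

```python
# Toy sketch of a range-1 Markov approximation to spike-train statistics:
# the Gibbs potential phi(prev, cur) is the log conditional probability of
# spike pattern `cur` given the previous pattern `prev`. The logistic
# conditional intensity and random weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 5                                  # number of neurons
W = rng.normal(0.0, 1.0, (N, N))       # synaptic weights (no symmetry assumed)
b = -1.0                               # common bias setting the baseline rate

def cond_intensity(prev):
    """P(neuron k fires at time t | spike pattern at t-1), one value per k."""
    return 1.0 / (1.0 + np.exp(-(W @ prev + b)))

def gibbs_potential(prev, cur):
    """Range-1 Gibbs potential: log P(cur | prev), neurons conditionally
    independent given the past (no same-time coupling, as causality demands)."""
    p = cond_intensity(prev)
    return float(np.sum(cur * np.log(p) + (1 - cur) * np.log(1 - p)))

# Sample a short spike train from this Markov chain and evaluate its potential.
T = 10
spikes = np.zeros((T, N), dtype=int)
for t in range(1, T):
    spikes[t] = (rng.random(N) < cond_intensity(spikes[t - 1])).astype(int)
    print(f"t={t}: phi = {gibbs_potential(spikes[t - 1], spikes[t]):+.3f}")
```

    Note that, exactly as the abstract argues, this potential couples spike patterns across time steps but never within one: an Ising model (same-time pairwise coupling, no memory) cannot express it.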

    Linear response for spiking neuronal networks with unbounded memory

    We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of a weak-amplitude, time-dependent external stimulus on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory in the spike dynamics can extend arbitrarily far into the past. Using this approach, we show how the linear response is explicitly related to neuronal dynamics with an example, the gIF model introduced by M. Rudolph and A. Destexhe. This example illustrates the collective effect of the stimulus, intrinsic neuronal dynamics, and network connectivity on spike statistics. We illustrate our results with numerical simulations.
    Comment: 60 pages, 8 figures
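    As an illustration of the kind of first-order prediction a linear response relation makes, the sketch below applies the idea to a deliberately simple stochastic rate unit rather than the paper's chains with unbounded memory: the trial-averaged response to a weak stimulus is compared with the convolution of that stimulus with the linearized (exponential) response kernel. The model, the kernel, and all parameters are assumptions for illustration.

```python
# Sketch of first-order linear response for a single noisy rate unit
# dr/dt = (-r + tanh(J*r + s(t)))/tau + noise, driven by a weak stimulus s(t).
# Linearizing around the fixed point r* = 0 gives dr/dt = (-(1-J) r + s)/tau,
# i.e. an exponential response kernel K(u) = exp(-(1-J) u / tau) / tau.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
tau, J, sigma, dt, T = 10.0, 0.5, 0.05, 0.1, 4000
t = np.arange(T) * dt
s = 0.02 * np.sin(2 * np.pi * t / 50.0)          # weak time-dependent stimulus

def simulate(stim, n_trials=400):
    """Trial-averaged activity of the noisy rate unit under stimulus `stim`."""
    r = np.zeros((n_trials, T))
    for k in range(1, T):
        drift = (-r[:, k - 1] + np.tanh(J * r[:, k - 1] + stim[k - 1])) / tau
        r[:, k] = r[:, k - 1] + drift * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
    return r.mean(axis=0)

# Linear-response prediction: convolve the stimulus with the kernel.
K = np.exp(-(1 - J) * t / tau) / tau
prediction = np.convolve(s, K)[:T] * dt

# Measured response: stimulated minus spontaneous trial averages.
response = simulate(s) - simulate(np.zeros(T))
print(f"peak simulated response {response.max():.4f}, "
      f"peak predicted {prediction.max():.4f}")
```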

    A mean-field model for conductance-based networks of adaptive exponential integrate-and-fire neurons

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at mesoscopic scales. Since VSDi signals report the average membrane potential, it seems natural to use a mean-field formalism to model such signals. Here, we investigate a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons with conductance-based synaptic interactions. The AdEx model can capture the spiking response of different cell types, such as regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism together with a semi-analytic approach to the transfer function of AdEx neurons. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the mean-field model. Second, we investigate the response of the network to time-varying external input and show that the mean-field model accurately predicts the response time course of the population. One notable exception is that the "tail" of the response at long times is not well predicted, because the mean-field model does not include adaptation mechanisms. We conclude that the Master Equation formalism can yield mean-field models that predict well the behavior of nonlinear networks with conductance-based interactions and various electrophysiological properties, and should be a good candidate to model VSDi signals, to which both excitatory and inhibitory neurons contribute.
    Comment: 21 pages, 7 figures
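    The structure of such a mean-field description can be sketched as two coupled rate equations, one per population, each relaxing toward a transfer function of the current rates over a Markovian time window T. In the paper the transfer function is obtained semi-analytically for AdEx neurons; the sketch below substitutes a placeholder sigmoid, so only the structure, not the quantitative model, is reproduced, and every parameter is an illustrative assumption.

```python
# Structural sketch of a first-order two-population mean-field model:
# dnu_x/dt = (F_x(nu_e, nu_i) - nu_x) / T for excitatory (RS) and inhibitory
# (FS) rates. F is a placeholder sigmoid here; the paper derives it
# semi-analytically from AdEx neurons. All numbers are illustrative.
import numpy as np

T = 0.005   # Markovian time window (s), an assumed value

def F(nu_e, nu_i, gain, threshold):
    """Placeholder population transfer function: output rate (Hz) as a
    sigmoidal function of an assumed net excitatory/inhibitory drive."""
    drive = 4.0 * nu_e - 1.0 * nu_i
    return 100.0 / (1.0 + np.exp(-gain * (drive - threshold)))

def step(nu_e, nu_i, ext, dt=1e-4):
    """One Euler step of the two-population mean-field dynamics; `ext` is a
    time-varying external rate added to the excitatory input."""
    dnu_e = (F(nu_e + ext, nu_i, gain=0.1, threshold=20.0) - nu_e) / T
    dnu_i = (F(nu_e + ext, nu_i, gain=0.2, threshold=15.0) - nu_i) / T
    return nu_e + dt * dnu_e, nu_i + dt * dnu_i

# Relax to spontaneous activity, then apply a 200 ms step of external drive,
# mimicking the "response to time-varying external input" protocol.
nu_e, nu_i = 1.0, 1.0
for k in range(20000):
    ext = 5.0 if 10000 <= k < 12000 else 0.0
    nu_e, nu_i = step(nu_e, nu_i, ext)
print(f"final rates: nu_e = {nu_e:.2f} Hz, nu_i = {nu_i:.2f} Hz")
```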

    Stochastic firing rate models

    We review a recent approach to mean-field limits in neural networks that takes into account the stochastic nature of the input current and the uncertainty in synaptic coupling. This approach was proved to be a rigorous limit of the network equations in a general setting, and we express here the results in a more customary and simpler framework. We propose a heuristic argument to derive these equations, providing a more intuitive understanding of their origin. These equations are characterized by a strong coupling between the different moments of the solutions. We analyse the equations, present an algorithm to simulate their solutions, and investigate them numerically. In particular, we build a bridge between these equations and the approach of Sompolinsky and collaborators (1988, 1990), and show how the coupling between the mean and the covariance function deviates from customary approaches.
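    To illustrate the moment coupling described above, the following sketch evolves a mean m(t) and variance v(t) under a Gaussian closure: the mean drive is E[tanh(x)] with x ~ N(Jm, v), so the mean equation depends on the variance and vice versa. The specific dynamics, the closure, and the parameters are assumptions for illustration, not the equations derived in the paper.

```python
# Toy sketch of mean-variance coupling in a stochastic rate model under a
# Gaussian closure: the mean m(t) and variance v(t) evolve together because
# the mean drive E[tanh(x)], x ~ N(J*m, v), depends on v, and the variance
# is fed by the nonlinearity. Dynamics and parameters are illustrative.
import numpy as np

# Gauss-Hermite quadrature (probabilists' weight) for Gaussian expectations.
nodes, weights = np.polynomial.hermite_e.hermegauss(41)
weights = weights / weights.sum()   # normalize so that E[1] = 1

def gauss_mean(f, m, v):
    """E[f(x)] for x ~ N(m, v), via quadrature over x = m + sqrt(v) * z."""
    return np.sum(weights * f(m + np.sqrt(v) * nodes))

J, sigma, dt = 1.5, 0.3, 0.01
m, v = 0.5, 0.1
for _ in range(5000):
    phi_mean = gauss_mean(np.tanh, J * m, v)                       # depends on v
    phi_var = gauss_mean(lambda x: np.tanh(x) ** 2, J * m, v) - phi_mean ** 2
    m += dt * (-m + phi_mean)                                      # mean equation
    v += dt * (-2.0 * v + J ** 2 * phi_var + sigma ** 2)           # variance equation
print(f"stationary mean {m:.3f}, variance {v:.3f}")
```

    A purely deterministic mean-field model would drop v entirely; keeping it changes where m settles, which is the qualitative point of the moment coupling.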

    Finite-size and correlation-induced effects in Mean-field Dynamics

    The brain's activity is characterized by the interaction of a very large number of neurons that are strongly affected by noise. However, signals often arise at macroscopic scales, integrating the effect of many neurons into a reliable pattern of activity. In order to study such large neuronal assemblies, one is often led to derive mean-field limits summarizing the effect of the interaction of a large number of neurons into an effective signal. Classical mean-field approaches consider the evolution of a deterministic variable, the mean activity, thus neglecting the stochastic nature of neural behavior. In this article, we build upon two recent approaches that include correlations and higher-order moments in mean-field equations, and study how these stochastic effects influence the solutions of the mean-field equations, both in the limit of an infinite number of neurons and for large yet finite networks. We introduce a new model, the infinite model, which arises from both sets of equations by a rescaling of the variables and which is invertible for finite-size networks, hence providing equations equivalent to the previously derived models. The study of this model allows us to understand the qualitative behavior of such large-scale networks. We show that, though the solutions of the deterministic mean-field equation constitute uncorrelated solutions of the new mean-field equations, the stability properties of limit cycles are modified by the presence of correlations, and additional non-trivial behaviors, including periodic orbits, appear when there were none in the mean field. The origin of all these behaviors is then explored in finite-size networks, where interesting mesoscopic-scale effects appear. This study leads us to show that the infinite-size system is a singular limit of the network equations: for any finite network, the system will differ from the infinite system.
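    A minimal numerical illustration of such finite-size deviations, under an assumed toy model rather than the one analysed in the paper: simulate a fully connected noisy rate network for several sizes N and compare the time-averaged population mean with the deterministic (infinite-N) mean-field fixed point. The deviation shrinks as N grows but never vanishes for finite N.

```python
# Sketch of finite-size effects: compare the population mean of a fully
# connected noisy rate network, dx_i = (-x_i + tanh(J0 * mean(x) + I)) dt
# + sigma dW_i, with the deterministic mean-field fixed point m = tanh(J0*m + I).
# Model and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
J0, I, sigma, dt, steps = 0.8, 0.5, 0.2, 0.05, 4000

def simulate(N):
    """Time-averaged population mean of the N-unit network (after transient)."""
    x = rng.normal(0.0, 0.5, N)
    means = []
    for _ in range(steps):
        drive = np.tanh(J0 * x.mean() + I)
        x += (-x + drive) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        means.append(x.mean())
    return np.mean(means[steps // 2:])

# Deterministic mean-field fixed point, solved by fixed-point iteration.
m = 0.0
for _ in range(200):
    m = np.tanh(J0 * m + I)

for N in (10, 100, 1000):
    print(f"N={N:5d}: |<x> - m_mf| = {abs(simulate(N) - m):.4f}")
```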