
    Entropy-based parametric estimation of spike train statistics

    We consider the evolution of a network of neurons, focusing on the asymptotic behavior of spike dynamics rather than membrane potential dynamics. In this context, the spike response is not sought as a deterministic response but as a conditional probability: "reading out the code" consists of inferring such a probability. This probability is computed from empirical raster plots using the framework of thermodynamic formalism in ergodic theory. This yields a parametric statistical model in which the probability takes the form of a Gibbs distribution. In this respect, the approach generalizes the seminal work of Schneidman and collaborators. A minimal presentation of the formalism is reviewed here, and a general algorithmic estimation method is proposed, yielding fast convergent implementations. We also make explicit how several spike observables (entropy, rate, synchronization, correlations) are obtained in closed form from the parametric estimation. This paradigm not only allows us to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code such as: "are correlations (or time synchrony, or a given set of spike patterns) significant with respect to rate coding alone?" A numerical validation of the method is proposed, and the perspectives regarding spike-train code analysis are also discussed.
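The comparative question at the end of the abstract ("are correlations significant with respect to rate coding alone?") can be illustrated with a minimal sketch, not the paper's thermodynamic-formalism estimator: compare the empirical distribution of instantaneous spike words against the independent, rate-only model via the Kullback-Leibler divergence. The raster below is hypothetical.

```python
import math
from collections import Counter

def kl_vs_rate_model(raster):
    """KL divergence (bits) between the empirical distribution of spike
    words (columns of a binary raster) and the independent model built
    from each neuron's firing rate alone. A large value indicates
    structure beyond rate coding (e.g. synchrony)."""
    n_bins = len(raster[0])
    words = Counter(tuple(row[t] for row in raster) for t in range(n_bins))
    rates = [sum(row) / n_bins for row in raster]
    kl = 0.0
    for word, count in words.items():
        p_emp = count / n_bins
        p_ind = 1.0
        for spike, r in zip(word, rates):
            p_ind *= r if spike else 1.0 - r
        kl += p_emp * math.log2(p_emp / p_ind)
    return kl

# Two perfectly synchronized neurons, each firing at rate 1/2:
raster = [[1, 0, 1, 0, 1, 0, 1, 0],
          [1, 0, 1, 0, 1, 0, 1, 0]]
print(kl_vs_rate_model(raster))  # 1.0 bit beyond the rate-only model
```

A rate-only description is blind to the synchrony here, which is exactly the kind of structure the model comparison is meant to detect.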

    Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a model of the recorded activity that reproduces the main statistics of the data is required. In the first part, we review recent results on spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models in which memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte-Carlo sampling, suited to fitting large-scale spatio-temporal MaxEnt models. The formalism and tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles.
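To give the flavor of Monte-Carlo sampling of a MaxEnt model, here is a toy single-time-bin, Ising-like Metropolis sampler, not the spatio-temporal method of the paper; the field `h` and coupling `J` values below are made up.

```python
import math, random

def metropolis_maxent(h, J, n_samples, burn_in=1000, seed=0):
    """Single-bit-flip Metropolis sampler for a pairwise maximum-entropy
    (Ising-like) model over binary spike words w, with energy
    E(w) = -sum_i h[i] w[i] - sum_{i<j} J[i][j] w[i] w[j]."""
    rng = random.Random(seed)
    n = len(h)
    w = [rng.randint(0, 1) for _ in range(n)]

    def energy(w):
        # Recomputed from scratch for clarity; incremental energy
        # differences are used in practice for large networks.
        e = -sum(h[i] * w[i] for i in range(n))
        for i in range(n):
            for j in range(i + 1, n):
                e -= J[i][j] * w[i] * w[j]
        return e

    samples = []
    e = energy(w)
    for step in range(burn_in + n_samples):
        i = rng.randrange(n)
        w[i] ^= 1                      # propose flipping one bit
        e_new = energy(w)
        if e_new <= e or rng.random() < math.exp(e - e_new):
            e = e_new                  # accept the flip
        else:
            w[i] ^= 1                  # reject: undo the flip
        if step >= burn_in:
            samples.append(tuple(w))
    return samples

# A strong positive coupling should make (1,1) and (0,0) dominate.
J = [[0.0, 2.0], [0.0, 0.0]]
samples = metropolis_maxent(h=[0.0, 0.0], J=J, n_samples=5000)
sync = sum(s in ((1, 1), (0, 0)) for s in samples) / len(samples)
```

Sampling is what makes fitting feasible at large scale: model averages needed by the estimation are replaced by Monte-Carlo averages instead of exhaustive sums over all 2^N words.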

    Information entropy production of maximum entropy Markov chains from spike trains

    "The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notion of pre-synaptic and post-synaptic neurons, stimulus correlations and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling the spike train statistics.

    Linear response for spiking neuronal networks with unbounded memory

    We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of weak-amplitude, time-dependent external stimuli on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory of spike dynamics can extend arbitrarily far into the past. Using this approach, we show how linear response is explicitly related to neuronal dynamics with an example, the gIF model introduced by M. Rudolph and A. Destexhe. This example illustrates the collective effect of the stimuli, intrinsic neuronal dynamics, and network connectivity on spike statistics. We illustrate our results with numerical simulations.
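The flavor of such a linear response relation can be shown on a toy two-state Markov chain (not the gIF model): the first-order effect of a weak perturbation of a transition rate on a stationary statistic is predicted from the unperturbed chain alone, with an error that is second order in the perturbation. All numbers below are made up.

```python
def stationary_pi0(a, b):
    """Stationary probability of state 0 for the two-state chain
    P = [[1-a, a], [b, 1-b]] (a: 0->1 escape rate, b: 1->0 rate)."""
    return b / (a + b)

# Spontaneous (unperturbed) regime.
a, b = 0.2, 0.3
# Weak "stimulus": perturb the escape rate a by epsilon.
eps = 1e-3
# Linear response: first-order prediction using only the unperturbed
# chain, via d(pi0)/da = -b / (a + b)**2.
predicted = stationary_pi0(a, b) - eps * b / (a + b) ** 2
exact = stationary_pi0(a + eps, b)
print(abs(predicted - exact))  # second order in eps (~2.4e-6)
```

The point mirrors the abstract: the response to a weak stimulus is computable from spontaneous statistics, without re-solving the perturbed dynamics.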

    Blind Construction of Optimal Nonlinear Recursive Predictors for Discrete Sequences

    We present a new method for nonlinear prediction of discrete random sequences under minimal structural assumptions. We give a mathematical construction for optimal predictors of such processes, in the form of hidden Markov models. We then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which approximates the ideal predictor from data. We discuss the reliability of CSSR, its data requirements, and its performance in simulations. Finally, we compare our approach to existing methods using variable-length Markov models and cross-validated hidden Markov models, and show theoretically and experimentally that our method delivers results superior to the former and at least comparable to the latter.
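A minimal sketch of the raw ingredient CSSR operates on: conditional next-symbol distributions for histories of bounded length. CSSR's actual contribution, merging histories with statistically indistinguishable futures into causal states via hypothesis tests, is not implemented here; the prediction rule below is just a simple variable-length Markov baseline of the kind the paper compares against.

```python
from collections import Counter, defaultdict

def history_counts(sequence, max_len):
    """Count next-symbol occurrences conditioned on each suffix of the
    past, up to max_len symbols. CSSR starts from these conditional
    distributions and then splits/merges histories into causal states."""
    counts = defaultdict(Counter)
    for t in range(len(sequence) - 1):
        for k in range(1, max_len + 1):
            if t + 1 >= k:
                suffix = tuple(sequence[t + 1 - k : t + 1])
                counts[suffix][sequence[t + 1]] += 1
    return counts

def predict_next(counts, history, max_len):
    """Predict using the longest recorded suffix of the history."""
    for k in range(min(max_len, len(history)), 0, -1):
        suffix = tuple(history[-k:])
        if suffix in counts:
            return counts[suffix].most_common(1)[0][0]
    return None

seq = list("abcabcabcabc")                   # a period-3 toy sequence
counts = history_counts(seq, max_len=2)
print(predict_next(counts, list("ab"), 2))   # 'c'
```

On this toy sequence a length-2 history already determines the next symbol; the interest of causal states is that they achieve optimal prediction with the minimal number of such equivalence classes.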

    Parametric Estimation of Gibbs distributions as generalized maximum-entropy models for the analysis of spike train statistics.

    This work is an extended and revised version of a previous arXiv preprint, submitted to HAL as http://hal.inria.fr/inria-00534847/fr/. We propose a generalization of the existing maximum entropy models used for spike train statistics analysis. We introduce a simple method to estimate Gibbs distributions, generalizing existing approaches based on the Ising model or one-step Markov chains to arbitrary parametric potentials. Our method takes memory effects in the dynamics into account. It directly provides the "free-energy" density and the Kullback-Leibler divergence between the empirical statistics and the statistical model. It does not assume a specific form for the Gibbs potential and does not require the assumption of detailed balance. Furthermore, it allows the comparison of different statistical models and offers control of the finite-size sampling effects inherent to empirical statistics, using large deviations results. A numerical validation of the method is proposed, and the perspectives regarding spike-train code analysis are also discussed.
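A minimal sketch of the estimation principle in the memoryless, Ising-like special case (the paper's method handles arbitrary parametric potentials with memory, which this toy does not): gradient ascent on the log-likelihood of an exponential-family model, whose gradient is exactly "empirical average minus model average" of each feature. The data below are hypothetical.

```python
import math
from itertools import product

def fit_gibbs(data, n, lr=0.5, steps=3000):
    """Fit P(w) ~ exp(sum_k lam[k] * f_k(w)) over binary words of
    length n, with features f_k = single bits and pairwise products.
    The log-likelihood gradient is (empirical mean - model mean), so
    at convergence the model reproduces the observed feature averages.
    The 2**n sum below is exact but only viable for small n."""
    feats = [lambda w, i=i: w[i] for i in range(n)]
    feats += [lambda w, i=i, j=j: w[i] * w[j]
              for i in range(n) for j in range(i + 1, n)]
    words = list(product((0, 1), repeat=n))
    emp = [sum(f(w) for w in data) / len(data) for f in feats]
    lam = [0.0] * len(feats)
    for _ in range(steps):
        weights = [math.exp(sum(l * f(w) for l, f in zip(lam, feats)))
                   for w in words]
        z = sum(weights)
        model = [sum(wt / z * f(w) for wt, w in zip(weights, words))
                 for f in feats]
        lam = [l + lr * (e - m) for l, e, m in zip(lam, emp, model)]
    return lam

# Hypothetical raster words from two mostly synchronized neurons:
data = [(1, 1), (1, 1), (1, 1), (1, 1), (0, 0), (0, 0), (1, 0), (0, 1)]
lam = fit_gibbs(data, n=2)
```

The normalization log(z)/n plays the role of the "free-energy" density mentioned in the abstract; replacing the single-time features with functions of spike blocks is what turns this into the memory-aware estimation the paper develops.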
