293 research outputs found

    Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a model of the recorded activity that reproduces the main statistics of the data is required. In the first part, we review recent results on spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte-Carlo sampling that is suited to fitting large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles. (Comment: 41 pages, 10 figures.)
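
    As a rough illustration of the fitting strategy described above (a minimal sketch, not the paper's actual algorithm), the following Python code fits a pairwise spatio-temporal MaxEnt model by Metropolis sampling and moment matching; all names and settings (h, J, lr, n_sweeps) are illustrative choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis_sample(h, J, T=200, n_sweeps=20):
        """Draw one binary raster S of shape (T, N) from the spatio-temporal MaxEnt
        model P(S) ~ exp(sum_t h.S_t + sum_t S_t' J S_{t-1}) by single-bin Metropolis flips."""
        N = h.size
        S = (rng.random((T, N)) < 0.5).astype(float)
        for _ in range(n_sweeps * T * N):
            t, i = rng.integers(T), rng.integers(N)
            # local field on bin (t, i): bias plus couplings to neighbouring time steps
            f = h[i]
            if t > 0:
                f += J[i] @ S[t - 1]
            if t < T - 1:
                f += J[:, i] @ S[t + 1]
            dE = -f * (1.0 - 2.0 * S[t, i])      # energy change if s_i(t) is flipped
            if rng.random() < np.exp(-dE):
                S[t, i] = 1.0 - S[t, i]
        return S

    def fit_maxent(S_data, n_iter=50, lr=0.1):
        """Gradient ascent on the log-likelihood: push model moments toward data moments."""
        T, N = S_data.shape
        m_data = S_data.mean(axis=0)                      # per-cell firing rates
        C_data = S_data[1:].T @ S_data[:-1] / (T - 1)     # one-step time correlations
        h, J = np.zeros(N), np.zeros((N, N))
        for _ in range(n_iter):
            S = metropolis_sample(h, J, T)
            h += lr * (m_data - S.mean(axis=0))
            J += lr * (C_data - S[1:].T @ S[:-1] / (T - 1))
        return h, J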

    Dynamical criticality in the collective activity of a population of retinal neurons

    Recent experimental results based on multi-electrode and imaging techniques have reinvigorated the idea that large neural networks operate near a critical point, between order and disorder. However, evidence for criticality has relied on the definition of arbitrary order parameters, or on models that do not address the dynamical nature of network activity. Here we introduce a novel approach to assess criticality that overcomes these limitations, while encompassing and generalizing previous criteria. We find a simple model to describe the global activity of large populations of ganglion cells in the rat retina, and show that their statistics are poised near a critical point. Taking into account the temporal dynamics of the activity greatly enhances the evidence for criticality, revealing it where previous methods would not. The approach is general and could be used in other biological networks.
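
    One common diagnostic behind this kind of analysis (sketched here under the assumption of an already fitted pairwise model with symmetric, zero-diagonal couplings; the paper's exact criterion may differ) is to scan an inverse temperature beta around the operating point and look for a peak in the specific heat near beta = 1:

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_energies(h, J, beta, n_steps=20000, burn=5000):
        """Metropolis samples of E(s) = -h.s - 0.5 s'Js (J symmetric, zero diagonal)."""
        N = h.size
        s = (rng.random(N) < 0.5).astype(float)
        energies = []
        for step in range(n_steps):
            i = rng.integers(N)
            dE = -(h[i] + J[i] @ s) * (1.0 - 2.0 * s[i])   # zero diagonal: no self-term
            if rng.random() < np.exp(-beta * dE):
                s[i] = 1.0 - s[i]
            if step >= burn:
                energies.append(-h @ s - 0.5 * s @ J @ s)
        return np.array(energies)

    def specific_heat(h, J, betas):
        """c(beta) = beta^2 Var[E] / N; a peak close to beta = 1 is read as a signature of criticality."""
        return np.array([b**2 * sample_energies(h, J, b).var() / h.size for b in betas])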

    A tractable method for describing complex couplings between neurons and population rate

    Neurons within a population are strongly correlated, but how to capture these correlations simply is still a matter of debate. Recent studies have shown that the activity of each cell is influenced by the population rate, defined as the summed activity of all neurons in the population. However, an explicit, tractable model for these interactions is still lacking. Here we build a probabilistic model of population activity that reproduces the firing rate of each cell, the distribution of the population rate, and the linear coupling between them. This model is tractable, meaning that its parameters can be learned in a few seconds on a standard computer even for large population recordings. We inferred our model for a population of 160 neurons in the salamander retina. In this population, single-cell firing rates depended in unexpected ways on the population rate. In particular, some cells had a preferred population rate at which they were most likely to fire. These complex dependencies could not be explained by a linear coupling between the cell and the population rate. We designed a more general, still tractable model that could fully account for these non-linear dependencies. We thus provide a simple and computationally tractable way to learn models that reproduce the dependence of each neuron on the population rate.
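
    The kind of dependency described above can be inspected directly from data. The sketch below (an illustration, not the paper's inference procedure) estimates each cell's empirical tuning to the population rate, P(spike_i = 1 | K), where K is the number of active neurons in a time bin; a cell with a preferred population rate shows a peaked rather than linear curve.

    import numpy as np

    def population_rate_tuning(S):
        """S: binary raster of shape (T, N). Returns the per-cell estimate of
        P(sigma_i = 1 | K = k) for each population rate k = 0 .. N."""
        T, N = S.shape
        K = S.sum(axis=1).astype(int)            # population rate in each time bin
        tuning = np.full((N + 1, N), np.nan)     # NaN where a rate value never occurs
        for k in range(N + 1):
            mask = K == k
            if mask.any():
                tuning[k] = S[mask].mean(axis=0)
        return tuning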

    Blindfold learning of an accurate neural metric

    The brain has no direct access to physical stimuli, but only to the spiking activity evoked in sensory organs. It is unclear how the brain can structure its representation of the world based on differences between those noisy, correlated responses alone. Here we show how to build a distance map of responses from the structure of the population activity of retinal ganglion cells, allowing for the accurate discrimination of distinct visual stimuli from the retinal response. We introduce the Temporal Restricted Boltzmann Machine to learn the spatiotemporal structure of the population activity, and use this model to define a distance between spike trains. We show that this metric outperforms existing neural distances at discriminating pairs of stimuli that are barely distinguishable. The proposed method provides a generic and biologically plausible way to learn to associate similar stimuli based on their spiking responses, without any other knowledge of these stimuli.
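
    One plausible way to turn a trained Boltzmann-machine model into a spike-train metric (a hedged sketch; the paper's exact definition may differ) is to map each response to the expected hidden-unit activations and measure Euclidean distance in that space. The weights W and hidden biases c below are assumed to come from a previously trained model.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_representation(v, W, c):
        """Expected hidden activations E[h | v] = sigmoid(c + W v) for a binary
        (flattened spatio-temporal) response vector v."""
        return sigmoid(c + W @ v)

    def rbm_distance(v1, v2, W, c):
        """Distance between two responses in the model's hidden space (illustrative metric)."""
        return np.linalg.norm(hidden_representation(v1, W, c) - hidden_representation(v2, W, c))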

    Closed-loop estimation of retinal network sensitivity reveals signature of efficient coding

    According to the theory of efficient coding, sensory systems are adapted to represent natural scenes with high fidelity and at minimal metabolic cost. Testing this hypothesis for sensory structures performing non-linear computations on high-dimensional stimuli is still an open challenge. Here we develop a method to characterize the sensitivity of the retinal network to perturbations of a stimulus. Using closed-loop experiments, we explore selectively the space of possible perturbations around a given stimulus. We then show that the response of the retinal population to these small perturbations can be described by a local linear model. Using this model, we compute the sensitivity of the neural response to arbitrary temporal perturbations of the stimulus, and find a peak in the sensitivity as a function of the frequency of the perturbations. Based on a minimal theory of sensory processing, we argue that this peak is set to maximize information transmission. Our approach is relevant to testing the efficient coding hypothesis locally in any context where no reliable encoding model is known.
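
    A minimal sketch of the local linear model idea (the names and the least-squares estimator are illustrative assumptions, not the paper's exact procedure): around a reference stimulus, the population response to a small perturbation ds is approximated as r ≈ r0 + F ds, with F estimated from many (perturbation, response) pairs.

    import numpy as np

    def fit_local_linear(perturbations, responses):
        """perturbations: (n_trials, d_stim); responses: (n_trials, n_neurons).
        Returns a baseline r0 and a filter F such that r ~ r0 + F @ ds.
        Using the mean response as the baseline assumes roughly zero-mean perturbations."""
        r0 = responses.mean(axis=0)
        X, *_ = np.linalg.lstsq(perturbations, responses - r0, rcond=None)
        return r0, X.T                            # F has shape (n_neurons, d_stim)

    def sensitivity(F, ds):
        """Magnitude of the predicted population-response change for perturbation ds."""
        return np.linalg.norm(F @ ds)

    Scanning sinusoidal perturbations ds at different temporal frequencies and plotting sensitivity(F, ds) against frequency is then a direct way to look for the sensitivity peak described above.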

    A simple model for low variability in neural spike trains

    Neural noise sets a limit to information transmission in sensory systems. In several areas, the spiking response to a repeated stimulus has been shown to be more regular than a Poisson process predicts. However, a simple model to explain this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, which can accurately predict the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters, but can reproduce the observed variability in retinal recordings in various conditions. We show analytically why this approximation works: in a model of the spike-emission process that includes a refractory period, we derive that our simple correction approximates the spike train statistics well over a broad range of firing rates. Our model can easily be plugged into stimulus-processing models, such as the linear-nonlinear model or its generalizations, to replace the commonly assumed Poisson spike train hypothesis. It estimates the amount of information transmitted much more accurately than Poisson models in retinal recordings. Thanks to its simplicity, this model has the potential to explain low variability in other areas.
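
    To illustrate the general mechanism invoked here (refractoriness pushes spike-count variability below Poisson) without presuming the paper's specific two-parameter model, the sketch below draws spikes as a Bernoulli process in small time bins with an absolute refractory period and checks that the Fano factor falls below 1.

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_counts(rate_hz=50.0, refrac_ms=5.0, duration_s=1.0, dt_ms=1.0, n_trials=500):
        """Spike counts per trial from a Bernoulli process with an absolute refractory period."""
        n_bins = int(duration_s * 1000 / dt_ms)
        p = rate_hz * dt_ms / 1000.0                 # spike probability per bin
        refrac_bins = int(refrac_ms / dt_ms)
        counts = np.zeros(n_trials)
        for trial in range(n_trials):
            last = -(refrac_bins + 1)                # no refractoriness at trial start
            for t in range(n_bins):
                if t - last > refrac_bins and rng.random() < p:
                    counts[trial] += 1
                    last = t
        return counts

    counts = simulate_counts()
    print("Fano factor:", counts.var() / counts.mean())   # < 1: sub-Poisson variability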

    Pairwise Ising model analysis of human cortical neuron recordings

    Cortical neural networks behave differently during wakefulness and deep sleep, with the latter characterized by transients of high network activity. To investigate their impact on neuronal behavior, we apply a pairwise Ising model analysis by inferring the maximum entropy model that reproduces the single and pairwise moments of the neurons' spiking activity. In this work we first review the inference algorithm introduced in Ferrari, Phys. Rev. E (2016). We then succeed in applying the algorithm to infer the model from a large ensemble of neurons recorded by a multi-electrode array in human temporal cortex. We compare the Ising model's performance in capturing the statistical properties of the network activity during wakefulness and deep sleep. For the latter, the pairwise model misses relevant transients of high network activity, suggesting that additional constraints are necessary to accurately model the data. (Comment: 8 pages, 3 figures, Geometric Science of Information 2017 conference.)
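
    For concreteness, here is one standard, tractable way to fit a pairwise Ising model to a binary raster, via pseudo-likelihood with per-neuron logistic regressions. This is not the specific algorithm of Ferrari (2016), and it assumes every neuron both spikes and is silent at least once in the recording.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_ising_pseudolikelihood(S, C=1.0):
        """S: binary raster (T, N). Estimates biases h and couplings J of a pairwise
        Ising model from the conditionals P(s_i = 1 | rest) = sigmoid(h_i + sum_j J_ij s_j)."""
        T, N = S.shape
        h = np.zeros(N)
        J = np.zeros((N, N))
        for i in range(N):
            X = np.delete(S, i, axis=1)                   # all other neurons, same bin
            clf = LogisticRegression(C=C, max_iter=1000).fit(X, S[:, i])
            h[i] = clf.intercept_[0]
            J[i, np.arange(N) != i] = clf.coef_[0]
        return h, (J + J.T) / 2.0                         # symmetrize the two estimates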