
    Estimating the Amount of Information Conveyed by a Population of Neurons

    Recent technological advances have made the simultaneous recording of the activity of many neurons common. However, estimating the amount of information conveyed by the discharge of a neural population remains a significant challenge. Here we describe our recently published analysis method that assists in such estimates. We describe the key concepts and assumptions on which the method is based, illustrate its use with data from both simulated and real neurons recorded from the lateral geniculate nucleus of a monkey, and show how it can be used to calculate redundancy and synergy among neuronal groups.
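
    As a rough illustration of the quantities involved, the sketch below computes a plug-in ("direct") mutual information estimate between a discrete stimulus and discretized responses, then combines single-neuron and joint estimates into a redundancy index (positive values indicate redundancy, negative values synergy). The function names and simulated data are illustrative assumptions, not the authors' published method.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for discrete sequences."""
    n = len(x)
    cxy, cx, cy = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * np.log2(c * n / (cx[a] * cy[b]))
               for (a, b), c in cxy.items())

# Simulated example: two noisy neurons driven by the same binary stimulus.
rng = np.random.default_rng(0)
stim = rng.integers(0, 2, 10_000)
r1 = stim ^ (rng.random(10_000) < 0.1)   # neuron 1: 10% bit-flip noise
r2 = stim ^ (rng.random(10_000) < 0.1)   # neuron 2: 10% bit-flip noise
joint = list(zip(r1, r2))

# Redundancy index: sum of single-neuron informations minus joint information.
redundancy = (mutual_information(r1, stim) + mutual_information(r2, stim)
              - mutual_information(joint, stim))
print(f"redundancy index: {redundancy:.3f} bits")
```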

    Estimating the Amount of Information Conveyed by a Population of Neurons

    Recent advances in electrophysiological recording technology have allowed for the collection of data from large populations of neurons simultaneously. Yet despite these advances, methods for estimating the amount of information conveyed by multiple neurons have been stymied by the “curse of dimensionality”: as the number of included neurons increases, so too does the dimensionality of the data needed for such measurements, leading to an exponential and therefore intractable increase in the amount of data required for valid measurements. Here we put forth a novel method for estimating the amount of information transmitted by the discharge of a large population of neurons, a method which exploits the little-known fact that (under certain constraints) the Fourier coefficients of variables such as neural spike trains follow a Gaussian distribution. This fact enables an accurate measure of information even with limited data. The method, which we call the Fourier Method, is presented in detail, tested for robustness, and demonstrated on both simulated and real spike trains.
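
    The premise that Fourier coefficients of repeated responses are approximately Gaussian is what makes a parametric estimate tractable. The sketch below is not the authors' Fourier Method itself, but a minimal Gaussian-channel estimate built on the same premise: per frequency, it separates the trial-averaged signal from trial-to-trial noise in the Fourier coefficients and applies I(f) = 0.5 * log2(1 + SNR(f)). The array layout and names are assumptions.

```python
import numpy as np

def gaussian_fourier_info(trials, fs):
    """
    trials : (n_trials, n_bins) binned spike counts from repeated
             presentations of the same time-varying stimulus.
    fs     : sampling rate of the bins in Hz.
    Returns (freqs, info): per-frequency information in bits under the
    Gaussian assumption, I(f) = 0.5 * log2(1 + SNR(f)).
    """
    coefs = np.fft.rfft(trials, axis=1)
    mean_coef = coefs.mean(axis=0)               # stimulus-locked signal
    noise = coefs - mean_coef                    # trial-to-trial fluctuations
    signal_power = np.abs(mean_coef) ** 2
    noise_power = (np.abs(noise) ** 2).mean(axis=0)
    snr = signal_power / np.maximum(noise_power, 1e-12)
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    return freqs, 0.5 * np.log2(1.0 + snr)
```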

    Population coding in the primary visual cortex

    Ph.D. (Doctor of Philosophy)

    Low-frequency local field potentials and spikes in primary visual cortex convey independent visual information

    Local field potentials (LFPs) reflect subthreshold integrative processes that complement spike train measures. However, little is yet known about the differences between how LFPs and spikes encode rich naturalistic sensory stimuli. We addressed this question by recording LFPs and spikes from the primary visual cortex of anesthetized macaques while presenting a color movie. We then determined how the power of LFPs and spikes at different frequencies represents the visual features in the movie. We found that the most informative LFP frequency ranges were 1–8 Hz and 60–100 Hz. LFPs in the range of 12–40 Hz carried little information about the stimulus, and may primarily reflect neuromodulatory inputs. Spike power was informative only at frequencies <12 Hz. We further quantified “signal correlations” (correlations in the trial-averaged power response to different stimuli) and “noise correlations” (trial-by-trial correlations in the fluctuations around the average) of LFPs and spikes recorded from the same electrode. We found positive signal correlation between high-gamma LFPs (60–100 Hz) and spikes, as well as strong positive signal correlation within high-gamma LFPs, suggesting that high-gamma LFPs and spikes are generated within the same network. LFPs <24 Hz shared strong positive noise correlations, indicating that they are influenced by a common source, such as a diffuse neuromodulatory input. LFPs <40 Hz showed very little signal and noise correlation with LFPs >40 Hz and with spikes, suggesting that low-frequency LFPs reflect neural processes that in natural conditions are fully decoupled from those giving rise to spikes and to high-gamma LFPs.
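
    The quoted definitions of signal and noise correlations translate directly into a short computation. Below is a minimal sketch assuming band-power responses arranged as stimuli-by-trials matrices for two simultaneously recorded signals; it is not the authors' analysis code.

```python
import numpy as np

def signal_noise_correlations(power_a, power_b):
    """
    power_a, power_b : (n_stimuli, n_trials) band-power responses of two
    signals (e.g., high-gamma LFP power and spike power) to the same stimuli.
    Signal correlation: correlation of trial-averaged responses across stimuli.
    Noise correlation: correlation of within-stimulus fluctuations.
    """
    mean_a = power_a.mean(axis=1)
    mean_b = power_b.mean(axis=1)
    signal_corr = np.corrcoef(mean_a, mean_b)[0, 1]
    # Subtract the stimulus-driven mean, then correlate the residuals.
    resid_a = (power_a - mean_a[:, None]).ravel()
    resid_b = (power_b - mean_b[:, None]).ravel()
    noise_corr = np.corrcoef(resid_a, resid_b)[0, 1]
    return signal_corr, noise_corr
```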

    A Bivariate Measure of Redundant Information

    We define a measure of redundant information based on projections in the space of probability distributions. Redundant information between random variables is information that is shared between those variables, but, in contrast to mutual information, it denotes information shared about the outcome of a third variable. Formalizing this concept, and being able to measure it, is required for the non-negative decomposition of mutual information into redundant and synergistic components. Previous attempts to formalize redundant or synergistic information struggle to capture some desired properties. We introduce a new formalism for redundant information and prove that it satisfies all the necessary properties outlined in earlier work, as well as an additional criterion that we propose as necessary to capture redundancy. We also demonstrate the behaviour of this new measure for several examples, compare it to previous measures, and apply it to the decomposition of transfer entropy.
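
    The projection-based measure itself requires an optimization over probability distributions and is not reproduced here. As a hedged point of comparison, the sketch below implements the earlier Williams-Beer redundancy I_min for two sources, one of the prior formalisms such measures are designed to improve on; for the canonical AND gate it gives the familiar value of roughly 0.311 bits.

```python
import math
from collections import defaultdict

def i_min(p):
    """Williams-Beer redundancy I_min for two sources X1, X2 and target Y.
    p : dict {(x1, x2, y): prob}. An illustrative predecessor of the
    projection-based measure defined in the paper."""
    py = defaultdict(float)
    pxy = [defaultdict(float), defaultdict(float)]   # p(x_i, y)
    px = [defaultdict(float), defaultdict(float)]    # p(x_i)
    for (x1, x2, y), pr in p.items():
        py[y] += pr
        for i, x in enumerate((x1, x2)):
            pxy[i][(x, y)] += pr
            px[i][x] += pr

    def specific_info(i, y):
        # I(Y=y; X_i) = sum_x p(x|y) * [log2 p(y|x) - log2 p(y)]
        return sum((pr / py[y]) * (math.log2(pr / px[i][x]) - math.log2(py[y]))
                   for (x, yy), pr in pxy[i].items() if yy == y and pr > 0)

    return sum(py[y] * min(specific_info(0, y), specific_info(1, y))
               for y in py if py[y] > 0)

# AND gate with uniform inputs: redundancy is about 0.311 bits.
p_and = {(a, b, a & b): 0.25 for a in (0, 1) for b in (0, 1)}
print(f"I_min(AND) = {i_min(p_and):.3f} bits")
```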

    Quantifying synergistic mutual information

    Quantifying cooperation or synergy among random variables in predicting a single target random variable is an important problem in many complex systems. We review three prior information-theoretic measures of synergy and introduce a novel synergy measure defined as the difference between the whole and the union of its parts. We apply all four measures to a suite of binary circuits to demonstrate that our measure alone quantifies the intuitive concept of synergy across all examples. We also show that, under our measure of synergy, independent predictors can have positive redundant information.
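
    As a worked example of the "whole exceeds its parts" intuition, the sketch below computes the simplest baseline of the kind reviewed here, WholeMinusSum synergy, for a uniform XOR gate: neither input alone carries any information about the output, yet together they determine it completely, yielding 1 bit of synergy. The helper names are illustrative.

```python
import math

def mi(p_joint):
    """I(X;Y) in bits for a dict {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), pr in p_joint.items():
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
    return sum(pr * math.log2(pr / (px[x] * py[y]))
               for (x, y), pr in p_joint.items() if pr > 0)

def marginal(p, keep):
    """Collapse {(x1, x2, y): prob} onto ((kept inputs), y)."""
    out = {}
    for (x1, x2, y), pr in p.items():
        key = (tuple((x1, x2)[i] for i in keep), y)
        out[key] = out.get(key, 0.0) + pr
    return out

# XOR: individually the inputs are useless, jointly they determine Y.
p_xor = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}
whole = mi(marginal(p_xor, (0, 1)))                            # 1 bit
parts = mi(marginal(p_xor, (0,))) + mi(marginal(p_xor, (1,)))  # 0 bits
print(f"WholeMinusSum synergy for XOR: {whole - parts:.1f} bit")
```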