
    The information transmitted by spike patterns in single neurons

    Spike patterns have been reported to encode sensory information in several brain areas. Here we assess the role of specific patterns in the neural code by comparing the amount of information transmitted under different choices of the readout neural alphabet. This allows us to rank several alternative alphabets by the amount of information that can be extracted from them, and thereby to identify the specific patterns that constitute the most prominent ingredients of the code. Finally, we discuss the interplay of categorical and temporal information in the amount of synergy or redundancy in the neural code. Comment: To be published in Journal of Physiology Paris 200
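The comparison of readout alphabets described above can be sketched with a plug-in mutual-information estimate on synthetic data. Everything below (the two stimuli, the 3-bin patterns, the sample size) is an illustrative assumption, not data from the paper; the point is only that a spike-count alphabet and a spike-pattern alphabet can extract different amounts of information from the same spike trains.

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples."""
    _, s_idx = np.unique(stimuli, return_inverse=True)
    _, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((s_idx.max() + 1, r_idx.max() + 1))
    np.add.at(joint, (s_idx, r_idx), 1)          # joint histogram
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)        # P(s)
    pr = joint.sum(axis=0, keepdims=True)        # P(r)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# Two stimuli that evoke different 3-bin patterns with the SAME spike count:
# a count alphabet cannot separate them, a pattern alphabet can.
stimuli  = np.array([0, 1] * 100)
patterns = np.where(stimuli == 0, "101", "011")   # both carry 2 spikes
counts   = np.char.count(patterns, "1")

i_pattern = mutual_information(stimuli, patterns)  # 1.0 bit
i_count   = mutual_information(stimuli, counts)    # 0.0 bits
```

Ranking alternative alphabets then amounts to comparing such estimates across readouts; in practice one also needs bias corrections for limited sampling, which this sketch omits.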

    Information transmission in oscillatory neural activity

    Periodic neural activity not locked to the stimulus or to motor responses is usually ignored. Here, we present new tools for modeling and quantifying the information transmission based on periodic neural activity that occurs with quasi-random phase relative to the stimulus. We propose a model to reproduce characteristic features of oscillatory spike trains, such as histograms of inter-spike intervals and phase locking of spikes to an oscillatory influence. The proposed model is based on an inhomogeneous Gamma process governed by a density function that is the product of the usual stimulus-dependent rate and a quasi-periodic function. Further, we present an analysis method generalizing the direct method (Rieke et al., 1999; Brenner et al., 2000) to assess the information content in such data. We demonstrate these tools on recordings from relay cells in the lateral geniculate nucleus of the cat. Comment: 18 pages, 8 figures, to appear in Biological Cybernetics
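An inhomogeneous Gamma process of the kind described can be sampled by time rescaling: draw unit-mean Gamma intervals in rescaled time and map them back through the cumulative intensity. The sketch below assumes illustrative parameter values (rates, frequencies, Gamma order); the abstract does not specify the paper's actual fits.

```python
import numpy as np

rng = np.random.default_rng(0)

def inhom_gamma_spikes(rate_fn, order, t_max, dt=1e-4):
    """Inhomogeneous Gamma process via time rescaling: draw
    Gamma(order, 1/order) intervals (unit mean) in rescaled time,
    then invert the cumulative intensity Lambda(t) to get spike times."""
    t = np.arange(0.0, t_max, dt)
    cum = np.cumsum(rate_fn(t)) * dt                      # Lambda(t)
    n_max = int(2 * cum[-1]) + 50                         # ample draws
    targets = np.cumsum(rng.gamma(order, 1.0 / order, size=n_max))
    return t[np.searchsorted(cum, targets[targets < cum[-1]])]

# Density = stimulus-dependent rate x quasi-periodic modulation
# (both functional forms below are illustrative assumptions).
f_stim, f_osc = 3.0, 40.0                                 # Hz
rate = lambda t: 20.0 * (1 + np.sin(2 * np.pi * f_stim * t)) \
                      * (1 + 0.5 * np.sin(2 * np.pi * f_osc * t))
spikes = inhom_gamma_spikes(rate, order=4, t_max=5.0)
```

A Gamma order above 1 makes the intervals more regular than Poisson, which is what lets the model reproduce realistic inter-spike-interval histograms alongside phase locking to the oscillatory factor.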

    Implications of single-neuron gain scaling for information transmission in networks

    Summary: 

Many neural systems are equipped with mechanisms to efficiently encode sensory information. To represent natural stimuli with time-varying statistical properties, neural systems should adjust their gain to the inputs' statistical distribution. Such matching of dynamic range to input statistics has been shown to maximize the information transmitted by the output spike trains (Brenner et al., 2000, Fairhall et al., 2001). Gain scaling has not only been observed as a system response property, but also in single neurons in developing somatosensory cortex stimulated with currents of different amplitude (Mease et al., 2010). While gain scaling holds for cortical neurons at the end of the first post-natal week, at birth these neurons lack this property. The observed improvement in gain scaling coincides with the disappearance of spontaneous waves of activity in cortex (Conheim et al., 2010).

We studied how single-neuron gain scaling affects the dynamics of signal transmission in networks, using the developing cortex as a model. In a one-layer feedforward network, we showed that the absence of gain control made the network relatively insensitive to uncorrelated local input fluctuations. As a result, these neurons selectively and synchronously responded to large, slowly-varying correlated input: the slow build-up of synaptic noise generated in pacemaker circuits, which most likely triggers waves. Neurons in gain-scaling networks were more sensitive to the small-scale input fluctuations and responded asynchronously to the slow envelope. Thus, gain scaling both increases information in individual neurons about private inputs and allows the population average to encode the slow fluctuations in the input. Paradoxically, the synchronous firing that corresponds to wave propagation is associated with low information transfer. We therefore suggest that the emergence of gain scaling may help the system to increase information transmission on multiple timescales as sensory stimuli become important later in development.

Methods:

Networks with one and two layers, consisting of hundreds of model neurons, were constructed. The ability of single neurons to gain-scale was controlled by changing the ratio of sodium to potassium conductances in Hodgkin-Huxley neurons (Mainen et al., 1995). The response of single-layer networks was studied with ramp-like stimuli whose slopes varied over several hundreds of milliseconds; fast fluctuations were superimposed on this slowly-varying mean. The response of these networks was then tested with continuous stimuli. Gain-scaling networks captured the slow fluctuations in the inputs, while non-scaling networks simply thresholded the input. Quantifying information transmission confirmed that gain-scaling neurons transmit more information about the stimulus. With the two-layer networks, we simulated a cortical network in which waves could spontaneously emerge, propagate and degrade, depending on the gain-scaling properties of the neurons in the network.
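The thresholding vs. gain-scaling contrast can be sketched with a much simpler stand-in than the Hodgkin-Huxley networks used in the study: a divisive-normalization rate model. Everything below (the ramp and noise scales, the window length, the sigmoid nonlinearity) is an illustrative assumption, not the study's model; the sketch only shows why a fixed threshold tracks the slow envelope while divisive gain scaling keeps responding to small fluctuations at every level of the mean.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)

# Ramp-like slow mean with fast fluctuations superimposed (scales assumed)
slow = 2.0 * np.clip(t, 0.0, 1.0)
stim = slow + 0.3 * rng.standard_normal(t.size)

def rate_nonscaling(x, theta=1.0, gain=50.0):
    """No gain control: the unit simply thresholds the input,
    so it is silent whenever the input sits below theta."""
    return gain * np.maximum(x - theta, 0.0)

def rate_scaling(x, window=200, gain=50.0):
    """Divisive gain scaling: normalise each sample by a running
    estimate of the local mean and s.d., so the dynamic range
    tracks the input statistics at every level of the envelope."""
    pad = np.concatenate([np.full(window, x[0]), x])
    mu = np.array([pad[i:i + window].mean() for i in range(x.size)])
    sd = np.array([pad[i:i + window].std() for i in range(x.size)]) + 1e-6
    return gain / (1.0 + np.exp(-(x - mu) / sd))
```

Feeding both units a low-mean, small-fluctuation input makes the contrast explicit: the non-scaling unit stays silent, while the scaling unit still modulates its rate around the local fluctuations.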

    Homunculus strides again: why ‘information transmitted’ in neuroscience tells us nothing

    Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate “information transmitted,” a hypothetical measure of how well neurons “discriminate” amongst stimuli. Neuroscientists’ computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience. The paper aims to discuss these issues.
    Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon’s “general communication system.” Neuroscientists’ interpretation of that model is scrutinized here.
    Findings – In Shannon’s system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of “information transmitted.” Significantly, Shannon’s system’s “reception” (decoding) side physically mirrors its “transmission” (encoding) side. However, neurons lack the “reception” side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in “theories of consciousness,” is obviated if consciousness proves to be “emergent.”
    Research limitations/implications – Neuroscientists’ “information transmitted” indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal.
    Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists’ use of Shannon Information Theory, namely, Homunculus. Almost 50 years’ worth of computations are recognized as irrelevant, mandating fresh approaches to understanding “discriminability.”

    Applications of Information Theory to Analysis of Neural Data

    Information theory is a practical and theoretical framework developed for the study of communication over noisy channels. Its probabilistic basis and its capacity to relate statistical structure to function make it ideally suited for studying information flow in the nervous system. It has a number of useful properties: it is a general measure sensitive to any relationship, not only linear effects; it has meaningful units, which in many cases allow direct comparison between different experiments; and it can be used to study how much information can be gained by observing neural responses in single trials, rather than in averages over multiple trials. A variety of information-theoretic quantities are commonly used in neuroscience (see the entry "Definitions of Information-Theoretic Quantities"). In this entry we review some applications of information theory in neuroscience to the study of encoding of information in both single neurons and neuronal populations. Comment: 8 pages, 2 figures
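One widely used quantity of the kind this review covers is the "direct method" information rate: the total entropy of the spike-word distribution minus the mean "noise" entropy across repeated trials. A minimal sketch, assuming words are already discretized into a trials-by-time matrix (the data below are synthetic):

```python
import numpy as np

def entropy_bits(symbols):
    """Plug-in entropy in bits of a sample of discrete symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def direct_method_info(words):
    """words[trial, time] holds one discrete spike 'word' per time bin.
    Information = total word entropy minus the mean noise entropy
    (entropy across repeated trials at each time bin)."""
    h_total = entropy_bits(words.ravel())
    h_noise = np.mean([entropy_bits(words[:, j])
                       for j in range(words.shape[1])])
    return h_total - h_noise

# Perfectly reproducible responses: noise entropy is zero, so the
# information equals the total entropy of the word distribution.
words = np.tile(np.arange(8) % 4, (5, 1))   # 5 trials, 8 time bins
info = direct_method_info(words)            # 2.0 bits
```

In single-trial analyses this is what allows bits to be attributed to responses without trial averaging, though real applications must correct the plug-in estimates for limited-sampling bias.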