
    Revisiting chaos in stimulus-driven spiking networks: signal encoding and discrimination

    Highly connected recurrent neural networks often produce chaotic dynamics, meaning their precise activity is sensitive to small perturbations. What are the consequences for how such networks encode streams of temporal stimuli? On the one hand, chaos is a strong source of randomness, suggesting that small changes in stimuli will be obscured by intrinsically generated variability. On the other hand, recent work shows that the type of chaos that occurs in spiking networks can have a surprisingly low-dimensional structure, suggesting that there may be "room" for fine stimulus features to be precisely resolved. Here we show that strongly chaotic networks produce patterned spikes that reliably encode time-dependent stimuli: using a decoder sensitive to spike times on timescales of tens of milliseconds, one can easily distinguish responses to very similar inputs. Moreover, recurrence serves to distribute signals throughout chaotic networks, so that small groups of cells can encode substantial information about signals arriving elsewhere. We conclude that the presence of strong chaos in recurrent networks does not prohibit precise stimulus encoding.
    Comment: 8 figures
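    A minimal sketch of the kind of spike-time decoder the abstract alludes to (not the authors' implementation; the binning scheme, template averaging, and all names below are assumptions): spike trains are binned at roughly 10 ms resolution, and a held-out trial is assigned to the stimulus whose trial-averaged response template it lies closest to.

```python
import numpy as np

def binned_counts(spike_times, t_max=1.0, bin_ms=10.0):
    """Spike times (in seconds) -> vector of spike counts in ~10 ms bins."""
    edges = np.arange(0.0, t_max + 1e-9, bin_ms / 1000.0)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.astype(float)

def decode(test_train, templates, t_max=1.0):
    """Assign a test spike train to the stimulus whose trial-averaged
    binned response (template) is closest in Euclidean distance."""
    x = binned_counts(test_train, t_max)
    dists = {s: np.linalg.norm(x - tmpl) for s, tmpl in templates.items()}
    return min(dists, key=dists.get)

# Toy usage: stimulus "A" drives early spiking, "B" drives late spiking.
rng = np.random.default_rng(0)
make_trials = lambda lo, hi: [np.sort(rng.uniform(lo, hi, 30)) for _ in range(20)]
trials = {"A": make_trials(0.0, 0.6), "B": make_trials(0.4, 1.0)}
templates = {s: np.mean([binned_counts(tr) for tr in trs[1:]], axis=0)
             for s, trs in trials.items()}
print(decode(trials["A"][0], templates))  # expected to print "A"
```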

    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states influence local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.

    State Dependence of Stimulus-Induced Variability Tuning in Macaque MT

    Behavioral states marked by varying levels of arousal and attention modulate some properties of cortical responses (e.g., average firing rates or pairwise correlations), yet it is not fully understood what drives these response changes and how they might affect downstream stimulus decoding. Here we show that changes in state modulate the tuning of response variance-to-mean ratios (Fano factors) in a fashion that is predicted neither by a Poisson spiking model nor by changes in the mean firing rate, with a substantial effect on stimulus discriminability. We recorded motion-sensitive neurons in middle temporal cortex (MT) in two states: alert fixation and light, opioid anesthesia. Anesthesia tended to lower average spike counts without decreasing trial-to-trial variability compared to the alert state. Under anesthesia, within-trial fluctuations in excitability were correlated over longer time scales than in the alert state, creating supra-Poisson Fano factors. In contrast, alert-state MT neurons had higher mean firing rates and largely sub-Poisson variability that was stimulus-dependent and could not be explained by firing-rate differences alone. The absence of such stimulus-induced variability tuning in the anesthetized state suggests different sources of variability between states. A simple model explains the state-dependent shifts in the distribution of observed Fano factors via a suppression of the variance of gain fluctuations in the alert state. A population model with stimulus-induced variability tuning and behaviorally constrained information-limiting correlations explores the potential enhancement of stimulus discriminability by the cortical population in the alert state.
    Comment: 36 pages, 18 figures
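    For readers unfamiliar with the statistic, here is a minimal sketch of the Fano-factor computation the abstract builds on (the data shapes and parameter values are illustrative assumptions, not the paper's analysis code): the Fano factor is the trial-to-trial variance-to-mean ratio of spike counts for one stimulus condition, so Poisson spiking gives values near 1, while shared gain fluctuations within trials push it above 1, as reported under anesthesia.

```python
import numpy as np

def fano_factor(spike_counts):
    """Trial-to-trial variance-to-mean ratio of spike counts for one condition."""
    counts = np.asarray(spike_counts, dtype=float)
    mean = counts.mean()
    return counts.var(ddof=1) / mean if mean > 0 else np.nan

# Toy comparison: pure Poisson counts give F ~ 1, while per-trial gain
# fluctuations (a doubly stochastic process) inflate F above 1.
rng = np.random.default_rng(1)
rate = 20.0                                              # mean spikes per trial
poisson_counts = rng.poisson(rate, size=200)             # F close to 1
gains = rng.gamma(shape=5.0, scale=1.0 / 5.0, size=200)  # slow excitability drift
gain_modulated = rng.poisson(rate * gains)               # F well above 1
print(fano_factor(poisson_counts), fano_factor(gain_modulated))
```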

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model captures the single-cell response properties as well as the correlations in neural spiking due to the shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and, in particular, outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of the population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
    Comment: 11 pages, 7 figures
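    A minimal sketch of the functional form the abstract describes (the notation, the tanh nonlinearity, and all parameter names are assumptions, not the paper's code): the probability of a binary codeword r given a stimulus s is taken to be proportional to exp(sum_i h_i(s) r_i + sum_{i<j} J_ij r_i r_j), where each stimulus-dependent field h_i(s) comes from a linear filter of the stimulus passed through a nonlinearity, and the J_ij are stimulus-independent pairwise couplings.

```python
import numpy as np

def sdme_log_prob_unnorm(r, stimulus, filters, J, nonlinearity=np.tanh):
    """Unnormalized log-probability of a binary codeword r under the assumed
    SDME form: stimulus-dependent fields plus stimulus-independent couplings.

    r        : (N,) 0/1 vector of spiking/silence across the population
    stimulus : (T,) stimulus window preceding the response
    filters  : (N, T) one temporal filter per neuron (linear-nonlinear part)
    J        : (N, N) symmetric coupling matrix with zero diagonal
    """
    h = nonlinearity(filters @ stimulus)   # h_i(s): stimulus-dependent fields
    return r @ h + 0.5 * r @ J @ r         # sum_i h_i(s) r_i + sum_{i<j} J_ij r_i r_j

# Toy usage: 5 neurons, a 50-sample stimulus window, random parameters.
rng = np.random.default_rng(2)
N, T = 5, 50
filters = rng.normal(size=(N, T)) / np.sqrt(T)
J = rng.normal(scale=0.1, size=(N, N))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
stim = rng.normal(size=T)
r = rng.integers(0, 2, size=N)
print(sdme_log_prob_unnorm(r, stim, filters, J))
```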