
    State Dependence of Stimulus-Induced Variability Tuning in Macaque MT

    Behavioral states marked by varying levels of arousal and attention modulate some properties of cortical responses (e.g. average firing rates or pairwise correlations), yet it is not fully understood what drives these response changes and how they might affect downstream stimulus decoding. Here we show that changes in state modulate the tuning of response variance-to-mean ratios (Fano factors) in a fashion that is predicted neither by a Poisson spiking model nor by changes in the mean firing rate, with a substantial effect on stimulus discriminability. We recorded motion-sensitive neurons in middle temporal cortex (MT) in two states: alert fixation and light, opioid anesthesia. Anesthesia tended to lower average spike counts without decreasing trial-to-trial variability compared to the alert state. Under anesthesia, within-trial fluctuations in excitability were correlated over longer time scales than in the alert state, creating supra-Poisson Fano factors. In contrast, alert-state MT neurons had higher mean firing rates and largely sub-Poisson variability that is stimulus-dependent and cannot be explained by firing rate differences alone. The absence of such stimulus-induced variability tuning in the anesthetized state suggests different sources of variability between states. A simple model explains state-dependent shifts in the distribution of observed Fano factors via a suppression of the variance of gain fluctuations in the alert state. A population model with stimulus-induced variability tuning and behaviorally constrained information-limiting correlations explores the potential enhancement of stimulus discriminability by the cortical population in the alert state. Comment: 36 pages, 18 figures
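    The variance-to-mean ratio at the center of this abstract is straightforward to compute. The toy simulation below (an illustration, not the authors' analysis code; all rates and gain parameters are arbitrary assumptions) shows how a fluctuating multiplicative gain pushes otherwise-Poisson spiking toward the supra-Poisson Fano factors described for the anesthetized state.

```python
import numpy as np

rng = np.random.default_rng(0)

def fano_factor(spike_counts):
    """Variance-to-mean ratio of trial-wise spike counts."""
    return np.var(spike_counts, ddof=1) / np.mean(spike_counts)

# Pure Poisson spiking: variance equals mean, so the Fano factor is ~1.
poisson_counts = rng.poisson(lam=20.0, size=10_000)

# Doubly stochastic spiking: a trial-to-trial multiplicative gain
# (mean 1, variance 0.1) inflates variance beyond the mean, giving a
# supra-Poisson Fano factor as described for the anesthetized state.
gain = rng.gamma(shape=10.0, scale=0.1, size=10_000)
gain_counts = rng.poisson(lam=20.0 * gain)

print(fano_factor(poisson_counts))  # close to 1
print(fano_factor(gain_counts))     # well above 1
```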

    Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics

    The advent of neuromorphic electronics is increasingly revolutionizing the concept of computation. In the last decade, several studies have shown how materials, architectures, and neuromorphic devices can be leveraged to achieve brain-like computation with limited power consumption and high energy efficiency. Neuromorphic systems have been mainly conceived to support spiking neural networks that embed bioinspired plasticity rules, such as spike-timing-dependent plasticity, to potentially support both unsupervised and supervised learning. Despite substantial progress in the field, the information transfer capabilities of biological circuits have not yet been achieved. More importantly, demonstrations of the actual performance of neuromorphic systems in this context have never been presented. In this paper, we report similarities between biological, simulated, and artificially reconstructed microcircuits in terms of information transfer from a computational perspective. Specifically, we extensively analyzed the mutual information transfer at the synapse between mossy fibers and granule cells by measuring the relationship between pre- and post-synaptic variability. We extended this analysis to memristor synapses that embed rate-based learning rules, thus providing quantitative validation for neuromorphic hardware and demonstrating the reliability of brain-inspired applications.
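    Mutual information between pre- and postsynaptic activity, the quantity analyzed here, can be estimated from paired samples with a standard plug-in histogram estimator. The sketch below is a generic illustration of the measure, not the authors' pipeline; the Poisson "synapse" relating input to output counts is an assumed toy model.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
pre = rng.poisson(30.0, size=50_000)      # input (e.g. mossy-fiber-like) counts
post = rng.poisson(0.5 * pre + 1.0)       # noisy synaptic transfer (toy model)
shuffled = rng.permutation(post)          # shuffling destroys the dependence

print(mutual_information(pre, post))      # clearly positive
print(mutual_information(pre, shuffled))  # near zero (up to estimator bias)
```

Shuffling one variable is a common sanity check: it preserves both marginals while removing the statistical dependence, so the estimate should collapse toward zero.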

    Propagation of activity through the cortical hierarchy and perception are determined by neural variability

    Brains are composed of anatomically and functionally distinct regions performing specialized tasks, but regions do not operate in isolation. Orchestration of complex behaviors requires communication between brain regions, but how neural dynamics are organized to facilitate reliable transmission is not well understood. Here we studied this process directly by generating neural activity that propagates between brain regions and drives behavior, assessing how neural populations in sensory cortex cooperate to transmit information. We achieved this by imaging two densely interconnected regions—the primary and secondary somatosensory cortex (S1 and S2)—in mice while performing two-photon photostimulation of S1 neurons and assigning behavioral salience to the photostimulation. We found that the probability of perception is determined not only by the strength of the photostimulation but also by the variability of S1 neural activity. Therefore, maximizing the signal-to-noise ratio of the stimulus representation in cortex relative to the noise or variability is critical to facilitate activity propagation and perception.

    Neuromodulation influences synchronization and intrinsic read-out [version 2; referees: 2 approved, 1 approved with reservations, 1 not approved]

    Background: The roles of neuromodulation in a neural network, such as in a cortical microcolumn, are still incompletely understood. Neuromodulation influences neural processing by presynaptic and postsynaptic regulation of synaptic efficacy. Neuromodulation also affects ion channels and intrinsic excitability. Methods: Synaptic efficacy modulation is an effective way to rapidly alter network density and topology. We alter network topology and density to measure the effect on spike synchronization. We also operate with differently parameterized neuron models which alter the neuron's intrinsic excitability, i.e., its activation function. Results: We find that (a) fast synaptic efficacy modulation influences the amount of correlated spiking in a network. Also, (b) synchronization in a network influences the read-out of intrinsic properties. Highly synchronous input drives neurons, such that differences in intrinsic properties disappear, while asynchronous input lets intrinsic properties determine output behavior. Thus, altering network topology can alter the balance between intrinsically vs. synaptically driven network activity. Conclusion: We conclude that neuromodulation may allow a network to shift between a more synchronized transmission mode and a more asynchronous intrinsic read-out mode. This has significant implications for our understanding of the flexibility of cortical computations.
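    The link between shared drive and correlated spiking that underlies result (a) can be illustrated with a toy model (not the authors' network): two Poisson neurons whose spike-count correlation tracks the fraction of input they share. The rates and trial counts below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def correlated_spike_counts(n_trials, rate, shared_frac):
    """Two Poisson neurons driven by a mix of shared and private input."""
    shared = rng.poisson(shared_frac * rate, size=n_trials)
    a = shared + rng.poisson((1 - shared_frac) * rate, size=n_trials)
    b = shared + rng.poisson((1 - shared_frac) * rate, size=n_trials)
    return a, b

corr = {}
for frac in (0.0, 0.5, 0.9):
    a, b = correlated_spike_counts(20_000, 10.0, frac)
    corr[frac] = np.corrcoef(a, b)[0, 1]
    print(f"shared input fraction {frac}: spike-count correlation {corr[frac]:.2f}")
```

Because the shared component contributes equally to both neurons' count variance, the expected correlation equals the shared fraction itself, which makes this a convenient knob for "synaptic efficacy modulation altering correlated spiking" in simulation.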

    Information-based Analysis and Control of Recurrent Linear Networks and Recurrent Networks with Sigmoidal Nonlinearities

    Linear dynamical models have served as an analytically tractable approximation for a variety of natural and engineered systems. Recently, such models have been used to describe high-level diffusive interactions in the activation of complex networks, including those in the brain. In this regard, classical tools from control theory, including controllability analysis, have been used to assay the extent to which such networks might respond to their afferent inputs. However, for natural systems such as brain networks, it is not clear whether advantageous control properties necessarily correspond to useful functionality. That is, are systems that are highly controllable (according to certain metrics) also ones that are suited to computational goals such as representing, preserving and categorizing stimuli? This dissertation will introduce analysis methods that link the systems-theoretic properties of linear systems with informational measures that describe these functional characterizations. First, we assess the sensitivity of a linear system to input orientation and novelty by deriving a measure of how networks translate input orientation differences into readable state trajectories. Next, we explore the implications of this novelty-sensitivity for endpoint-based input discrimination, wherein stimuli are decoded in terms of their induced representation in the state space. We develop a theoretical framework for the exploration of how networks utilize excess input energy to enhance orientation sensitivity (and thus discrimination ability). Next, we conduct a theoretical study to reveal how the background or default state of a network with linear dynamics allows it to best promote discrimination over a continuum of stimuli. Specifically, we derive a measure, based on the classical notion of a Fisher discriminant, quantifying the extent to which the state of a network encodes information about its afferent inputs. This measure provides an information value quantifying the knowability of an input based on its projection onto the background state. We subsequently optimize this background state, and characterize both the optimal background and the inputs that give rise to it. Finally, we extend this information-based network analysis to include networks with nonlinear dynamics--specifically, ones involving sigmoidal saturating functions. We employ a quasilinear approximation technique, novel here in terms of its multidimensionality and specific application, to approximate the nonlinear dynamics by scaling a corresponding linear system and biasing by an offset term. A Fisher information-based metric is derived for the quasilinear system, with analytical and numerical results showing that Fisher information is better for the quasilinear (hence sigmoidal) system than for an unconstrained linear system. Interestingly, this relation reverses when the noise is placed outside the sigmoid in the model, supporting conclusions extant in the literature that the relative alignment of the state and noise covariance is predictive of Fisher information. We show that there exists a clear trade-off between informational advantage, as conferred by the presence of sigmoidal nonlinearities, and speed of dynamics.
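    The closing observation, that the alignment of the noise covariance with the signal direction is predictive of Fisher information, can be illustrated with the standard linear-Gaussian formula J = w^T Sigma^{-1} w. The network dimension, signal direction, and noise levels below are assumptions for illustration, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Endpoint readout of a linear network: r = w*s + noise, noise ~ N(0, Sigma).
# For this Gaussian linear model, the Fisher information about the scalar
# stimulus s is J = w^T Sigma^{-1} w, where w = dr/ds.
n = 5
w = rng.normal(size=n)                 # stimulus direction in state space
Sigma = 0.2 * np.eye(n)                # isotropic noise covariance

J_iso = w @ np.linalg.solve(Sigma, w)

# Extra noise aligned with the signal direction degrades the information;
# the same noise power placed orthogonally would leave J unchanged.
Sigma_aligned = 0.2 * np.eye(n) + 2.0 * np.outer(w, w) / (w @ w)
J_aligned = w @ np.linalg.solve(Sigma_aligned, w)

print(J_iso, J_aligned)                # J_aligned is strictly smaller
```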

    Robust information propagation through noisy neural circuits

    Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina's performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with "differential correlations", which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different.
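    The "differential correlations" mentioned here have a closed-form effect on the linear Fisher information I = f'^T Sigma^{-1} f': adding noise of strength eps along the tuning-derivative direction f' caps the information at 1/eps (by the Sherman-Morrison identity, I becomes I0 / (1 + eps*I0)). A minimal numerical check, with an assumed random tuning vector:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 50
fprime = rng.normal(size=n)            # tuning-curve derivatives f'(s)
Sigma0 = np.eye(n)                     # independent (private) noise

def linear_fisher_info(fp, Sigma):
    """Linear Fisher information I = f'^T Sigma^{-1} f'."""
    return fp @ np.linalg.solve(Sigma, fp)

I0 = linear_fisher_info(fprime, Sigma0)

# Differential correlations: noise along f' with strength eps.
eps = 0.1
Sigma_diff = Sigma0 + eps * np.outer(fprime, fprime)
I_diff = linear_fisher_info(fprime, Sigma_diff)

print(I0, I_diff)                      # I_diff = I0 / (1 + eps * I0)
print(1.0 / eps)                       # information can never exceed 1/eps
```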

    Testing the predictive coding account of temporal integration in the human visual system : a computational and behavioural study

    A major goal of vision science is to understand how the visual system maintains behaviourally relevant perceptions given the level of uncertainty in the signals it receives. One proposed solution is that the visual system applies predictive coding to its inputs based on the integration of prior knowledge and current stimulus features. However, support for some vital aspects of predictive coding in the temporal domain is lacking, and simpler accounts of temporal integration also exist. The aim of this thesis was to test two key attributes of predictive coding in time: a) does the visual system apply adaptive weighting to prediction errors, and b) can the visual system apply probabilistic information learnt from stimulus sequences when making predictions? In chapters 3 & 4, we tested predictive coding's ideas of how prediction errors are weighted under the theoretical guidance of a temporal integration model linked to predictive processing, the Kalman filter. Here, both experiments supported predictive coding. We showed that, consistent with the Kalman filter, visual estimates, and the way estimation errors were corrected, adapted to stimulus behaviour and viewing conditions. In chapter 5, we assessed the ability of the visual system to integrate conditional relationships present in sequences of stimuli when making predictions. To do this, we presented a stimulus sequence whose changes and omitted trials followed Markov transition probabilities that made some transitions more or less probable, and assessed reaction times and responses on omission trials. Reaction time data were consistent with predictive coding, in that more predictable changes elicited faster responses. Omission-trial data, however, were less clear. When faced with no stimulus, participants did not apply the conditional probabilities optimally in their decisions, instead adopting non-optimal decision strategies, inconsistent with predictive coding. 
    In summary, this thesis supports the predictive coding account of temporal integration but questions its application in all situations.
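    The Kalman filter invoked in chapters 3 & 4 adapts its weighting of prediction errors (the Kalman gain) to observation noise. The scalar sketch below uses illustrative parameters, not the thesis's model, to show the gain shrinking as viewing conditions get noisier.

```python
import numpy as np

rng = np.random.default_rng(5)

def kalman_track(observations, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter: estimate a drifting value from noisy samples.

    q: process-noise variance (how much the true value drifts per step)
    r: observation-noise variance (the 'viewing conditions')
    """
    x, p, gains, estimates = x0, p0, [], []
    for z in observations:
        p = p + q                   # predict: uncertainty grows with drift
        k = p / (p + r)             # Kalman gain: weight on the prediction error
        x = x + k * (z - x)         # correct the estimate by the weighted error
        p = (1 - k) * p             # update posterior uncertainty
        gains.append(k)
        estimates.append(x)
    return np.array(estimates), np.array(gains)

z = rng.normal(0.0, 2.0, size=200)          # noisy observations of a target
_, g_noisy = kalman_track(z, q=0.01, r=4.0)  # poor viewing conditions
_, g_clean = kalman_track(z, q=0.01, r=0.1)  # good viewing conditions
print(g_noisy[-1], g_clean[-1])              # noisier input -> smaller gain
```

This is exactly the "adaptive weighting of prediction errors" being tested: the same error z - x is trusted less when observations are unreliable.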