
    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, one in which a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By helping to optimize statistical competition across neurons, homeostasis is crucial to providing a more efficient solution to the emergence of independent components.
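
    A minimal sketch of the idea, assuming a matching-pursuit style encoder (names, constants, and the exact gain rule below are illustrative assumptions, not the paper's algorithm): each neuron carries a homeostatic gain that is adapted so that all neurons end up being selected with roughly equal probability, which keeps the competition "fair" during Hebbian dictionary learning.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_neurons, n_active = 64, 128, 5     # 8x8 patches, overcomplete code
        eta_w, eta_h = 0.05, 0.01                      # learning rates (assumed)
        p_target = n_active / n_neurons                # desired activation probability

        W = rng.standard_normal((n_neurons, n_pixels))
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit-norm dictionary atoms
        gain = np.ones(n_neurons)                      # homeostatic gains
        p_hat = np.full(n_neurons, p_target)           # running activation probabilities

        def encode(patch):
            """Greedy matching pursuit; the gains bias which neuron wins each round."""
            residual, coeffs = patch.copy(), np.zeros(n_neurons)
            for _ in range(n_active):
                k = np.argmax(np.abs(gain * (W @ residual)))
                a = W[k] @ residual
                coeffs[k] += a
                residual -= a * W[k]
            return coeffs, residual

        for _ in range(10000):
            patch = rng.standard_normal(n_pixels)      # stand-in for a natural image patch
            coeffs, residual = encode(patch)
            active = coeffs != 0
            # Hebbian step on the reconstruction error for the atoms that were active
            W[active] += eta_w * np.outer(coeffs[active], residual)
            W[active] /= np.linalg.norm(W[active], axis=1, keepdims=True)
            # Cooperative homeostasis: boost rarely selected neurons, damp over-active ones
            p_hat = (1 - eta_h) * p_hat + eta_h * active
            gain = np.exp((p_target - p_hat) / p_target)

    Holding the gains at 1 reduces this to plain matching-pursuit dictionary learning, which makes the contribution of the homeostatic term easy to isolate in simulations.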

    Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception

    Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movements that follow the motion of a visual scene. Here, we describe a framework to generate random texture movies with controlled information content, i.e., Motion Clouds. These stimuli are defined using a generative model that is based on controlled experimental parametrization. We show that Motion Clouds correspond to a dense mixture of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically and propose an open-source Python-based implementation. Examples of the use of this framework are shown. We also propose extensions to other modalities such as color vision, touch, and audition.
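
    The following is a self-contained numpy sketch of this kind of stimulus under assumed parameters (it is not the paper's reference implementation): white noise is shaped in 3-D Fourier space by a smooth envelope concentrated around a preferred spatial frequency and around the plane corresponding to a mean translation speed, then transformed back into a movie.

        import numpy as np

        N_X, N_Y, N_T = 64, 64, 64      # space x space x time (frames)
        sf_0, B_sf = 0.12, 0.5          # preferred spatial frequency (cycles/pixel), log-bandwidth
        V_X, B_V = 1.0, 0.3             # mean horizontal speed (pixels/frame), speed bandwidth

        fx, fy, ft = np.meshgrid(np.fft.fftfreq(N_X), np.fft.fftfreq(N_Y),
                                 np.fft.fftfreq(N_T), indexing='ij')
        f_r = np.sqrt(fx ** 2 + fy ** 2) + 1e-9          # radial spatial frequency

        # Envelope: log-Gaussian in spatial frequency, Gaussian around the speed plane ft = -V_X * fx
        env_sf = np.exp(-0.5 * (np.log(f_r / sf_0) / B_sf) ** 2)
        env_v = np.exp(-0.5 * ((ft + V_X * fx) / (B_V * f_r)) ** 2)
        envelope = env_sf * env_v
        envelope[0, 0, 0] = 0.0                          # remove the DC component

        noise = np.fft.fftn(np.random.randn(N_X, N_Y, N_T))
        movie = np.real(np.fft.ifftn(noise * envelope))  # (N_X, N_Y, N_T) random texture movie
        movie /= np.abs(movie).max()                     # normalize contrast to [-1, 1]

    Because the envelope is written directly in terms of interpretable parameters (central frequency, bandwidth, mean speed), each dimension can be varied independently, which is the point of the controlled parametrization described above.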

    Predictive Coding as a Model of Biased Competition in Visual Attention

    Attention acts, through cortical feedback pathways, to enhance the response of cells encoding expected or predicted information. Such observations are inconsistent with the predictive coding theory of cortical function, which proposes that feedback acts to suppress information predicted by higher-level cortical regions. Despite this discrepancy, this article demonstrates that the predictive coding model can be used to simulate a number of the effects of attention. This is achieved via a simple mathematical rearrangement of the predictive coding model, which allows it to be interpreted as a form of biased competition model. Nonlinear extensions to the model are proposed that enable it to explain a wider range of data.
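
    As a loose illustration of that rearrangement, the sketch below runs a divisive predictive-coding loop in which error units divide the input by the current top-down reconstruction, prediction units are multiplicatively updated by the error they explain, and a top-down bias on the prediction units stands in for attention. The constants, the weight normalization, and the multiplicative form of the bias are assumptions, not the article's exact equations.

        import numpy as np

        rng = np.random.default_rng(1)
        n_inputs, n_preds = 16, 8
        W = np.abs(rng.standard_normal((n_preds, n_inputs)))   # feedforward weights
        W /= W.sum(axis=1, keepdims=True)
        V = W.T / W.T.max(axis=0, keepdims=True)                # feedback (reconstruction) weights
        eps1, eps2 = 1e-6, 1e-3                                 # small constants to avoid division by zero

        def run(x, attention_bias=None, n_iter=50):
            """Iterate error/prediction updates; attention_bias models a top-down gain."""
            y = np.zeros(n_preds)
            for _ in range(n_iter):
                e = x / (eps2 + V @ y)              # divisive prediction error
                y = (eps1 + y) * (W @ e)            # multiplicative update of prediction units
                if attention_bias is not None:
                    y *= 1.0 + attention_bias       # biased competition: boost attended units
            return y, e

        x = np.abs(rng.standard_normal(n_inputs))
        bias = np.zeros(n_preds)
        bias[0] = 0.2                               # attend to prediction unit 0
        y_attended, _ = run(x, attention_bias=bias)
        y_neutral, _ = run(x)                       # same input without the attentional bias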

    State Dependence of Stimulus-Induced Variability Tuning in Macaque MT

    Behavioral states marked by varying levels of arousal and attention modulate some properties of cortical responses (e.g., average firing rates or pairwise correlations), yet it is not fully understood what drives these response changes and how they might affect downstream stimulus decoding. Here we show that changes in state modulate the tuning of response variance-to-mean ratios (Fano factors) in a fashion that is predicted neither by a Poisson spiking model nor by changes in the mean firing rate, with a substantial effect on stimulus discriminability. We recorded motion-sensitive neurons in middle temporal cortex (MT) in two states: alert fixation and light, opioid anesthesia. Anesthesia tended to lower average spike counts without decreasing trial-to-trial variability compared to the alert state. Under anesthesia, within-trial fluctuations in excitability were correlated over longer time scales than in the alert state, creating supra-Poisson Fano factors. In contrast, alert-state MT neurons have higher mean firing rates and largely sub-Poisson variability that is stimulus-dependent and cannot be explained by firing rate differences alone. The absence of such stimulus-induced variability tuning in the anesthetized state suggests different sources of variability between states. A simple model explains state-dependent shifts in the distribution of observed Fano factors via a suppression in the variance of gain fluctuations in the alert state. A population model with stimulus-induced variability tuning and behaviorally constrained information-limiting correlations explores the potential enhancement in stimulus discriminability by the cortical population in the alert state.
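
    A small simulation makes the gain-fluctuation account concrete (illustrative only; the rates and gain widths below are assumed): spike counts are Poisson conditioned on a trial-specific gain, so their Fano factor exceeds 1 by an amount that grows with the gain variance, and shrinking the gain fluctuations, as the abstract attributes to the alert state, pulls the Fano factor back toward the Poisson value of 1.

        import numpy as np

        rng = np.random.default_rng(2)

        def fano_with_gain_noise(mean_rate, gain_std, n_trials=2000, duration=1.0):
            """Fano factor of Poisson counts whose rate is scaled by a per-trial gain."""
            gains = np.maximum(rng.normal(1.0, gain_std, n_trials), 0.0)
            counts = rng.poisson(gains * mean_rate * duration)
            return counts.var() / counts.mean()

        for gain_std in (0.0, 0.2, 0.5):            # 0.0 reproduces a pure Poisson model
            ff = fano_with_gain_noise(mean_rate=20.0, gain_std=gain_std)
            print(f"gain_std = {gain_std:.1f}  ->  Fano factor ~ {ff:.2f}")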

    Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy

    Predictive coding provides a computational paradigm for modeling perceptual processing as the construction of representations accounting for causes of sensory inputs. Here, we developed a scalable, deep network architecture for predictive coding that is trained using a gated Hebbian learning rule and mimics the feedforward and feedback connectivity of the cortex. After training on image datasets, the models formed latent representations in higher areas that allowed reconstruction of the original images. We analyzed low- and high-level properties such as orientation selectivity, object selectivity and sparseness of neuronal populations in the model. As reported experimentally, image selectivity increased systematically across ascending areas in the model hierarchy. Depending on the strength of regularization factors, sparseness also increased from lower to higher areas. The results suggest a rationale as to why experimental results on sparseness across the cortical hierarchy have been inconsistent. Finally, representations for different object classes became more distinguishable from lower to higher areas. Thus, deep neural networks trained using a gated Hebbian formulation of predictive coding can reproduce several properties associated with neuronal responses along the visual cortical hierarchy.
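
    A minimal two-area sketch of this kind of network (layer sizes, learning rates, and the rectifying nonlinearity are assumptions, not the paper's architecture): latent activities settle by descending the local prediction errors, and the weights then receive a Hebbian update gated by the residual error at each level.

        import numpy as np

        rng = np.random.default_rng(3)
        sizes = [256, 64, 16]                       # input, area 1, area 2
        W = [0.1 * rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(2)]
        eta_r, eta_w, n_settle = 0.1, 0.005, 30

        def relu(x):
            return np.maximum(x, 0.0)

        def step(x):
            """Settle latent activities for one input, then apply error-gated Hebbian updates."""
            r = [x] + [np.zeros(s) for s in sizes[1:]]
            for _ in range(n_settle):
                for l in (1, 2):
                    pred = W[l - 1].T @ r[l]        # top-down prediction of the area below
                    err = r[l - 1] - pred           # prediction error at that level
                    r[l] = relu(r[l] + eta_r * (W[l - 1] @ err))
            for l in (1, 2):
                err = r[l - 1] - W[l - 1].T @ r[l]
                W[l - 1] += eta_w * np.outer(r[l], err)   # Hebbian update gated by the error
            return r

        for _ in range(100):
            step(rng.random(sizes[0]))              # stand-in for a flattened image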

    Towards building a more complex view of the lateral geniculate nucleus: Recent advances in understanding its role

    The lateral geniculate nucleus (LGN) has often been treated in the past as a linear filter that adds little to retinal processing of visual inputs. Here we review anatomical, neurophysiological, brain imaging, and modeling studies that have in recent years built up a much more complex view of the LGN. These include effects related to nonlinear dendritic processing, cortical feedback, synchrony and oscillations across LGN populations, as well as the involvement of the LGN in higher-level cognitive processing. Although recent studies have provided valuable insights into early visual processing, including the role of the LGN, a unified model of LGN responses to real-world objects has not yet been developed. In the light of recent data, we suggest that the role of the LGN deserves more careful consideration in developing models of high-level visual processing.

    A Computational Study Of The Role Of Spatial Receptive Field Structure In Processing Natural And Non-Natural Scenes

    The center-surround receptive field structure, ubiquitous in the visual system, is hypothesized to be evolutionarily advantageous in image processing tasks. We address the potential functional benefits and shortcomings of spatial localization and center-surround antagonism in the context of an integrate-and-fire neuronal network model with image-based forcing. Utilizing the sparsity of natural scenes, we derive a compressive-sensing framework for reconstructing the input image from evoked neuronal firing rates. We investigate how the accuracy of input encoding depends on the receptive field architecture, and demonstrate that spatial localization in visual stimulus sampling facilitates marked improvements in natural scene processing beyond uniformly random excitatory connectivity. However, for specific classes of images, we show that the spatial localization inherent in physiological receptive fields, combined with information loss through nonlinear neuronal network dynamics, may underlie common optical illusions, giving a novel explanation for their manifestation. In the context of signal processing, we expect this work to suggest new sampling protocols useful for extending conventional compressive-sensing theory.
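
    As a loose illustration of the reconstruction step, the sketch below samples a sparse 1-D signal through localized difference-of-Gaussians receptive fields and recovers it with an L1-penalized (ISTA) solver; the sizes, the synthetic sparse image, and the direct linear rate model are simplifying assumptions rather than the study's network.

        import numpy as np

        rng = np.random.default_rng(4)
        n_pix, n_neurons, n_nonzero = 100, 40, 8     # 1-D "image" for brevity

        def dog_rf(center, sigma_c=1.5, sigma_s=4.0):
            """Center-surround (difference-of-Gaussians) receptive field profile."""
            x = np.arange(n_pix)
            g = lambda s: np.exp(-0.5 * ((x - center) / s) ** 2) / s
            return g(sigma_c) - g(sigma_s)

        A = np.stack([dog_rf(c) for c in np.linspace(0, n_pix - 1, n_neurons)])

        image = np.zeros(n_pix)
        image[rng.choice(n_pix, n_nonzero, replace=False)] = rng.standard_normal(n_nonzero)
        rates = A @ image                            # stand-in for evoked firing rates

        # ISTA: minimize 0.5 * ||A z - rates||^2 + lam * ||z||_1
        lam, L = 0.02, np.linalg.norm(A, 2) ** 2     # step size from the Lipschitz constant
        z = np.zeros(n_pix)
        for _ in range(500):
            u = z - (A.T @ (A @ z - rates)) / L      # gradient step on the quadratic term
            z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft threshold

        print("relative reconstruction error:", np.linalg.norm(z - image) / np.linalg.norm(image))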