
    Adaptive integration in the visual cortex by depressing recurrent cortical circuits

    Neurons in the visual cortex receive a large amount of input from recurrent connections, yet the functional role of these connections remains unclear. Here we explore networks with strong recurrence in a computational model and show that short-term depression of the synapses in the recurrent loops implements an adaptive filter. This allows the visual system to respond reliably to deteriorated stimuli yet quickly to high-quality stimuli. For low-contrast stimuli, the model predicts long response latencies, whereas latencies are short for high-contrast stimuli. This is consistent with physiological data showing that in higher visual areas, latencies can increase by more than 100 ms at low contrast compared to high contrast. Moreover, when presented with briefly flashed stimuli, the model predicts stereotypical responses that outlast the stimulus, again consistent with physiological findings. The adaptive properties of the model suggest that the abundant recurrent connections found in visual cortex serve to adapt the network's time constant in accordance with the stimulus and to normalize neuronal signals, such that processing is as fast as possible while maintaining reliability.
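    A minimal rate-model sketch of this idea (my own illustration, not the authors' code; the parameters tau_r, tau_rec, U, and w_rec and their values are assumptions) is shown below: a single strongly recurrent population whose feedback synapses depress, so the effective loop gain, and with it the network's integration time, adapts to the stimulus strength.

```python
import numpy as np

# Toy model: one recurrent population with short-term depression on the
# feedback synapses. All parameter values are illustrative assumptions.
dt = 1e-3          # simulation step (s)
tau_r = 10e-3      # firing-rate time constant (s)
tau_rec = 0.5      # recovery time constant of the depressing synapses (s)
U = 0.5            # fraction of synaptic resources used per unit of activity
w_rec = 0.95       # recurrent gain (close to 1 -> strong recurrence)

def simulate(contrast, T=1.0):
    """Rate trace for a step input whose amplitude stands in for stimulus contrast."""
    n = int(T / dt)
    r, s = 0.0, 1.0                      # firing rate and available synaptic resources
    trace = np.empty(n)
    for i in range(n):
        drive = w_rec * s * r + contrast             # depressed recurrent feedback + feedforward drive
        r += dt / tau_r * (-r + max(drive, 0.0))     # rectified rate dynamics
        s += dt * ((1.0 - s) / tau_rec - U * s * r)  # short-term depression of the loop
        trace[i] = r
    return trace

# High contrast depresses the loop quickly, shortening the effective time
# constant; low contrast keeps the loop gain high, so the network integrates
# longer and the response rises more slowly (longer latency).
low, high = simulate(0.05), simulate(1.0)
```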

    How does the Cerebral Cortex Work? Learning, Attention, and Grouping by the Laminar Circuits of Visual Cortex

    The organization of neocortex into layers is one of its most salient anatomical features. These layers include circuits that form functional columns in cortical maps. A major unsolved problem concerns how bottom-up, top-down, and horizontal interactions are organized within cortical layers to generate adaptive behaviors. This article models how these interactions help visual cortex to realize: (1) the binding process whereby cortex groups distributed data into coherent object representations; (2) the attentional process whereby cortex selectively processes important events; and (3) the developmental and learning processes whereby cortex shapes its circuits to match environmental constraints. New computational ideas about feedback systems suggest how neocortex develops and learns in a stable way, and why top-down attention requires converging bottom-up inputs to fully activate cortical cells, whereas perceptual groupings do not. Defense Advanced Research Projects Agency; National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
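    The claim that top-down attention is modulatory while perceptual groupings can be driving can be illustrated with a toy gain function (a deliberate simplification of my own, not the article's laminar equations; the function name, gain, and threshold are hypothetical):

```python
# Toy illustration: multiplicative (modulatory) top-down attention cannot fire a
# cell by itself, but it enhances responses to converging bottom-up input.
def cortical_cell(bottom_up, top_down, gain=1.5, threshold=0.5):
    drive = bottom_up * (1.0 + gain * top_down)   # top-down only scales existing input
    return max(drive - threshold, 0.0)            # rectified output above threshold

print(cortical_cell(bottom_up=0.0, top_down=1.0))  # 0.0 -> attention alone stays subthreshold
print(cortical_cell(bottom_up=0.6, top_down=0.0))  # weak response to bottom-up input alone
print(cortical_cell(bottom_up=0.6, top_down=1.0))  # enhanced response when both converge
```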

    Integrated Mechanisms of Anticipation and Rate-of-Change Computations in Cortical Circuits

    Local neocortical circuits are characterized by stereotypical physiological and structural features that subserve generic computational operations. These basic computations of the cortical microcircuit emerge through the interplay of neuronal connectivity, cellular intrinsic properties, and synaptic plasticity dynamics. How these interacting mechanisms generate specific computational operations in the cortical circuit remains largely unknown. Here, we identify the neurophysiological basis of both rate-of-change and anticipation computations on synaptic inputs in a cortical circuit. Through biophysically realistic computer simulations and neuronal recordings, we show that the rate-of-change computation operates robustly in cortical networks through the combination of two ubiquitous brain mechanisms: short-term synaptic depression and spike-frequency adaptation. We then show how this rate-of-change circuit can be embedded in a convergently connected network to temporally anticipate incoming synaptic inputs, in quantitative agreement with experimental findings on anticipatory responses to moving stimuli in the primary visual cortex. Given the robustness of the mechanism and the widespread nature of the physiological machinery involved, we suggest that rate-of-change computation and temporal anticipation are principal, hard-wired functions of neural information processing in the cortical microcircuit.
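    A rough sketch of how these two mechanisms can combine into a rate-of-change signal (my own toy construction with assumed parameter values, not the paper's biophysical model) is given below: a depressing synapse responds transiently to changes in its input rate, and a slow subtractive adaptation variable removes the remaining steady-state component.

```python
import numpy as np

# Assumed parameters for a depressing synapse plus a slow subtractive
# adaptation term; values are illustrative only.
dt = 1e-3
tau_rec, U = 0.3, 0.5       # short-term depression: recovery time and utilization
tau_a, g_a = 0.2, 5.0       # adaptation time constant and gain (g_a*tau_a = 1 cancels the steady state)

t = np.arange(0.0, 2.0, dt)
rate_in = 10.0 + 40.0 * np.clip(t - 0.5, 0.0, 1.0)   # input rate ramps from 10 to 50 Hz between 0.5 s and 1.5 s

x, a = 1.0, 0.0             # available synaptic resources and adaptation variable
out = np.empty_like(t)
for i, r in enumerate(rate_in):
    syn = U * x * r                                  # transmitted synaptic signal
    x += dt * ((1.0 - x) / tau_rec - U * x * r)      # depression dynamics
    a += dt * (-a / tau_a + g_a * syn)               # adaptation tracks the recent output
    out[i] = max(syn - a, 0.0)                       # adapted, rectified response

# 'out' is large while the input rate is rising and decays back toward zero once
# the rate stops changing, i.e. it behaves like a rate-of-change detector.
```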

    Handwritten digit recognition by bio-inspired hierarchical networks

    The human brain processes information and shows learning and prediction abilities, but the underlying neuronal mechanisms remain largely unknown. Recent studies show that neuronal networks are capable of both generalization and association of sensory inputs. In this paper, following a set of neurophysiological findings, we propose a learning framework with strong biological plausibility that mimics prominent functions of cortical circuitry. We developed the Inductive Conceptual Network (ICN), a hierarchical bio-inspired network that learns invariant patterns through Variable-order Markov Models implemented in its nodes. The outputs of the top-most node of the ICN hierarchy, representing the highest level of input generalization, allow for automatic classification of inputs. We found that the ICN clustered MNIST images with an error of 5.73% and USPS images with an error of 12.56%.
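    One ICN node can be approximated by a small variable-order Markov model over discrete symbols (a hypothetical simplification of my own; the class name, back-off scheme, and maximum order are assumptions, and real ICN nodes operate on image-derived patterns rather than characters):

```python
from collections import defaultdict

class VMMNode:
    """Toy variable-order Markov model: count symbols after contexts of length 0..max_order."""
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))  # context tuple -> next symbol -> count

    def train(self, sequence):
        for i, sym in enumerate(sequence):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                context = tuple(sequence[i - k:i])
                self.counts[context][sym] += 1

    def predict(self, history):
        # Back off from the longest matching context down to the empty context.
        for k in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - k:])
            if context in self.counts:
                table = self.counts[context]
                return max(table, key=table.get)
        return None

node = VMMNode(max_order=2)
node.train(list("abababac"))
print(node.predict(list("ab")))   # 'a': the most frequent continuation after the context "ab"
```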

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power through the increase of cores within a digital processor, neuromorphic engineers and scientists can complement this need by building processor architectures where memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems which implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. Comment: Submitted to Proceedings of the IEEE; review of recently proposed neuromorphic computing platforms and systems.

    Neural field model of binocular rivalry waves

    We present a neural field model of binocular rivalry waves in visual cortex. For each eye we consider a one-dimensional network of neurons that respond maximally to a particular feature of the corresponding image, such as the orientation of a grating stimulus. Recurrent connections within each one-dimensional network are assumed to be excitatory, whereas connections between the two networks are inhibitory (cross-inhibition). Slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We derive an analytical expression for the speed of a binocular rivalry wave as a function of various neurophysiological parameters, and show how properties of the wave are consistent with the wave-like propagation of perceptual dominance observed in recent psychophysical experiments. In addition to providing an analytical framework for studying binocular rivalry waves, we show how neural field methods provide insights into the mechanisms underlying the generation of the waves. In particular, we highlight the important role of slow adaptation in providing a “symmetry breaking mechanism” that allows waves to propagate.
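    The kind of model described here can be explored numerically with a short script (a rough sketch under my own assumptions: Gaussian coupling kernels, a steep sigmoid firing-rate function, and arbitrary parameter values, none of which are the paper's fitted ones):

```python
import numpy as np

# Two 1D neural fields (left/right eye), recurrent excitation within each field,
# cross-inhibition between them, and synaptic depression on the connections.
N, L = 256, 10.0
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
dt, T = 1e-3, 2.0

tau = 10e-3                  # activity time constant (s)
tau_q = 0.5                  # depression recovery time (s)
beta = 2.0                   # depression rate
w_e, w_i = 1.2, 1.0          # excitatory and cross-inhibitory strengths
sigma = 0.5                  # width of the Gaussian coupling kernel

kernel = np.exp(-0.5 * ((x - L / 2) / sigma) ** 2)
kernel /= kernel.sum() * dx                          # normalize so convolution preserves scale

def conv(u):
    """Circular convolution with the Gaussian kernel via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(np.fft.ifftshift(kernel)))) * dx

def f(u):
    return 1.0 / (1.0 + np.exp(-20.0 * (u - 0.2)))   # sigmoidal firing-rate function

u_l = np.where(x < L / 4, 1.0, 0.0)    # left-eye network initially dominant on the left
u_r = 1.0 - u_l                        # right-eye network dominant elsewhere
q_l, q_r = np.ones(N), np.ones(N)      # available synaptic resources (depression variables)

for _ in range(int(T / dt)):
    drive_l = w_e * conv(q_l * f(u_l)) - w_i * conv(q_r * f(u_r))
    drive_r = w_e * conv(q_r * f(u_r)) - w_i * conv(q_l * f(u_l))
    u_l += dt / tau * (-u_l + drive_l)
    u_r += dt / tau * (-u_r + drive_r)
    q_l += dt * ((1.0 - q_l) / tau_q - beta * q_l * f(u_l))
    q_r += dt * ((1.0 - q_r) / tau_q - beta * q_r * f(u_r))

# Tracking how the boundary between the u_l- and u_r-dominant regions moves over
# time gives a crude numerical analogue of a rivalry wave; the paper's analytical
# wave speed is expressed in terms of parameters like these.
```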

    The Complementary Brain: From Brain Dynamics To Conscious Experiences

    How do our brains so effectively achieve adaptive behavior in a changing world? Evidence is reviewed that brains are organized into parallel processing streams with complementary properties. Hierarchical interactions within each stream and parallel interactions between streams create coherent behavioral representations that overcome the complementary deficiencies of each stream and support unitary conscious experiences. This perspective suggests how brain design reflects the organization of the physical world with which brains interact, and offers an alternative to the computer metaphor, which holds that brains are organized into independent modules. Examples from perception, learning, cognition, and action are described, and theoretical concepts and mechanisms by which complementarity is accomplished are summarized. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)

    Neural Dynamics Underlying Impaired Autonomic and Conditioned Responses Following Amygdala and Orbitofrontal Lesions

    A neural model is presented that explains how outcome-specific learning modulates affect, decision-making, and Pavlovian conditioned approach responses. The model addresses how brain regions responsible for affective learning and habit learning interact, and answers a central question: What are the relative contributions of the amygdala and orbitofrontal cortex to emotion and behavior? In the model, the amygdala calculates outcome value while the orbitofrontal cortex influences attention and conditioned responding by assigning value information to stimuli. Model simulations replicate autonomic, electrophysiological, and behavioral data associated with three tasks commonly used to assay these phenomena: food consumption, Pavlovian conditioning, and visual discrimination. Interactions of the basal ganglia and amygdala with sensory and orbitofrontal cortices enable the model to replicate the complex pattern of spared and impaired behavioral and emotional capacities seen following lesions of the amygdala and orbitofrontal cortex. National Science Foundation (SBE-0354378; IIS-97-20333); Office of Naval Research (N00014-01-1-0624); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952)
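    The division of labor the model proposes can be caricatured in a few lines (an intentionally crude sketch with hypothetical stimuli, outcomes, and values; it omits the basal ganglia, learning dynamics, and lesion manipulations that the actual model simulates):

```python
# Amygdala-like stage: value of each outcome; OFC-like stage: value assigned to a
# stimulus via the outcome it predicts, which then biases the conditioned response.
outcome_value = {"food": 1.0, "quinine": -0.5}            # hypothetical outcome values
stimulus_outcome = {"light": "food", "tone": "quinine"}   # learned stimulus-outcome associations

def ofc_stimulus_value(stimulus):
    """Value attached to a stimulus through its predicted outcome."""
    return outcome_value.get(stimulus_outcome.get(stimulus), 0.0)

def conditioned_response(stimulus):
    v = ofc_stimulus_value(stimulus)
    return "approach" if v > 0 else ("avoid" if v < 0 else "ignore")

print(conditioned_response("light"))   # approach: light predicts food
print(conditioned_response("tone"))    # avoid: tone predicts quinine
```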