
    Encoding of Intention and Spatial Location in the Posterior Parietal Cortex

    The posterior parietal cortex is functionally situated between sensory cortex and motor cortex. The responses of cells in this area are difficult to classify as strictly sensory or motor, since many have both sensory- and movement-related activities, as well as activities related to higher cognitive functions such as attention and intention. In this review we will provide evidence that the posterior parietal cortex is an interface between sensory and motor structures and performs various functions important for sensory-motor integration. The review will focus on two specific sensory-motor tasks: the formation of motor plans and the abstract representation of space. Cells in the lateral intraparietal area, a subdivision of the parietal cortex, have activity related to eye movements the animal intends to make. This finding represents the lowest stage in the sensory-motor cortical pathway in which activity related to intention has been found, and may represent the cortical stage at which sensory signals go "over the hump" to become intentions and plans to make movements. The second part of the review will discuss the representation of space in the posterior parietal cortex. Encoding spatial locations is an essential step in sensory-motor transformations. Since movements are made to locations in space, these locations should be coded invariantly with respect to eye and head position and to the sensory modality signaling the target for a movement. Data will be reviewed demonstrating that there exists in the posterior parietal cortex an abstract representation of space that is constructed from the integration of visual, auditory, vestibular, eye position, and proprioceptive head position signals. This representation is in the form of a population code, and the above signals are not combined in a haphazard fashion. Rather, they are brought together using a specific operation to form "planar gain fields" that are the common foundation of the population code for the neural construct of space.
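    As a concrete illustration of the gain-field idea, the sketch below models a single parietal-like unit whose Gaussian retinal tuning is multiplicatively scaled by a planar function of eye position; a population of such units implicitly encodes the head-centered target location. The Gaussian and planar forms follow the abstract, but every parameter value and name here is an illustrative assumption, not a quantity from the review.

```python
import numpy as np

def gain_field_response(stim_retinal, eye_pos, rf_center, rf_width,
                        gain_slope, gain_offset):
    """Firing rate = Gaussian retinal tuning x planar eye-position gain."""
    retinal = np.exp(-np.sum((stim_retinal - rf_center) ** 2) / (2 * rf_width ** 2))
    planar_gain = max(0.0, gain_slope @ eye_pos + gain_offset)  # planar in eye position
    return retinal * planar_gain

rng = np.random.default_rng(0)
stim_retinal = np.array([5.0, -2.0])   # target in retinal coordinates (deg)
eye_pos = np.array([10.0, 0.0])        # eye-in-head position (deg)

# A population with scattered receptive-field centers and gain planes
# implicitly encodes the head-centered location stim_retinal + eye_pos.
rates = [gain_field_response(stim_retinal, eye_pos,
                             rf_center=rng.uniform(-20, 20, 2), rf_width=8.0,
                             gain_slope=rng.uniform(-0.05, 0.05, 2),
                             gain_offset=1.0)
         for _ in range(100)]
```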

    A Neural Model of How Horizontal and Interlaminar Connections of Visual Cortex Develop into Adult Circuits that Carry Out Perceptual Grouping and Learning

    A neural model suggests how horizontal and interlaminar connections in visual cortical areas V1 and V2 develop within a laminar cortical architecture and give rise to adult visual percepts. The model suggests how mechanisms that control cortical development in the infant lead to properties of adult cortical anatomy, neurophysiology, and visual perception. The model clarifies how excitatory and inhibitory connections can develop stably by maintaining a balance between excitation and inhibition. The growth of long-range excitatory horizontal connections between layer 2/3 pyramidal cells is balanced against that of short-range disynaptic interneuronal connections. The growth of excitatory on-center connections from layer 6-to-4 is balanced against that of inhibitory interneuronal off-surround connections. These balanced connections interact via intracortical and intercortical feedback to realize properties of perceptual grouping, attention, and perceptual learning in the adult, and help to explain the observed variability in the number and temporal distribution of spikes emitted by cortical neurons. The model replicates cortical point spread functions and psychophysical data on the strength of real and illusory contours. The on-center off-surround layer 6-to-4 circuit enables top-down attentional signals from area V2 to modulate, or attentionally prime, layer 4 cells in area V1 without fully activating them. This modulatory circuit also enables adult perceptual learning within cortical areas V1 and V2 to proceed in a stable way.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
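    The balance described above can be sketched with the standard shunting (membrane) equation, in which excitation and inhibition gate a cell divisively rather than adding linearly. The toy on-center off-surround circuit below shows why balanced excitation and inhibition let diffuse top-down input prime cells without fully activating them; kernel shapes and all constants are illustrative assumptions, not the model's actual layer 6-to-4 implementation.

```python
import numpy as np

def shunting_step(x, excite, inhibit, dt=0.01, A=1.0, B=1.0, C=0.25):
    """One Euler step of dx/dt = -A*x + (B - x)*excite - (x + C)*inhibit."""
    return x + dt * (-A * x + (B - x) * excite - (x + C) * inhibit)

n = 50
idx = np.arange(n)
surround = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 5.0) ** 2)
surround[idx, idx] = 0.0                         # broad off-surround, no self term
surround /= surround.sum(axis=1, keepdims=True)  # normalize total inhibition

top_down = 0.2 * np.ones(n)                      # weak, diffuse attentional prime
bottom_up = np.zeros(n); bottom_up[25] = 1.0     # focal bottom-up drive

x = np.zeros(n)
drive = bottom_up + top_down
for _ in range(2000):
    x = shunting_step(x, drive, surround @ drive)  # on-center is one-to-one here

# Because excitation is balanced against inhibition, the diffuse top-down
# prime alone raises x only slightly (subthreshold modulation), while the
# focal bottom-up input at position 25 activates that cell strongly.
```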

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data on visually guided steering, obstacle avoidance, and route selection.
    Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
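    One common way to write such attractor-repeller heading dynamics is sketched below: the goal pulls the current heading toward its direction, while each obstacle pushes heading away with an influence that decays over angular offset and distance. This is a generic formulation of the attractor/repeller idea, not the paper's fitted model; all gains and decay constants are illustrative assumptions.

```python
import numpy as np

def heading_rate(heading, goal_dir, goal_dist, obstacles,
                 k_goal=3.0, k_obs=6.0, c1=0.4, c2=0.4, c3=1.6):
    """Angles in radians; obstacles is a list of (direction, distance) pairs."""
    # Goal term: attraction toward the goal direction, stronger when near.
    dphi = -k_goal * (heading - goal_dir) * (np.exp(-c1 * goal_dist) + 0.3)
    # Obstacle terms: repulsion decaying with angular offset and distance.
    for obs_dir, obs_dist in obstacles:
        err = heading - obs_dir
        dphi += k_obs * err * np.exp(-c3 * abs(err)) * np.exp(-c2 * obs_dist)
    return dphi

# Steer toward a goal at 0.3 rad while passing an obstacle at 0.1 rad.
heading, dt = 0.0, 0.05
for _ in range(200):
    heading += dt * heading_rate(heading, goal_dir=0.3, goal_dist=8.0,
                                 obstacles=[(0.1, 3.0)])
```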

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative settings, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
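    The principle itself is compact: the weight update is Hebbian in the input but passes the output through a nonlinearity, dw ∝ f(w·x)x, with a norm constraint keeping the weights bounded. In the minimal sketch below, random Gaussian data stands in for whitened image patches, and the cubic f is one illustrative choice of nonlinearity among the many the paper argues are interchangeable.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 64))     # stand-in for whitened 8x8 patches

def f(u):
    return u ** 3                         # one illustrative odd nonlinearity

w = rng.standard_normal(64)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X:
    u = w @ x                             # neuron output (linear stage)
    w += eta * f(u) * x                   # nonlinear Hebbian update
    w /= np.linalg.norm(w)                # normalization keeps w bounded

# On whitened natural-image patches (rather than this Gaussian stand-in),
# such updates converge to localized, oriented receptive fields.
```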

    Modeling Reverse-Phi Motion-Selective Neurons in Cortex: Double Synaptic-Veto Mechanism

    Reverse-phi motion is the illusory reversal of the perceived direction of movement when the stimulus contrast is reversed in successive frames. Livingstone, Tsao, and Conway (2000) reported that direction-selective cells in striate cortex of the alert macaque monkey showed reversed excitatory and inhibitory regions when two bars of different contrast were flashed sequentially during a two-bar interaction analysis. While correlation or motion energy models predict the reverse-phi response, it is unclear how neurons can accomplish this. We carried out detailed biophysical simulations of a direction-selective cell model implementing a synaptic shunting scheme. Our results suggest that a simple synaptic-veto mechanism with strong direction selectivity for normal motion cannot account for the observed reverse-phi motion effect. Given the nature of reverse-phi motion, a direct interaction between the ON and OFF pathways, which is missing in the original shunting-inhibition model, is essential to account for the reversal of the response. We therefore propose a double synaptic-veto mechanism in which ON excitatory synapses are gated by both delayed ON inhibition on their null side and delayed OFF inhibition on their preferred side. The converse applies to OFF excitatory synapses. Mapping this scheme onto the dendrites of a direction-selective neuron permits the model to respond best to normal motion in its preferred direction and to reverse-phi motion in its null direction. Two-bar interaction maps showed reversed excitation and inhibition regions when two bars of different contrast were presented.
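    The gating logic can be sketched on a 1-D toy input array with divisive (shunting) vetoes and a one-frame inhibitory delay, as below; the positions, delay, and gain are illustrative assumptions, not the biophysical model itself. With rightward as the preferred direction, a two-frame bright bar stepping rightward excites the unit, while the same step with reversed contrast (reverse-phi) is vetoed; stepped leftward, the reverse-phi pair would instead excite it.

```python
import numpy as np

# Preferred direction is rightward (increasing index). An ON excitatory
# input at position x is divisively vetoed by delayed ON inhibition from
# x+1 (its null side) and delayed OFF inhibition from x-1 (its preferred
# side); the converse holds for OFF excitatory inputs.

def rect(s):
    return np.maximum(s, 0.0)

def cell_response(stim, k=10.0):
    """stim: (frames, positions) signed contrast; returns response per frame."""
    on, off = rect(stim), rect(-stim)
    zero = np.zeros((1, stim.shape[1]))
    d_on = np.vstack([zero, on[:-1]])            # one-frame inhibitory delay
    d_off = np.vstack([zero, off[:-1]])
    gated_on = on / (1.0 + k * (np.roll(d_on, -1, 1) + np.roll(d_off, 1, 1)))
    gated_off = off / (1.0 + k * (np.roll(d_off, -1, 1) + np.roll(d_on, 1, 1)))
    return (gated_on + gated_off).sum(axis=1)    # summed dendritic output

# Two-frame apparent motion with interior bars (np.roll wraparound is moot).
normal = np.zeros((2, 8)); normal[0, 3], normal[1, 4] = 1.0, 1.0   # same contrast
rphi = np.zeros((2, 8)); rphi[0, 3], rphi[1, 4] = 1.0, -1.0        # contrast flips
print(cell_response(normal)[1])  # ~1.0: preferred-direction step excites
print(cell_response(rphi)[1])    # ~0.09: rightward reverse-phi step is vetoed
```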

    Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks

    One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to the inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of a mental load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification methods to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG, which leads to features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field.
    Comment: To be published as a conference paper at ICLR 2016
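    A minimal sketch of the recurrent-convolutional idea: a small CNN encodes each multi-spectral "EEG image" and an LSTM integrates the per-frame codes before classification. The input shape assumed here (3 spectral bands projected onto 32x32 scalp images, a handful of frames) and all layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class EEGRecurrentConvNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame image encoder
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),                              # -> 64 * 8 * 8 features
        )
        self.rnn = nn.LSTM(64 * 8 * 8, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                              # x: (batch, frames, 3, 32, 32)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # encode each frame
        _, (h, _) = self.rnn(feats)                    # integrate over frames
        return self.head(h[-1])                        # classify from last state

logits = EEGRecurrentConvNet()(torch.randn(2, 7, 3, 32, 32))  # shape (2, 4)
```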