2,939 research outputs found

    Intentional maps in posterior parietal cortex

    Get PDF
    The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to cortex and operate in the PPC in the context of sensory-motor transformations.

    EyeRIS User's Manual

    Full text link

    Enhancing retinal images by nonlinear registration

    Full text link
    Being able to image the human retina in high resolution opens a new era in many important fields, such as pharmacological research on retinal diseases and research on human cognition, the nervous system, metabolism, and the blood stream, to name a few. In this paper, we propose to share the knowledge acquired in the fields of optics and imaging in solar astrophysics in order to improve retinal imaging at very high spatial resolution, with a view to supporting medical diagnosis. The main purpose would be to assist health care practitioners by enhancing retinal images and detecting abnormal features. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow the evolution of structures, using correlation techniques borrowed from solar astronomy. Another purpose is to define tracers of movement by analyzing local correlations, so as to follow the proper motions in an image from one moment to the next, such as changes in optical flow, which would be of high interest for medical diagnosis. Comment: 21 pages, 7 figures, submitted to Optics Communications
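
    As an illustration of the registration idea described above, here is a minimal sketch of local correlation tracking: for each tile of a reference frame, the integer shift of a second frame that maximizes normalized correlation is taken as a local displacement. The tile size, search radius, and normalization choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of local correlation tracking (LCT) for nonlinear image
# registration, assuming two grayscale frames of equal size as NumPy arrays.
# Tile size and search radius are illustrative only.
import numpy as np

def local_correlation_shifts(ref, img, tile=32, search=5):
    """For each tile of `ref`, find the integer shift of `img` that
    maximizes normalized correlation, giving a coarse displacement field."""
    h, w = ref.shape
    shifts = np.zeros((h // tile, w // tile, 2))
    for i in range(h // tile):
        for j in range(w // tile):
            y, x = i * tile, j * tile
            patch = ref[y:y + tile, x:x + tile]
            patch = (patch - patch.mean()) / (patch.std() + 1e-9)
            best, best_dyx = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + tile > h or xx + tile > w:
                        continue
                    cand = img[yy:yy + tile, xx:xx + tile]
                    cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                    score = float((patch * cand).mean())
                    if score > best:
                        best, best_dyx = score, (dy, dx)
            shifts[i, j] = best_dyx
    # The per-tile (dy, dx) values would then be interpolated into a smooth
    # warp field to register the frames nonlinearly.
    return shifts
```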

    Temporal structure in neuronal activity during working memory in Macaque parietal cortex

    Full text link
    A number of cortical structures are reported to have elevated single unit firing rates sustained throughout the memory period of a working memory task. How the nervous system forms and maintains these memories is unknown, but reverberating neuronal network activity is thought to be important. We studied the temporal structure of single unit (SU) activity and simultaneously recorded local field potential (LFP) activity from area LIP in the inferior parietal lobe of two awake macaques during a memory-saccade task. Using multitaper techniques for spectral analysis, which play an important role in obtaining the present results, we find elevations in spectral power in a 50--90 Hz (gamma) frequency band during the memory period in both SU and LFP activity. The activity is tuned to the direction of the saccade, providing evidence for temporal structure that codes for movement plans during working memory. We also find that SU and LFP activity are coherent during the memory period in the 50--90 Hz gamma band, and that no consistent relation is present during simple fixation. Finally, we find organized LFP activity in a 15--25 Hz frequency band that may be related to movement execution and preparatory aspects of the task. Neuronal activity could be used to control a neural prosthesis, but SU activity can be hard to isolate with cortical implants. As the LFP is easier to acquire than SU activity, our finding of rich temporal structure in LFP activity related to movement planning and execution may accelerate the development of this medical application. Comment: Originally submitted to the neuro-sys archive, which was never publicly announced (was 0005002)
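
    As a hedged sketch of the multitaper approach mentioned above, the snippet below averages periodograms of DPSS-tapered copies of a signal to estimate spectral power, then reads out mean power in the 50--90 Hz gamma band. The simulated LFP, sampling rate, and time-bandwidth product are illustrative, not the study's actual data or parameters.

```python
# Minimal sketch of a multitaper power-spectrum estimate for an LFP trace,
# as one way to examine gamma-band (50-90 Hz) power during a memory period.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=3.0, K=5):
    """Average the periodograms of K DPSS-tapered copies of x."""
    n = len(x)
    tapers = dpss(n, NW, Kmax=K)                 # shape (K, n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(tapers * x[None, :], axis=1)) ** 2
    return freqs, spectra.mean(axis=0)

# Example: gamma-band power of one simulated 1-s memory-period LFP snippet.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 70 * t) + 0.5 * np.random.randn(t.size)
freqs, psd = multitaper_psd(lfp, fs)
gamma_power = psd[(freqs >= 50) & (freqs <= 90)].mean()
```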

    Multimodal Representation of Space in the Posterior Parietal Cortex and its use in Planning Movements

    Get PDF
    Recent experiments are reviewed that indicate that sensory signals from many modalities, as well as efference copy signals from motor structures, converge in the posterior parietal cortex in order to code the spatial locations of goals for movement. These signals are combined using a specific gain mechanism that enables the different coordinate frames of the various input signals to be combined into common, distributed spatial representations. These distributed representations can be used to convert the sensory locations of stimuli into the appropriate motor coordinates required for making directed movements. Within these spatial representations of the posterior parietal cortex are neural activities related to higher cognitive functions, including attention. We review recent studies showing that the encoding of intentions to make movements is also among the cognitive functions of this area.
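
    One common way to model the gain mechanism described above is a gain field: units with retinotopic tuning whose response amplitude is multiplicatively scaled by eye position, so that the population jointly, and implicitly, encodes head-centered target location. The sketch below is illustrative only; the unit count, tuning width, and gain slopes are assumptions, not the reviewed data.

```python
# Minimal sketch of gain-field coding: retinotopically tuned units whose
# amplitude is scaled (not shifted) by eye position, a standard model of how
# PPC-like populations implicitly encode head-centered target locations.
import numpy as np

retinal_prefs = np.linspace(-40, 40, 9)      # preferred retinal positions (deg)
gain_slopes = np.linspace(-0.02, 0.02, 9)    # linear eye-position gain per unit

def population_response(target_retinal, eye_position, sigma=10.0):
    tuning = np.exp(-(target_retinal - retinal_prefs) ** 2 / (2 * sigma ** 2))
    gain = 1.0 + gain_slopes * eye_position   # multiplicative eye-position gain
    return tuning * gain

# The same retinal stimulus yields different response vectors at different eye
# positions, so downstream units can read out head-centered location
# (retinal + eye position) from this distributed code.
r_left = population_response(target_retinal=10.0, eye_position=-15.0)
r_right = population_response(target_retinal=10.0, eye_position=+15.0)
```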

    View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention Are Coordinated Using Surface-Based Attentional Shrouds

    Full text link
    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    How Laminar Frontal Cortex and Basal Ganglia Circuits Interact to Control Planned and Reactive Saccades

    Full text link
    The basal ganglia and frontal cortex together allow animals to learn adaptive responses that acquire rewards when prepotent reflexive responses are insufficient. Anatomical studies show a rich pattern of interactions between the basal ganglia and distinct frontal cortical layers. Analysis of the laminar circuitry of the frontal cortex, together with its interactions with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, provides new insight into how these brain regions interact to learn and perform complexly conditioned behaviors. A neural model whose cortical component represents the frontal eye fields captures these interacting circuits. Simulations of the neural model illustrate how it provides a functional explanation of the dynamics of 17 physiologically identified cell types found in these areas. The model predicts how action planning or priming (in cortical layers III and VI) is dissociated from execution (in layer V), how a cue may serve either as a movement target or as a discriminative cue to move elsewhere, and how the basal ganglia help choose among competing actions. The model simulates neurophysiological, anatomical, and behavioral data about how monkeys perform saccadic eye movement tasks, including fixation; single saccade, overlap, gap, and memory-guided saccades; anti-saccades; and parallel search among distractors.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-l-0409, N00014-92-J-1309, N00014-95-1-0657); National Science Foundation (IRI-97-20333)
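
    As a rough, highly simplified sketch (not the model's actual equations), the snippet below illustrates the gating idea: competing movement plans integrate their inputs and inhibit each other, while an execution layer is driven only when a basal-ganglia-like GO signal opens the gate. All names, parameters, and dynamics here are illustrative assumptions.

```python
# Toy sketch: plan activity builds up and competes while a "GO" gate withholds
# execution; execution-layer activity is driven by plan * gate.
import numpy as np

def gated_execution(plan_inputs, go_signal, steps=50, dt=0.1):
    """plan_inputs: drive to competing movement plans; go_signal: 0 (hold) or 1."""
    plan = np.zeros_like(plan_inputs, dtype=float)
    execute = np.zeros_like(plan)
    for _ in range(steps):
        # plans integrate their inputs and compete via lateral inhibition
        plan += dt * (-plan + plan_inputs - 0.5 * (plan.sum() - plan))
        plan = np.clip(plan, 0.0, None)
        # execution layer is driven only when the gate opens
        execute += dt * (-execute + go_signal * plan)
    return plan, execute

plan, execute = gated_execution(np.array([1.0, 0.6]), go_signal=0.0)  # priming only
plan, execute = gated_execution(np.array([1.0, 0.6]), go_signal=1.0)  # gated release
```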

    Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    Get PDF
    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
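
    As a toy illustration of the attentional-shroud idea, the sketch below derives a form-fitting attention distribution by smoothing an object's surface mask and then shifts it by a planned eye-movement vector to mimic predictive remapping. The mask, smoothing width, and shift are illustrative assumptions, not the 3D ARTSCAN model itself.

```python
# Toy sketch of a form-fitting "attentional shroud": spatial attention as a
# smoothed, normalized copy of an object's surface mask, which can be shifted
# ("remapped") by a planned eye-movement vector before the eyes actually move.
import numpy as np
from scipy.ndimage import gaussian_filter, shift

surface = np.zeros((64, 64))
surface[20:40, 25:45] = 1.0                   # toy object surface representation

shroud = gaussian_filter(surface, sigma=3.0)  # form-fitting attention distribution
shroud /= shroud.max()

saccade = (-8.0, 5.0)                         # planned eye movement (dy, dx) in pixels
remapped = shift(shroud, saccade, order=1)    # predictive remapping of the shroud
```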