
    Intentional Maps in Posterior Parietal Cortex

    The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to cortex and operate in the PPC in the context of sensory-motor transformations.

    How Laminar Frontal Cortex and Basal Ganglia Circuits Interact to Control Planned and Reactive Saccades

    The basal ganglia and frontal cortex together allow animals to learn adaptive responses that acquire rewards when prepotent reflexive responses are insufficient. Anatomical studies show a rich pattern of interactions between the basal ganglia and distinct frontal cortical layers. Analysis of the laminar circuitry of the frontal cortex, together with its interactions with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, provides new insight into how these brain regions interact to learn and perform complexly conditioned behaviors. A neural model whose cortical component represents the frontal eye fields captures these interacting circuits. Simulations of the neural model illustrate how it provides a functional explanation of the dynamics of 17 physiologically identified cell types found in these areas. The model predicts how action planning or priming (in cortical layers III and VI) is dissociated from execution (in layer V), how a cue may serve either as a movement target or as a discriminative cue to move elsewhere, and how the basal ganglia help choose among competing actions. The model simulates neurophysiological, anatomical, and behavioral data about how monkeys perform saccadic eye movement tasks, including fixation; single saccade, overlap, gap, and memory-guided saccades; anti-saccades; and parallel search among distractors. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-1309, N00014-95-1-0657); National Science Foundation (IRI-97-20333)

    Target Selection by Frontal Cortex During Coordinated Saccadic and Smooth Pursuit Eye Movement

    Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth pursuit eye movements. In particular, the saccadic and smooth pursuit systems interact, often choosing the same target and maximizing its visibility through time. How do multiple brain regions, including frontal cortical areas, interact to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade-pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas MT, MST, FPA, DLPN; saccade specification, selection, and planning areas LIP, FEF, SNr, SC; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results contrast with the simplest parallel model, in which there are no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from predictions of the simplest parallel model, which are explained by the current model. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Encoding of Intention and Spatial Location in the Posterior Parietal Cortex

    The posterior parietal cortex is functionally situated between sensory cortex and motor cortex. The responses of cells in this area are difficult to classify as strictly sensory or motor, since many have both sensory- and movement-related activities, as well as activities related to higher cognitive functions such as attention and intention. In this review we will provide evidence that the posterior parietal cortex is an interface between sensory and motor structures and performs various functions important for sensory-motor integration. The review will focus on two specific sensory-motor tasks: the formation of motor plans and the abstract representation of space. Cells in the lateral intraparietal area, a subdivision of the parietal cortex, have activity related to eye movements the animal intends to make. This finding represents the lowest stage in the sensory-motor cortical pathway in which activity related to intention has been found and may represent the cortical stage in which sensory signals go "over the hump" to become intentions and plans to make movements. The second part of the review will discuss the representation of space in the posterior parietal cortex. Encoding spatial locations is an essential step in sensory-motor transformations. Since movements are made to locations in space, these locations should be coded invariant to eye and head position and to the sensory modality signaling the target for a movement. Data will be reviewed demonstrating that there exists in the posterior parietal cortex an abstract representation of space that is constructed from the integration of visual, auditory, vestibular, eye position, and proprioceptive head position signals. This representation is in the form of a population code, and the above signals are not combined in a haphazard fashion. Rather, they are brought together using a specific operation to form "planar gain fields" that are the common foundation of the population code for the neural construct of space.
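The "planar gain field" idea described in this abstract can be sketched numerically: a neuron's response is modeled as a retinotopic tuning curve multiplicatively scaled by a planar (linear) function of eye position, so that a population of such neurons implicitly carries head-centered target information. A minimal illustration, with all tuning parameters invented for the example:

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos, pref_retinal, gain_slope):
    """Gaussian retinotopic tuning, multiplicatively scaled by a planar
    (linear) eye-position gain -- the 'planar gain field'."""
    tuning = np.exp(-((retinal_pos - pref_retinal) ** 2) / (2 * 10.0 ** 2))
    gain = 1.0 + gain_slope * eye_pos          # planar gain, clipped at zero
    return tuning * max(gain, 0.0)

# A fixed head-centered target, viewed from two different eye positions,
# lands on different retinal positions; the eye-position gain lets the
# population as a whole still signal the head-centered location.
head_target = 5.0
for eye in (-10.0, 10.0):
    retinal = head_target - eye                # retinal = head-centered - eye
    r = gain_field_response(retinal, eye, pref_retinal=15.0, gain_slope=0.02)
    print(f"eye={eye:+.0f} deg, retinal={retinal:+.0f} deg, response={r:.3f}")
```

The multiplicative (rather than additive) combination is the key design point: a downstream unit reading out a weighted sum of many such gain-modulated neurons can recover position in a frame independent of eye position.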

    Human Amygdala in Sensory and Attentional Unawareness: Neural Pathways and Behavioural Outcomes

    One of the neural structures most often implicated in the processing of emotional signals in the absence of visual awareness is the amygdala. In this chapter, we review current evidence from human neuroscience, in healthy participants and brain-damaged patients, on the role of the amygdala during non-conscious (visual) perception of emotional stimuli. Nevertheless, there is as yet no consensus on the limits and conditions that affect the extent of the amygdala's response without focused attention or awareness. We propose to distinguish between attentional unawareness, a condition wherein the stimulus is potentially accessible to enter visual awareness but fails to do so because attention is diverted, and sensory unawareness, in which the stimulus fails to enter awareness because its normal processing in the visual cortex is suppressed. Within this conceptual framework, some of the apparently contradictory findings gain new coherence and converge on the role of the amygdala in supporting different types of non-conscious emotion processing. Amygdala responses in the absence of awareness are linked to different functional mechanisms and are driven by more complex neural networks than commonly assumed. Acknowledging this complexity can help foster new studies on amygdala functions without awareness and their impact on human behaviour.

    Effect of Allocentric Landmarks on Primate Gaze Behaviour in a Cue Conflict Task

    The brain can remember the location of a peripheral target relative to the self (egocentric) or to an external landmark (allocentric). The relative reliabilities of egocentric and allocentric coding have been examined for reach, but never in the gaze control system. In this study, we utilized a cue conflict task to create a dissociation between egocentric and allocentric information and assessed the effect of allocentric cues on gaze behaviour in two macaque monkeys. The results showed that gaze behaviour reflects a combination of both reference frames, weighted by the reliability of the allocentric cue. We also found that the allocentric cue exerted significantly more influence when it was located closer to the fixation point, and less when it was shifted further away from the fixation point or the original target. Our findings suggest that the influence of allocentric cues on gaze behaviour depends on various gaze parameters.
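The reliability-dependent weighting this abstract describes is commonly modeled as Bayesian cue integration, where each cue's weight is proportional to its inverse variance. A minimal sketch; the numbers are invented, and the monkeys' actual weighting need not be statistically optimal:

```python
def combine_cues(ego_estimate, ego_var, allo_estimate, allo_var):
    """Inverse-variance (reliability-weighted) combination of an egocentric
    and an allocentric estimate of target location (1-D, in degrees)."""
    w_ego = (1.0 / ego_var) / (1.0 / ego_var + 1.0 / allo_var)
    w_allo = 1.0 - w_ego
    combined = w_ego * ego_estimate + w_allo * allo_estimate
    combined_var = 1.0 / (1.0 / ego_var + 1.0 / allo_var)
    return combined, combined_var, w_allo

# In a cue-conflict trial the landmark is shifted, so the two cues disagree;
# the gaze endpoint lands between them, closer to the more reliable cue.
gaze, var, w_allo = combine_cues(ego_estimate=0.0, ego_var=4.0,
                                 allo_estimate=10.0, allo_var=1.0)
print(f"gaze endpoint={gaze:.1f} deg, allocentric weight={w_allo:.2f}")
# → gaze endpoint=8.0 deg, allocentric weight=0.80
```

Fitting the empirical allocentric weight per condition (e.g., as a function of cue-to-fixation distance) is one way the distance effect reported above could be quantified.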

    Spatial Transformations in Frontal Cortex During Memory-Guided Head-Unrestrained Gaze Shifts

    We constantly orient our line of sight (i.e., gaze) to external objects in our environment. One of the central questions in sensorimotor neuroscience concerns how visual input (registered on the retina) is transformed into appropriate signals that drive gaze shifts, which comprise coordinated movements of the eyes and the head. In this dissertation I investigated the function of a node in the frontal cortex known as the frontal eye field (FEF) by examining the spatial transformations that occur within this structure. The FEF is implicated as a key node in gaze control and as part of the working memory network. I recorded the activity of single FEF neurons in head-unrestrained monkeys as they performed a simple memory-guided gaze task, which required delayed gaze shifts (by a few hundred milliseconds) towards remembered visual stimuli. By utilizing an analysis method which fits spatial models to neuronal response fields, I identified the spatial code embedded in neuronal activity related to vision (visual response), memory (delay response), and gaze shift (movement response). First (Chapter 2), spatial transformations that occur within the FEF were identified by comparing spatial codes in visual and movement responses. I showed eye-centered dominance in both neuronal responses (and excluded head- and space-centered coding); however, whereas the visual response encoded target position, the movement response encoded the position of the imminent gaze shift (and not its independent eye and head components), and this was observed even within single neurons. In Chapter 3, I characterized the time-course of this target-to-gaze transition by identifying the spatial code during the intervening delay period.
The results from this study highlighted two major transitions within the FEF: a gradual transition during the visual-delay-movement extent of delay-responsive neurons, followed by a discrete transition between delay-responsive neurons and pre-saccadic neurons that exclusively fire around the time of the gaze movement. These results show that the FEF is involved in memory-based transformations in gaze control; but instead of encoding specific movement parameters (eye and head), it encodes the desired gaze endpoint. The representations of the movement goal are subject to noise, and this noise accumulates at different stages related to different mechanisms.
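The model-fitting logic behind this kind of analysis can be sketched: candidate spatial models (e.g., target position vs. final gaze position, both in eye coordinates) each predict a neuron's response field in their own frame, and the model leaving the smallest fit residuals is taken as the neuron's spatial code. Trial-to-trial gaze errors are what dissociate the two models. A toy version with a synthetic "gaze-coding" neuron; the Gaussian tuning and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trials: target position, and a noisy (inaccurate) gaze endpoint.
# The gaze errors are what let the fit distinguish target from gaze coding.
target = rng.uniform(-20, 20, size=200)
gaze = target + rng.normal(0, 5, size=200)

# A neuron whose firing depends on the gaze endpoint, not the target
rate = np.exp(-(gaze - 5.0) ** 2 / (2 * 8.0 ** 2)) + rng.normal(0, 0.05, 200)

def fit_residual(position, rate):
    """Fit a Gaussian response field over `position` by scanning the tuning
    center; return the residual sum of squares of the best fit."""
    best = np.inf
    for center in np.linspace(-20, 20, 81):
        pred = np.exp(-(position - center) ** 2 / (2 * 8.0 ** 2))
        best = min(best, np.sum((rate - pred) ** 2))
    return best

# The gaze-coordinate model fits this neuron better than the target model
print(fit_residual(target, rate) > fit_residual(gaze, rate))   # True
```

The published analyses fit full 2-D response fields and intermediate frames as well; this sketch only shows the residual-comparison principle.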

    Neural Mechanism of Blindsight in a Macaque Model

    Some patients with damage to the primary visual cortex (V1) exhibit visuomotor ability despite loss of visual awareness, a phenomenon termed “blindsight”. We review a series of studies conducted mainly in our laboratory on macaque monkeys with unilateral V1 lesions to reveal the neural pathways underlying visuomotor transformation and the cognitive capabilities retained in blindsight. After lesioning, recovery of visually guided saccades toward the lesion-affected visual field takes several weeks. In addition to the lateral geniculate nucleus, the pathway from the superior colliculus to the pulvinar participates in visuomotor processing in blindsight. At the cortical level, bilateral lateral intraparietal regions become critically involved in saccade control. These results suggest that the visual circuits undergo drastic changes while the monkey acquires blindsight. In these animals, analysis based on signal detection theory applied to behaviour in the “Yes–No” task indicates reduced sensitivity to visual targets, suggesting that visual awareness is impaired. Saccades become less accurate, decisions become less deliberate, and some forms of bottom-up attention are impaired. However, a variety of cognitive functions are retained, such as saliency detection during free viewing, top-down attention, short-term spatial memory, and associative learning. These observations indicate that blindsight is not a low-level sensory-motor response; rather, the residual visual inputs can access these cognitive capabilities. Based on these results, we suggest that the macaque model of blindsight replicates type II blindsight patients, who experience some “feeling” of objects that guides cognitive capabilities we naïvely think are impossible without phenomenal consciousness.
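The signal detection analysis mentioned above separates sensitivity from response bias: from the hit and false-alarm rates in a Yes–No task one computes d′ (sensitivity) and a criterion c (bias). A minimal sketch with made-up rates; a lesion-affected hemifield would be expected to show a reduced d′:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from Yes-No hit and false-alarm rates
    (rates must lie strictly between 0 and 1)."""
    z = NormalDist().inv_cdf              # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates for an intact vs. a lesion-affected hemifield:
print(dprime_criterion(0.95, 0.05))       # high sensitivity, zero bias
print(dprime_criterion(0.70, 0.30))       # reduced sensitivity, zero bias
```

Because d′ is bias-free, a drop in d′ after the lesion reflects degraded sensory evidence rather than a mere shift in the monkey's willingness to report "Yes".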

    Amygdala response to emotional stimuli without awareness: Facts and interpretations

    Over the past two decades, evidence has accumulated that the human amygdala exerts some of its functions also when the observer is not aware of the content, or even presence, of the triggering emotional stimulus. Nevertheless, there is as yet no consensus on the limits and conditions that affect the extent of the amygdala's response without focused attention or awareness. Here we review past and recent studies on this subject, examining neuroimaging literature on healthy participants as well as brain-damaged patients, and we comment on their strengths and limits. We propose a theoretical distinction between processes involved in attentional unawareness, wherein the stimulus is potentially accessible to enter visual awareness but fails to do so because attention is diverted, and in sensory unawareness, wherein the stimulus fails to enter awareness because its normal processing in the visual cortex is suppressed. We argue that this distinction, along with data sampling amygdala responses with high temporal resolution, helps to appreciate the multiplicity of functional and anatomical mechanisms centered on the amygdala and supporting its role in non-conscious emotion processing. Separate, but interacting, networks relay visual information to the amygdala exploiting different computational properties of subcortical and cortical routes, thereby supporting amygdala functions at different stages of emotion processing. This view reconciles some apparent contradictions in the literature, as well as seemingly contrasting proposals, such as the dual stage and the dual route model. We conclude that evidence in favor of the amygdala response without awareness is solid, albeit this response originates from different functional mechanisms and is driven by more complex neural networks than commonly assumed. Acknowledging the complexity of such mechanisms can foster new insights on the varieties of amygdala functions without awareness and their impact on human behavior.