
    Saccades to a remembered location elicit spatially specific activation in human retinotopic visual cortex

    The possible impact of saccades to remembered target locations upon human visual cortex was investigated using functional magnetic resonance imaging (fMRI). A specific location in the upper-right or upper-left visual quadrant served as the saccadic target. After a delay of 2,400 msec, an auditory signal indicated whether to execute a saccade to that location (go trial) or to cancel the saccade and remain centrally fixated (no-go trial). Group fMRI analysis revealed activation specific to the remembered target location for executed saccades, in the contralateral lingual gyrus. No-go trials produced similar, albeit significantly reduced, effects. Individual retinotopic mapping confirmed that on go trials, quadrant-specific activations arose in those parts of ventral V1, V2, and V3 that coded the target location for the saccade, whereas on no-go trials, only the corresponding parts of V2 and V3 were significantly activated. These results indicate that a spatial-motor saccadic task (i.e., making an eye movement to a remembered location) is sufficient to activate the retinotopic visual cortex spatially corresponding to the target location, and that this activation is also present (though reduced) when no saccade is executed. We discuss the implications of finding that saccades to remembered locations can affect early visual cortex, not just those structures conventionally associated with eye movements, in relation to recent ideas about attention, spatial working memory, and the notion that recently activated representations can be "refreshed" when needed.

    Spatial Updating in Human Cortex

    Single neurons in several cortical areas in monkeys update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. The central hypothesis here is that spatial updating also occurs in humans and that it can be visualized with functional MRI. In Chapter 2, we describe experiments in which we tested the role of human parietal cortex in spatial updating. We scanned subjects during a task that involved remapping of visual signals across hemifields. This task is directly analogous to the single-step saccade task used to test spatial updating in monkeys. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. Our results demonstrate that updating of visual information occurs in human parietal cortex and can be visualized with fMRI. The experiments in Chapter 2 show that updated visual responses have a characteristic latency and response shape. Chapter 3 describes a statistical model for estimating these parameters. The method is based on a nonlinear, fully Bayesian, hierarchical model that decomposes the fMRI time series data into baseline, smooth drift, activation signal, and noise. This chapter shows that this model performs well relative to commonly used general linear models. In Chapter 4, we use the statistical method described in Chapter 3 to test for the presence of spatial updating activity in human extrastriate visual cortex. We identified the borders of several retinotopically defined visual areas in the occipital lobe. We then tested for spatial updating using the single-step saccade task. We found a roughly monotonic relationship between the strength of updating activity and position in the visual area hierarchy. We observed the strongest responses in area V4 and the weakest responses in V1. We conclude that updating is not restricted to brain regions involved primarily in attention and the generation of eye movements, but rather is present in occipital lobe visual areas as well.
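
    For readers unfamiliar with the decomposition described for Chapter 3, the following is a minimal sketch of the assumed signal structure only: the fMRI time series is modelled as baseline plus smooth drift plus activation plus noise. The sketch fits the components by ordinary least squares; it does not reproduce the thesis's nonlinear, fully Bayesian, hierarchical estimation, and all data and parameter values below are simulated and hypothetical.

```python
import numpy as np

# Minimal sketch of the assumed decomposition of an fMRI time series:
#   y(t) = baseline + drift(t) + activation(t) + noise(t)
# Fit here by ordinary least squares; the thesis instead uses a
# nonlinear, fully Bayesian, hierarchical model. All values simulated.

rng = np.random.default_rng(0)
n = 200                                  # number of volumes (hypothetical)
t = np.arange(n, dtype=float)

boxcar = ((t % 40) < 10).astype(float)               # simple block design
hrf = np.exp(-((np.arange(15) - 5.0) ** 2) / 8.0)    # crude HRF stand-in
activation = np.convolve(boxcar, hrf)[:n]            # task regressor

y = 100.0 + 0.02 * t + 2.0 * activation + rng.normal(0.0, 1.0, n)

# Design matrix: intercept (baseline), linear drift, task regressor
X = np.column_stack([np.ones(n), t, activation])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated [baseline, drift slope, amplitude]:", np.round(beta, 3))
```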

    Neural Dynamics of Saccadic and Smooth Pursuit Eye Movement Coordination during Visual Tracking of Unpredictably Moving Targets

    How does the brain use eye movements to track objects that move in unpredictable directions and at unpredictable speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets, and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from the predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to these questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN, and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model's ability to functionally explain and quantitatively simulate anatomical, neurophysiological, and behavioral data about SAC-SPEM tracking. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
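
    As a functional illustration only, not the paper's neural circuit model spanning MT, MST, FEF, SC, and the cerebellum, the sketch below shows the division of labour the abstract describes: pursuit continuously cancels gaze velocity error, while a saccade cancels accumulated position error once it exceeds a threshold. All gains, thresholds, and initial conditions are hypothetical.

```python
import numpy as np

# Toy illustration of the functional principle only: smooth pursuit
# continuously reduces gaze *velocity* error, while a saccade cancels
# accumulated *position* error once it exceeds a threshold.

dt = 0.001                              # 1 ms time step
target_pos, target_vel = 0.0, 10.0      # deg, deg/s (hypothetical)
gaze_pos, gaze_vel = -3.0, 0.0          # start 3 deg off target

pursuit_gain = 25.0                     # 1/s, velocity-error feedback gain
saccade_threshold = 1.0                 # deg of position error

n_saccades = 0
for _ in range(int(2.0 / dt)):          # simulate 2 s of tracking
    target_pos += target_vel * dt
    if abs(target_pos - gaze_pos) > saccade_threshold:
        gaze_pos = target_pos           # idealized corrective saccade
        n_saccades += 1
    gaze_vel += pursuit_gain * (target_vel - gaze_vel) * dt
    gaze_pos += gaze_vel * dt

print(f"saccades: {n_saccades}, final position error: "
      f"{target_pos - gaze_pos:.3f} deg")
```

    With these made-up parameters a single catch-up saccade suffices, after which pursuit holds the residual position error below the saccadic threshold, mirroring the cooperative error cancellation the abstract describes.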

    Multiple Reference Frames for Saccadic Planning in the Human Parietal Cortex


    Updating spatial working memory in a dynamic visual environment

    The present review describes recent developments regarding the role of the eye movement system in representing spatial information and keeping track of the locations of relevant objects. First, we discuss the active vision perspective and why eye movements are considered crucial for perception and attention. The second part focuses on the question of how the oculomotor system is used to represent spatial attentional priority, and on the role of the oculomotor system in the maintenance of this spatial information. Lastly, we discuss recent findings demonstrating rapid updating of information across saccadic eye movements. We argue that the eye movement system plays a key role in maintaining and rapidly updating spatial information. Furthermore, we suggest that rapid updating emerges primarily to ensure that actions are minimally affected by intervening eye movements, allowing us to interact efficiently with the world around us.
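
    The coordinate-level core of trans-saccadic updating can be stated in a few lines: a location stored in retinotopic (eye-centred) coordinates must be shifted by the inverse of each saccade vector to keep pointing at the same place in the world. Below is a minimal sketch under that textbook assumption; the function name and the numbers are illustrative, not from the review.

```python
import numpy as np

# Coordinate-level sketch of trans-saccadic updating: subtracting the
# saccade vector from an eye-centred location keeps it pointing at the
# same world location after the eye movement.

def remap(retinotopic_loc, saccade_vec):
    """Update an eye-centred location across one saccade."""
    return retinotopic_loc - saccade_vec

loc = np.array([5.0, 2.0])       # remembered target, deg from fovea
saccade = np.array([8.0, 0.0])   # eye moves 8 deg to the right

loc = remap(loc, saccade)
print(loc)   # [-3.  2.]: the target is now 3 deg left of fixation
```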

    Mechanisms of top-down visual spatial attention: computational and behavioral investigations

    This thesis examines the mechanisms underlying visual spatial attention. In particular I focused on top-down or voluntary attention, namely the ability to select relevant information and discard the irrelevant according to our goals. Given the limited processing resources of the human brain, which do not allow all the available information to be processed to the same degree, the ability to correctly allocate processing resources is fundamental to the accomplishment of most everyday tasks. The cost of misdirected attention is that we could miss some relevant information, with potentially serious consequences. In the first study (chapter 2) I will address the issue of the neural substrates of visual spatial attention: what are the neural mechanisms that allow the deployment of visual spatial attention? According to the premotor theory, orienting attention to a location in space is equivalent to planning an eye movement to the same location, an idea strongly supported by neuroimaging and neurophysiological evidence. Accordingly, in this study I will present a model that can account for several attentional effects without requiring additional mechanisms separate from the circuits that perform sensorimotor transformations for eye movements. Moreover, it includes a mechanism that makes it possible, within the framework of the premotor theory, to explain dissociations between attention and eye movements that may be invoked to disprove the theory. In the second model presented (chapter 3) I will further investigate the computational mechanisms underlying sensorimotor transformations. Specifically, I will show that a representation in which the amplitude of visual responses is modulated by postural signals is both efficient and plausible, emerging also in a neural network model trained through unsupervised learning (i.e., using only signals locally available at the neuron level). Ultimately this result gives additional support to the approach adopted in the first model. Next, I will present a series of behavioral studies. In the first (chapter 4) I will show that spatial constancy of attention (i.e., the ability to sustain attention at a spatial location across eye movements) depends on some properties of the image, namely the presence of continuous visual landmarks at the attended locations. Importantly, this finding helps reconcile several contrasting recent results. In the second behavioral study (chapter 5), I will investigate an often neglected aspect of spatial cueing paradigms, probably the most widely used technique in studies of covert attention: the role of cue predictivity (i.e., the extent to which the spatial cue correctly indicates the location where the target stimulus will appear). Results show that, independently of participants' awareness, changes in predictivity result in changes in spatial validity effects, and that reliable shifts of attention can take place even in the absence of a predictive cue. In sum, these results question the appropriateness of using predictive cues for delineating purely voluntary shifts of spatial attention. Finally, in the last study I will use a psychophysiological measure, the diameter of the eye's pupil, to investigate intensive aspects of attention. Event-related pupil dilations accurately mirrored changes in visuospatial awareness induced by a dual-task manipulation that consumed attentional resources. Moreover, results of the primary spatial monitoring task revealed a significant rightward bias, indicated by a greater proportion of missed targets in the left hemifield. Interestingly, this result mimics the extinction to double simultaneous stimulation (i.e., the failure to respond to a stimulus when it is presented simultaneously with another stimulus) that is often found in patients with unilateral brain damage. Overall, these studies present an emerging picture of attention as a complex mechanism that, even in its volitional aspects, is modulated by other non-volitional factors, both external and internal to the individual.
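
    The gain-modulated representation investigated in chapter 3 can be sketched compactly: a neuron's retinotopic (Gaussian) tuning curve is scaled multiplicatively by a planar function of eye position, the standard "gain field" form. The sketch below assumes that form; function name and parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

# Sketch of a standard gain-field code: Gaussian retinotopic tuning
# scaled multiplicatively by a planar function of eye position. The
# thesis reports such amplitude modulation emerging from unsupervised
# learning; nothing here reproduces that model.

def gain_field_response(stim_retinal, eye_pos,
                        pref_retinal=0.0, sigma=5.0, gain_slope=0.03):
    """Retinal tuning (Gaussian) times an eye-position gain (planar)."""
    tuning = np.exp(-(stim_retinal - pref_retinal) ** 2 / (2 * sigma ** 2))
    gain = max(0.0, 1.0 + gain_slope * eye_pos)
    return gain * tuning

for eye in (-20.0, 0.0, 20.0):
    r = gain_field_response(stim_retinal=0.0, eye_pos=eye)
    print(f"eye at {eye:+5.1f} deg -> response {r:.2f}")
```

    The same retinal stimulus thus evokes different response amplitudes at different eye positions, which is what lets downstream populations read out head-centred locations from retinotopic inputs.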

    Diagnostic information use to understand brain mechanisms of facial expression categorization

    Proficient categorization of facial expressions is crucial for normal social interaction. Neurophysiological, behavioural, event-related potential, lesion, and functional neuroimaging techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless process, and the associated arrangement of bilateral networks. These brain areas exhibit consistent and replicable activation patterns, and can be broadly defined to include visual (occipital and temporal), limbic (amygdala), and prefrontal (orbitofrontal) regions. Together, these areas support early perceptual processing, the formation of detailed representations, and the subsequent recognition of expressive faces. Despite the critical role of facial expressions in social communication and extensive work in this area, it is still not known how the brain decodes nonverbal signals in terms of expression-specific features. For these reasons, this thesis investigates the role of these so-called diagnostic facial features at three significant stages in expression recognition: the spatiotemporal inputs to the visual system, the dynamic integration of features in higher visual (occipitotemporal) areas, and early sensitivity to features in V1. In Chapter 1, the basic emotion categories are presented, along with the brain regions that are activated by these expressions. In line with this, the current cognitive theory of face processing is reviewed, including functional and anatomical dissociations within the distributed neural "face network". Chapter 1 also introduces the way in which we measure and use diagnostic information to derive brain sensitivity to specific facial features, and how this is a useful tool with which to understand the spatial and temporal organisation of expression recognition in the brain. In relation to this, hierarchical, bottom-up neural processing is discussed along with high-level, top-down facilitatory mechanisms. Chapter 2 describes an eye-movement study revealing that inputs to the visual system, sampled via fixations, reflect diagnostic information use. Inputs to the visual system dictate the information distributed to cognitive systems during the seamless and rapid categorization of expressive faces. How we perform eye movements during this task informs how task-driven and stimulus-driven mechanisms interact to guide the extraction of information supporting recognition. We recorded the eye movements of observers who categorized the six basic categories of facial expressions. We use a measure of task-relevant information (diagnosticity) to discuss oculomotor behaviour, with focus on two findings. Firstly, fixated regions reveal expression differences. Secondly, across a sequence of fixations, the intersection of fixations with diagnostic information increases. This suggests a top-down drive to acquire task-relevant information, with different functional roles for first and final fixations. A combination of psychophysical studies of visual recognition together with the EEG (electroencephalogram) signal is used to infer the dynamics of feature extraction and use during the recognition of facial expressions in Chapter 3. The results reveal a process that integrates visual information over about 50 milliseconds prior to the face-sensitive N170 event-related potential, starting at the eye region and proceeding gradually towards lower regions. The finding that informative features for recognition are not processed simultaneously but in an orderly progression over a short time period is instructive for understanding the processes involved in visual recognition, and in particular the integration of bottom-up and top-down processes. In Chapter 4 we use fMRI to investigate task-dependent activation to diagnostic features in early visual areas, suggesting top-down mechanisms, as V1 traditionally exhibits only simple response properties. Chapter 3 revealed that diagnostic features modulate the temporal dynamics of brain signals in higher visual areas. Within the hierarchical visual system, however, it is not known whether an early (V1/V2/V3) sensitivity to diagnostic information contributes to categorical facial judgements, conceivably driven by top-down signals triggered in visual processing. Using retinotopic mapping, we reveal task-dependent information extraction within the earliest cortical representation (V1) of two features known to be differentially necessary for face recognition tasks (the eyes and the mouth). This strategic encoding of face images is beyond typical V1 properties and suggests a top-down influence of task extending down to the earliest retinotopic stages of visual processing. The significance of these data is discussed in the context of the cortical face network and bidirectional processing in the visual system. The visual cognition of facial expression processing is concerned with the interactive processing of bottom-up, sensory-driven information and top-down mechanisms relating visual input to categorical judgements. The three experiments presented in this thesis are summarized in Chapter 5 in relation to how diagnostic features can be used to explore such processing in the human brain, leading to proficient facial expression categorization.
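
    At its core, the fixation analysis in Chapter 2 reduces to scoring each fixation by whether it lands in a diagnostic region and tracking that score over the fixation sequence. Here is a minimal sketch of that logic only; the mask, image size, and fixation coordinates are invented for illustration and are not the thesis's data.

```python
import numpy as np

# Minimal sketch of the fixation-scoring logic: mark each fixation by
# whether it falls inside a diagnostic feature region, then inspect how
# the hits evolve over the fixation sequence.

h, w = 256, 256
diagnostic_mask = np.zeros((h, w), dtype=bool)
diagnostic_mask[60:100, 70:190] = True    # hypothetical eye region

# One trial's fixation sequence as (row, col) image coordinates
fixations = [(128, 128), (80, 100), (85, 160)]

hits = [bool(diagnostic_mask[r, c]) for r, c in fixations]
print(hits)   # [False, True, True]: intersection rises over the sequence
```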

    Examining the representation of spatial short-term memories through the lens of resource allocation theory

    This thesis aims to examine the nature of spatial representations in visuospatial working memory (VSWM) and the mechanism by which the oculomotor system supports VSWM maintenance. To examine these research questions, Chapter Two verifies the use of a continuous report task in measuring memory for spatial locations, showing that the representation of spatial locations is affected by the number of to-be-remembered items. In Chapter Three, a strong eccentricity effect in spatial, but not colour, working memory was observed. This result is argued to reflect that the resource involved in spatial working memory relies on topographic mapping. Chapter Four examined the distribution of resources across sequences of spatial locations. Results showed that the serial position effect, and therefore the distribution of resources, depends on whether the full sequence or a single probe is to be recalled. To examine the role of the oculomotor system, saccadic interference in spatial and colour working memory was examined in Chapter Five. Results showed that the oculomotor system is selectively involved in the maintenance of spatial locations in VSWM. Performing multiple delay-period saccades resulted in an increase in guessing, but not imprecision, in spatial working memory. It is argued that spatial locations in VSWM are represented as activity peaks in a topographic cortical map. Within this map, the oculomotor system is involved in maintaining the signal-to-noise ratio of the activity peaks for each of the to-be-remembered locations. This research makes an important and novel contribution to the literature by advancing understanding of the nature of representations within spatial working memory and of the interactions between VSWM and action systems.
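
    The guessing-versus-imprecision distinction in Chapter Five is commonly formalized with a two-component mixture model for continuous report errors (e.g., Zhang & Luck, 2008): a von Mises distribution centred on the target (concentration kappa indexes precision) plus a uniform "guess" distribution. Below is a minimal sketch of such a fit on simulated one-dimensional circular data, not the thesis's spatial data; "more guessing, same imprecision" corresponds to a higher guess rate g with unchanged kappa.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Two-component mixture for continuous report errors: with probability
# 1-g the response error is von Mises (precision ~ kappa), with
# probability g it is a uniform guess. Data simulated for illustration.

rng = np.random.default_rng(1)
n, true_g, true_kappa = 500, 0.2, 8.0
guess = rng.random(n) < true_g
errors = np.where(guess,
                  rng.uniform(-np.pi, np.pi, n),
                  vonmises.rvs(true_kappa, size=n, random_state=rng))

def neg_log_lik(params):
    g, kappa = params
    lik = (1.0 - g) * vonmises.pdf(errors, kappa) + g / (2.0 * np.pi)
    return -np.log(lik).sum()

fit = minimize(neg_log_lik, x0=[0.5, 2.0],
               bounds=[(1e-3, 1 - 1e-3), (0.1, 100.0)])
print("estimated [g, kappa]:", np.round(fit.x, 2))
```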