    Intentional maps in posterior parietal cortex

    The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to cortex and operate in the PPC in the context of sensory-motor transformations.

    Multimodal Representation of Space in the Posterior Parietal Cortex and its use in Planning Movements

    Recent experiments are reviewed that indicate that sensory signals from many modalities, as well as efference copy signals from motor structures, converge in the posterior parietal cortex in order to code the spatial locations of goals for movement. These signals are combined using a specific gain mechanism that enables the different coordinate frames of the various input signals to be combined into common, distributed spatial representations. These distributed representations can be used to convert the sensory locations of stimuli into the appropriate motor coordinates required for making directed movements. Within these spatial representations of the posterior parietal cortex are neural activities related to higher cognitive functions, including attention. We review recent studies showing that the encoding of intentions to make movements is also among the cognitive functions of this area.
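
    The "gain mechanism" referred to here is commonly modeled as gain-field coding: a neuron's retinotopic response is multiplicatively scaled by postural signals such as eye position. The numpy sketch below is a minimal 1D illustration of that idea (all tuning parameters are invented for the example), not a reconstruction of the reviewed experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 200
rf_centers = rng.uniform(-40, 40, n_neurons)       # preferred retinal positions (deg)
gain_slopes = rng.uniform(-0.02, 0.02, n_neurons)  # eye-position gain slopes

def population_response(retinal_pos, eye_pos, sigma=10.0):
    """Gaussian retinotopic tuning, multiplicatively scaled by eye position."""
    tuning = np.exp(-0.5 * ((retinal_pos - rf_centers) / sigma) ** 2)
    gain = np.clip(1.0 + gain_slopes * eye_pos, 0.0, None)  # planar gain field
    return tuning * gain

# The same head-centered location (retinal + eye position = constant) gives
# different population patterns, but eye position and retinal position remain
# jointly decodable from the pattern -- a distributed representation from
# which head-centered coordinates can be read out downstream.
r1 = population_response(retinal_pos=10.0, eye_pos=0.0)
r2 = population_response(retinal_pos=0.0, eye_pos=10.0)
```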

    Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference

    Understanding the brain's capacity to encode complex visual information from a scene and to transform it into a coherent perception of 3D space and into well-coordinated motor commands is among the outstanding questions in the study of integrative brain function. Eye movement methodologies have allowed us to begin addressing these questions in increasingly naturalistic tasks, where eye and body movements are ubiquitous and the applicability of most traditional neuroscience methods is therefore restricted. This review explores foundational issues in (1) how oculomotor and motor control in lab experiments extrapolates to more complex settings and (2) how real-world gaze behavior in turn decomposes into more elementary eye movement patterns. We review the received typology of oculomotor patterns in laboratory tasks and how they map (or fail to map) onto naturalistic gaze behavior. We discuss the multiple coordinate systems needed to represent visual gaze strategies, how the choice of reference frame affects the description of eye movements, and the related but conceptually distinct issue of coordinate transformations between internal representations within the brain.
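
    A concrete example of how the choice of reference frame changes the description of an eye movement: the same eye-in-head rotation corresponds to very different gaze shifts in the world once head and body rotations are composed with it. The sketch below is a generic illustration (yaw-only rotations, invented angles), not an analysis from the review.

```python
import numpy as np

def yaw_matrix(theta_deg):
    """2D rotation matrix for a yaw of theta_deg degrees."""
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Gaze direction measured in the eye-in-head frame (10 deg off straight ahead)
gaze_in_head = yaw_matrix(10.0) @ np.array([1.0, 0.0])

# Compose head-on-body and body-in-world orientations to re-express it
head_on_body = yaw_matrix(20.0)
body_in_world = yaw_matrix(-5.0)
gaze_in_world = body_in_world @ head_on_body @ gaze_in_head

# An eye movement that is large in retinal/head coordinates can be a small
# (or zero) gaze shift in world coordinates when head and body counter-rotate,
# which is why lab taxonomies defined in head-fixed setups do not transfer
# one-to-one to free-moving behavior.
```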

    Visual perceptual stability and the processing of self-motion information: neurophysiology, psychophysics and neuropsychology

    While we move through our environment, we constantly have to deal with new sensory input. The visual system in particular has to cope with an ever-changing input signal, since we continuously move our eyes. For example, we change our direction of gaze about three times every second, shifting to a new area within our visual field with a fast, ballistic eye movement called a saccade. As a consequence, the entire projection of the surrounding world on our retina moves. Yet, we do not perceive this shift consciously. Instead, we have the impression of a stable world around us, in which objects have a well-defined location. In my thesis I aimed to investigate the neural mechanisms underlying the visual perceptual stability of our environment.

    One hypothesis is that there is a coordinate transformation of the retinocentric input signal to a craniocentric (egocentric) and eventually even to a world-centered (allocentric) frame of reference. Such a transformation into a craniocentric reference frame requires information about both the location of a stimulus on the retina and the current eye position within the head. The physicist Hermann von Helmholtz was one of the first to suggest that such an eye-position signal is available in the brain as an internal copy of the motor plan that is sent to the eye muscles. This so-called efference copy allows the brain to classify actions as self-generated and to differentiate them from externally triggered ones. If we are the creator of an action, we are able to predict its outcome and can take this prediction into account during further processing. For example, if the projection of the environment moves across the retina due to an eye movement, the shift is registered as self-induced and the brain maintains a stable percept of the world. However, if one gently pushes the eye from the side with a finger, we perceive a moving environment. Along the same lines, it is necessary to correctly attribute the movement of the visual field to our own self-motion, e.g., to perform eye movements that account for the additional influence of our own movements.

    The first study of my thesis shows that the perceived location of a stimulus might indeed be a combination of two independent neuronal signals: the position of the stimulus on the retina and information about the current eye position or eye movement, respectively. In this experiment, the mislocalization of briefly presented stimuli, which is characteristic for each type of eye movement, leads to a perceptual localization of stimuli within the area of the blind spot on the retina. Yet, this is the region where the optic nerve leaves the eye, meaning that no photoreceptors are available there to convert light into neuronal signals. Physically, subjects should be blind to stimuli presented in this part of the visual field. In fact, a combination of the actual stimulus position with the specific, error-inducing eye-movement information is able to explain the experimentally measured behavior.

    The second study of my thesis investigates the neural mechanism underlying the mislocalization of briefly presented stimuli during eye movements. Many previous studies using an animal model (the rhesus monkey) revealed internal representations of eye-position signals in various brain regions and thereby confirmed the hypothesis of an efference copy signal within the brain. Although these eye-position signals reflect the actual eye position with good overall accuracy, they also show some spatial and temporal inaccuracies.
    These erroneous representations have previously been suggested as the source of perceptual mislocalization during saccades. The second study of my thesis extends this hypothesis to the mislocalization during smooth pursuit eye movements, the kind of eye movement we perform when we want to continuously track a moving object. I showed that the activity of neurons in the ventral intraparietal area of the rhesus monkey adequately represents the actual eye position during smooth pursuit. However, the internal eye-position signal constantly led the real eye position in the direction of the ongoing eye movement. In combination with a distortion of the visual map due to an uneven allocation of attention toward the future stimulus position, this results in a mislocalization pattern during smooth pursuit that closely resembles the one typically measured in psychophysical experiments. Hence, on the one hand, the efference copy of the eye-position signal provides the signal required to perform a coordinate transformation that preserves a stable perception of our environment; on the other hand, small inaccuracies within this signal seem to cause perceptual errors when the visual system is experimentally pushed to its limits.

    The efference copy also plays a role in dysfunctions of the brain in neurological or psychiatric diseases. For example, many symptoms of schizophrenia patients could be explained by an impaired efference copy mechanism and a resulting misattribution of agency for self- and externally produced actions. Following this hypothesis, the auditory hallucinations typically observed in these patients might result from erroneously assigned agency of their own thoughts. To make a detailed analysis of this potentially impaired efference copy mechanism possible, the third study of my thesis investigated eye movements of schizophrenia patients and stepped outside the limited capabilities of laboratory setups into the real world. This study showed that the results of previous laboratory studies only partly carry over to the real world. For example, schizophrenia patients usually show less accurate smooth pursuit eye movements in the laboratory than healthy controls. Yet, in the real world, when they track a stationary object with their eyes while moving towards it, there are no differences between patients and healthy controls, even though the two types of eye movement are closely related. This might be because patients were able to use additional sources of information in the real world, e.g., self-motion information, to compensate for some of their deficits under certain conditions.

    Similarly, the fourth study of my thesis showed that typical impairments of eye movements during healthy aging can be compensated for by other sources of information available under natural conditions. At the same time, this work underlined the need for eye-movement measurements in the real world as a complement to laboratory studies in order to accurately describe the visual system, the mechanisms of perception, and their interactions under natural circumstances. For example, laboratory experiments usually analyze specifically selected eye-movement parameters within a narrow range, such as saccades of a certain amplitude. However, this does not reflect everyday life, in which such parameters are typically continuous and not normally distributed.
    Furthermore, motion-selective areas in the brain might play a much bigger role in natural environments, since we generally move our head and/or our whole body. To correctly analyze these contributions to and influences on eye movements, one has to perform eye-movement studies under conditions as realistic as possible.

    The fifth study of my thesis investigated a possible application of eye-movement studies in the diagnosis of neurological diseases. We showed that basic eye-movement parameters like saccade peak velocity can be used to differentiate patients with Parkinson’s disease from patients with an atypical form of Parkinsonism, progressive supranuclear palsy. This differentiation is of particular importance since both diseases share a similar onset but have considerably different progressions and outcomes, requiring different types of therapy. An early differential diagnosis, preferably at a subclinical stage, is needed to ensure optimal treatment, to ease the symptoms and eventually even improve the prognosis. The study showed that mobile eye trackers are particularly well suited to investigating eye movements in the daily clinical routine, given their promising results in differential diagnosis and their easy, fast and reliable handling.

    In conclusion, my thesis underlines the importance of combining the different neuroscientific methods, such as psychophysics, eye-movement measurements in the real world, electrophysiology and the investigation of neuropsychiatric patients, to get a complete picture of how the brain works. The results of my thesis extend the current knowledge about how the brain processes information and perceives our environment, point towards fields of application for eye-movement measurements, and can serve as groundwork for future research.
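
    As a worked illustration of the two-signal account developed in the first two studies (perceived position = retinal position + internal eye-position estimate), the sketch below shows how a small lead in the efference-copy signal during pursuit predicts mislocalization of brief flashes in the pursuit direction. All numbers are illustrative, not the thesis's fitted values.

```python
def perceived_position(retinal_pos, true_eye_pos, eye_velocity_dps, lead_ms=30.0):
    """Craniocentric percept = retinal position + internal eye-position estimate.
    The internal estimate leads the true eye position during smooth pursuit."""
    internal_eye_pos = true_eye_pos + eye_velocity_dps * (lead_ms / 1000.0)
    return retinal_pos + internal_eye_pos

true_eye_pos = 5.0        # deg, eye-in-head during rightward pursuit
eye_velocity = 15.0       # deg/s pursuit speed
flash_world_pos = 8.0     # deg, true craniocentric flash position
retinal_pos = flash_world_pos - true_eye_pos

error = perceived_position(retinal_pos, true_eye_pos, eye_velocity) - flash_world_pos
# error = +0.45 deg: the flash is mislocalized in the direction of the
# ongoing pursuit, matching the sign of the bias described above.
```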

    The reference frame for encoding and retention of motion depends on stimulus set size

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame made the largest contribution, with some additional contribution from the retinotopic reference frame. When the number of items increased (set sizes 3 to 7), the spatiotopic reference frame alone was able to account for performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract, nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping to simplify the stimuli or resort to a nonmetric, abstract coding of motion information.
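
    The vector-decomposition logic can be made concrete as follows: during pursuit, an object's retinotopic motion equals its spatiotopic (world) motion minus the eye's motion, and a reported direction can be regressed onto the two predicted directions to estimate the contribution of each reference frame. The sketch below illustrates this with invented vectors; it is not the paper's exact analysis pipeline.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

world_motion = np.array([1.0, 0.0])          # deg/s: rightward spatiotopic motion
eye_motion = np.array([0.0, 1.0])            # deg/s: upward smooth pursuit
retinal_motion = world_motion - eye_motion   # retinotopic prediction

reported = unit(np.array([0.9, -0.25]))      # observer's reported motion direction

# Least-squares weights for: reported ~ a * spatiotopic + b * retinotopic
basis = np.column_stack([unit(world_motion), unit(retinal_motion)])
weights, *_ = np.linalg.lstsq(basis, reported, rcond=None)
a_spatiotopic, b_retinotopic = weights
# The relative size of the two weights indicates which reference frame
# dominates the percept; repeating this across set sizes traces the
# capacity limit described above.
```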

    An fMRI study of parietal cortex involvement in the visual guidance of locomotion

    Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the “far road” and “near road” mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel in which participants were required to gauge their current direction of travel (rather than directly control it). During forward egomotion, the distant road edges provided future path information, which participants used to improve their heading judgments. During backward egomotion, the road edges did not enhance performance because they no longer provided prospective information. This behavioral dissociation was reflected at the neural level, where only simulated forward travel increased activation in a region of the superior parietal lobe and the medial intraparietal sulcus. Providing only near road information during a forward heading judgment task resulted in activation in the motion complex. We propose a complementary role for the posterior parietal cortex and the motion complex in detecting future path information and maintaining current lane positioning, respectively.

    Motion tracking of iris features to detect small eye movements

    The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence or even the existence of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because current methods often rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye tracking methodology that can reliably detect small eye movements over 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. We provide a more robust, detailed record of miniature eye movements by relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as the pupil center and corneal reflection), which are sensitive to noise and drift.
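
    A minimal sketch of the velocity-first idea (function names and thresholds are assumptions, not the paper's implementation): pool frame-to-frame displacements of many tracked iris features into a robust velocity estimate, then flag samples whose speed exceeds a threshold, rather than differentiating a noisy pupil-position trace.

```python
import numpy as np

def eye_speed_dps(features_prev, features_next, fps, deg_per_px):
    """Median frame-to-frame feature displacement, converted to deg/s.
    features_*: (n_features, 2) arrays of pixel coordinates from any
    feature tracker; the median makes the estimate robust to outliers."""
    disp_px = features_next - features_prev
    median_disp = np.median(disp_px, axis=0)
    return np.linalg.norm(median_disp) * deg_per_px * fps

def detect_candidate_microsaccades(speeds_dps, threshold_dps=10.0):
    """Indices of samples whose speed exceeds a fixed velocity threshold."""
    return np.flatnonzero(np.asarray(speeds_dps) > threshold_dps)
```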

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the positions of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and the posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data on visually guided steering, obstacle avoidance, and route selection.
    Supported by the Air Force Office of Scientific Research (F4960-01-1-0397), the National Geospatial-Intelligence Agency (NMA201-01-1-2016), the National Science Foundation (SBE-0354378), and the Office of Naval Research (N00014-01-1-0624).
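
    The attractor/repeller interaction can be summarized behaviorally by a steering-dynamics rule of the kind the model simulates. The sketch below follows the standard form from the psychophysical steering literature; the coefficients are illustrative, and the full neural model derives its inputs from optic flow rather than from given angles.

```python
import numpy as np

def heading_step(heading, goal_angle, obstacles, k_g=2.0, k_o=1.5,
                 c1=0.5, c2=0.2, dt=0.05):
    """One Euler step of attractor/repeller heading dynamics (angles in rad).
    The goal attracts heading; each obstacle repels it, with influence
    falling off with angular separation (c1) and obstacle distance (c2)."""
    d_heading = -k_g * (heading - goal_angle)
    for obs_angle, obs_dist in obstacles:
        d_heading += (k_o * (heading - obs_angle)
                      * np.exp(-c1 * abs(heading - obs_angle))
                      * np.exp(-c2 * obs_dist))
    return heading + dt * d_heading

heading = 0.0                    # current heading (rad)
goal_angle = 0.3                 # goal direction (rad)
obstacles = [(0.15, 4.0)]        # (direction rad, distance m)
for _ in range(200):
    heading = heading_step(heading, goal_angle, obstacles)
# heading converges near the goal direction while detouring away from the
# obstacle -- the route-selection behavior described above.
```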

    Visual motion processing and human tracking behavior

    The accurate visual tracking of a moving object is a fundamental human skill: it reduces the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. To optimize tracking performance across time, a quick estimate of the object's global motion properties needs to be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved in terms of latency and accuracy by taking predictive cues into account, especially under variable conditions of visibility and in the presence of ambiguous retinal information. Here, we review several recent studies focusing on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with well-established paradigms from the visual perception and oculomotor literature, we provide the basis for testing theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
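
    The dynamic probabilistic-inference framing can be illustrated with a one-dimensional Kalman filter for target velocity, a generic textbook construction rather than a specific model from the reviewed studies: the pursuit system blends its prediction with noisy retinal-slip measurements according to their relative reliability, so degraded visibility shifts weight toward the (extra-retinal) prediction.

```python
import numpy as np

def kalman_step(v_est, p_est, slip_meas, q=0.5, r=4.0):
    """One predict/update cycle for target velocity (deg/s).
    q: process noise (how fast the target may change velocity),
    r: measurement noise of the retinal-slip signal."""
    p_pred = p_est + q                    # predict: uncertainty grows
    k = p_pred / (p_pred + r)             # Kalman gain = relative reliability
    v_new = v_est + k * (slip_meas - v_est)
    p_new = (1.0 - k) * p_pred
    return v_new, p_new

v, p = 0.0, 10.0                          # initial estimate and uncertainty
for slip in np.random.default_rng(1).normal(12.0, 2.0, 50):
    v, p = kalman_step(v, p, slip)
# v converges toward the true 12 deg/s; raising r (poor visibility) slows
# the update and makes the estimate lean more on prediction.
```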

    Joint Representation of Translational and Rotational Components of Self-Motion in the Parietal Cortex

    Navigating through the world involves processing complex visual inputs to extract information about self-motion relative to one's surroundings. When translations (T) and rotations (R) are present together, the velocity patterns projected onto the retina (optic flow) are a combination of the two. Since navigational tasks can be extremely varied, such as deciphering heading, tracking moving prey, or estimating one's motion trajectory, it is imperative that the visual system represent both the T and R components. Despite the importance of such joint representations, most previous studies have focused only on the representation of translations. Moreover, these studies emphasized the role of extra-retinal cues (efference copies of self-generated rotations) rather than visual cues for decomposing the optic flow. We recorded single units in the macaque ventral intraparietal area (VIP) to understand the role of visual cues in decomposing optic flow and jointly representing both the T and R components. Through the following studies, we establish that the visual system can rely on purely visual cues to derive the translational and rotational components of self-motion. We also show, for the first time, a joint representation of T and R at the level of single neurons.
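
    The geometry that makes a purely visual decomposition possible is visible in the standard pinhole-camera flow equations (a textbook construction, not the study's stimuli or analysis): the translational component of optic flow scales with inverse depth, while the rotational component is depth-independent, so flow sampled across depths constrains T and R separately.

```python
import numpy as np

def flow(x, y, Z, T, W):
    """Instantaneous image velocity at image point (x, y), focal length 1.
    Z: depth of the scene point; T = (Tx, Ty, Tz) observer translation;
    W = (wx, wy, wz) observer rotation. Returns (total flow, rotational part)."""
    Tx, Ty, Tz = T
    wx, wy, wz = W
    u_t = (-Tx + x * Tz) / Z                      # translation: scales with 1/Z
    v_t = (-Ty + y * Tz) / Z
    u_r = x * y * wx - (1 + x**2) * wy + y * wz   # rotation: independent of Z
    v_r = (1 + y**2) * wx - x * y * wy - x * wz
    return np.array([u_t + u_r, v_t + v_r]), np.array([u_r, v_r])

# Two points along the same line of sight at different depths share the
# rotational flow but not the translational flow, so motion parallax across
# depth isolates R -- the purely visual cue discussed above.
flow_near, rot = flow(0.2, 0.1, Z=2.0, T=(0.0, 0.0, 1.0), W=(0.0, 0.1, 0.0))
flow_far, _ = flow(0.2, 0.1, Z=20.0, T=(0.0, 0.0, 1.0), W=(0.0, 0.1, 0.0))
```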