56 research outputs found

    Neural Dynamics of Saccadic and Smooth Pursuit Eye Movement Coordination during Visual Tracking of Unpredictably Moving Targets

    How does the brain use eye movements to track objects that move in unpredictable directions and at unpredictable speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets, and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from the predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to such questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model’s ability to functionally explain and quantitatively simulate anatomical, neurophysiological and behavioral data about SAC-SPEM tracking. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
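
    The cooperative error-cancelling scheme described above can be caricatured in a few lines of control-loop code. The sketch below only illustrates the idea that pursuit cancels velocity error while saccades cancel position error and compensate for target motion during the saccade; it is not the neural model, and the function name, gains, threshold and saccade duration are all assumptions.

# Minimal sketch of coordinated saccade-pursuit tracking (not the neural model).
# All gains, thresholds and durations below are illustrative assumptions.
def track(target_pos, target_vel, dt=0.001, t_end=1.0,
          pursuit_tau=0.1, saccade_threshold=2.0, saccade_duration=0.05):
    """target_pos(t), target_vel(t): target position (deg) and velocity (deg/s)."""
    eye_pos, eye_vel, t = 0.0, 0.0, 0.0
    while t < t_end:
        pos_err = target_pos(t) - eye_pos      # gaze position error
        vel_err = target_vel(t) - eye_vel      # gaze velocity error (retinal slip)
        if abs(pos_err) > saccade_threshold:
            # Saccade amplitude includes the motion expected during the saccade,
            # so the eye lands on the moving target rather than behind it.
            eye_pos += pos_err + target_vel(t) * saccade_duration
            t += saccade_duration
        else:
            eye_vel += (vel_err / pursuit_tau) * dt   # pursuit cancels velocity error
            eye_pos += eye_vel * dt
            t += dt
    return eye_pos, eye_vel

# Example: step-ramp target that steps 5 deg and then moves at 10 deg/s.
print(track(lambda t: 5.0 + 10.0 * t, lambda t: 10.0))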

    Target Selection by Frontal Cortex During Coordinated Saccadic and Smooth Pursuit Eye Movement

    Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth pursuit eye movements. In particular, the saccadic and smooth pursuit systems often interact to choose the same target and to maximize its visibility through time. How do multiple brain regions, including frontal cortical areas, interact to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade-pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas MT, MST, FPA, DLPN; saccade specification, selection, and planning areas LIP, FEF, SNr, SC; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results contrast with the simplest parallel model, in which saccades and pursuit share no interactions other than common target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from the predictions of this simplest parallel model, which are explained by the current model. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
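
    As a rough intuition for how a bias can tip target selection, the toy competition below lets candidate targets accumulate activity while inhibiting one another, with an added bias standing in for an external influence such as electrical stimulation. It is a generic leaky competing accumulator, not the cortical circuit of the model above; every parameter and function name is an assumption.

import random

# Toy biased competition among candidate moving targets (illustration only).
def select_target(saliences, bias, dt=0.01, leak=1.0, inhibition=1.5,
                  noise=0.05, steps=500):
    x = [0.0] * len(saliences)                 # activity of each target node
    for _ in range(steps):
        total = sum(x)
        x = [max(0.0, xi + dt * (saliences[i] + bias[i]
                                 - leak * xi
                                 - inhibition * (total - xi))
                      + noise * random.gauss(0.0, 1.0) * dt ** 0.5)
             for i, xi in enumerate(x)]
    return max(range(len(x)), key=lambda i: x[i])   # index of the winning target

# Two equally salient moving targets; a small bias (e.g., stimulation) favors target 1.
print(select_target(saliences=[1.0, 1.0], bias=[0.0, 0.3]))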

    Visual motion processing and human tracking behavior

    The accurate visual tracking of a moving object is a fundamental human skill that reduces the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. In order to optimize tracking performance across time, a quick estimate of the object's global motion properties needs to be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved in terms of latency and accuracy by taking into account predictive cues, especially under variable conditions of visibility and in the presence of ambiguous retinal information. Here, we review several recent studies focusing on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with well-established paradigms from the visual perception and oculomotor literature, we provide the basis for testing theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
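
    The "dynamic probabilistic inference" framing can be illustrated with the simplest possible example: a one-dimensional Kalman filter that combines a prediction of target velocity with noisy retinal-slip measurements, weighting each by its reliability. This is a generic textbook filter, not a model from the reviewed studies; the variances and the function name are assumed for illustration.

import random

# Generic 1-D Kalman filter for target velocity (deg/s); illustration only.
def estimate_velocity(retinal_slip_samples, process_var=0.5, measurement_var=4.0):
    v_hat, p = 0.0, 10.0                      # prior mean and variance
    estimates = []
    for z in retinal_slip_samples:
        p += process_var                      # predict: velocity may drift between samples
        k = p / (p + measurement_var)         # gain: how much to trust the new retinal sample
        v_hat += k * (z - v_hat)              # combine prediction and measurement
        p *= (1.0 - k)                        # posterior variance shrinks with each sample
        estimates.append(v_hat)
    return estimates

# Noisy retinal-slip measurements of a target moving at 12 deg/s.
samples = [12.0 + random.gauss(0.0, 2.0) for _ in range(60)]
print(round(estimate_velocity(samples)[-1], 2))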

    Comparison of the precision of smooth pursuit in humans and head unrestrained monkeys

    Direct comparison of results from humans and monkeys is often complicated by differences in experimental conditions. In head-unrestrained macaques, we replicated the experiments of a recent study comparing human directional precision during smooth pursuit eye movements (SPEM) and saccades to moving targets (Braun & Gegenfurtner, 2016). Directional precision of human SPEM follows an exponential decay function, reaching optimal values of 1.5°-3° within 300 ms after target motion onset, whereas the precision of initial saccades to moving targets is slightly better. As in humans, we found general agreement in the development of directional precision of SPEM over time and in the differences between the directional precision of initial saccades and SPEM initiation. However, monkeys showed overall lower precision in SPEM compared to humans. This was most likely due to differences in experimental conditions, such as the stabilization of the head, which was achieved with a chin and head rest in human subjects but was absent in the unrestrained monkeys.
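
    The exponential decay of directional precision mentioned above has the form sigma(t) = sigma_inf + (sigma_0 - sigma_inf) * exp(-t / tau). The short sketch below evaluates such a curve; sigma_0, sigma_inf and tau are assumed values chosen only so that the curve settles near the reported 1.5°-3° range within roughly 300 ms, and they are not fitted values from the study.

import math

# Illustrative exponential-decay curve for directional precision (deg).
# Parameter values are assumptions, not fitted values from the study.
def directional_precision(t_ms, sigma0=15.0, sigma_inf=2.0, tau_ms=80.0):
    return sigma_inf + (sigma0 - sigma_inf) * math.exp(-t_ms / tau_ms)

for t in (50, 100, 200, 300):
    print(f"{t} ms after motion onset: {directional_precision(t):.2f} deg")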

    Asymmetric saccade reaction times to smooth pursuit

    Before initiating a saccade to a moving target, the brain must take into account the target’s eccentricity as well as its movement direction and speed. We tested how the kinematic characteristics of the target influence the time course of this oculomotor response. Participants performed a step-ramp task in which the target stepped from a central to an eccentric position and moved at constant velocity either back toward the fixation position (foveopetal) or further into the periphery (foveofugal). The step size and target speed were varied. Of particular interest were trials that exhibited an initial saccade prior to a smooth pursuit eye movement. Measured saccade reaction times were longer in the foveopetal than in the foveofugal condition. In the foveopetal (but not the foveofugal) condition, the occurrence of an initial saccade, its reaction time, and the strength of the pre-saccadic pursuit response depended on both the target’s speed and the step size. A common explanation for these results may be found in the neural mechanisms that select between oculomotor response alternatives, i.e., a saccadic or a smooth pursuit response.
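
    A simple geometric calculation helps convey why the foveopetal condition is special: because the target moves back toward fixation, the position error vanishes on its own after step_size / speed seconds, and if that happens before a saccade can be programmed, the initial saccade can be withheld. The sketch below makes this arithmetic explicit; the 200 ms latency used for comparison is an assumed typical value, not a result of the study.

# Step-ramp geometry: when does a foveopetal target cross the fovea?
# The 0.2 s comparison value is an assumed typical saccade latency.
def foveal_crossing_time(step_deg, speed_deg_s):
    """Seconds until a foveopetal target returns to the fixation position."""
    return step_deg / speed_deg_s

for step, speed in [(3.0, 20.0), (5.0, 10.0), (10.0, 10.0)]:
    t_cross = foveal_crossing_time(step, speed)
    likely = "initial saccade likely withheld" if t_cross < 0.2 else "initial saccade likely"
    print(f"step {step} deg, speed {speed} deg/s -> crossing at {t_cross:.2f} s ({likely})")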

    Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference

    Understanding the brain's capacity to encode complex visual information from a scene and to transform it into a coherent perception of 3D space and into well-coordinated motor commands is among the outstanding questions in the study of integrative brain function. Eye movement methodologies have allowed us to begin addressing these questions in increasingly naturalistic tasks, where eye and body movements are ubiquitous and the applicability of most traditional neuroscience methods is therefore restricted. This review explores foundational issues in (1) how oculomotor and motor control in lab experiments extrapolates to more complex settings and (2) how real-world gaze behavior in turn decomposes into more elementary eye movement patterns. We review the received typology of oculomotor patterns in laboratory tasks and how they map (or do not map) onto naturalistic gaze behavior. We discuss the multiple coordinate systems needed to represent visual gaze strategies, how the choice of reference frame affects the description of eye movements, and the related but conceptually distinct issue of coordinate transformations between internal representations within the brain.
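
    The reference-frame issue raised above can be made concrete with a toy coordinate transformation: a direction measured in eye-centred coordinates is expressed in world coordinates by chaining eye-in-head and head-in-world rotations. The 2-D, rotation-only sketch below (translations and torsion omitted) is purely illustrative; the angles and function names are assumptions.

import math

# Chain eye-in-head and head-in-world rotations (2-D, rotations only).
def rotate(point, angle_deg):
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A unit gaze vector defined in eye-centred coordinates...
gaze_in_eye = (1.0, 0.0)
# ...expressed in head coordinates (eye rotated 10 deg in the head)...
gaze_in_head = rotate(gaze_in_eye, 10.0)
# ...and finally in world coordinates (head rotated 30 deg in the world).
gaze_in_world = rotate(gaze_in_head, 30.0)
print(tuple(round(c, 3) for c in gaze_in_world))   # same direction, different frame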

    Gaze-orientation during transient occlusion

    Fast-moving objects are often transiently occluded in our normal surroundings as they pass behind other surfaces and objects. The consequent loss of drive from visual feedback can be compensated by extra-retinal input. Evidence from behavioural studies indicates that gaze orientation during transient occlusion is not simply a reflexive response but can also be predictive of the upcoming motion. Indeed, while smooth pursuit eye velocity often falls below target velocity during a transient occlusion, it increases prior to object reappearance in order to reduce retinal slip. Moreover, the smooth response is combined with saccadic eye movements that match eye displacement to object displacement and thus minimize position error at object reappearance. Comparisons of conditions that require fixation or pursuit suggest that maintaining gaze orientation during transient occlusion is the habitual response and can facilitate both spatial and temporal estimation. Interconnected areas of the frontal and parietal cortex have been shown to be active during pursuit of occluded object motion, and are thus thought to be involved in the control of gaze orientation as well as in representing object motion. Future work should determine whether expertise in sport mediates oculomotor control, and thereby the perception of relevant information, to support expert performance.
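
    The combination of a sagging pursuit response and a compensatory saccade at reappearance reduces to simple arithmetic: if the occluded target is extrapolated at constant velocity while eye velocity runs at only a fraction of target velocity, the position error accumulated during the occlusion is what the catch-up saccade must cover. The sketch below assumes a constant pursuit gain during occlusion (0.6 here) purely for illustration; real eye velocity typically sags and then recovers before reappearance.

# Accumulated position error over a transient occlusion (illustration only).
# The constant occlusion-period pursuit gain of 0.6 is an assumption.
def catch_up_saccade_amplitude(target_vel_deg_s, occlusion_s, pursuit_gain=0.6):
    target_displacement = target_vel_deg_s * occlusion_s
    eye_displacement = pursuit_gain * target_vel_deg_s * occlusion_s
    return target_displacement - eye_displacement      # deg to cover at reappearance

print(catch_up_saccade_amplitude(target_vel_deg_s=15.0, occlusion_s=0.4))  # 2.4 deg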

    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? The speed and accuracy of perceptual decisions covary with certainty in the input and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between the Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed duration and reaction time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement. National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
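
    For readers who want a concrete, minimal picture of evidence accumulation to a bound, the sketch below implements a generic leaky competing accumulator for a two-choice motion decision, in which coherence sets the difference in mean input and the first accumulator to reach threshold determines the choice and reaction time. It is not the laminar Retina/LGN-V1-MT-MST-LIP model described above, and all parameter values are assumptions.

import random

# Generic two-choice leaky competing accumulator (illustration only; not the model above).
def decide(coherence, dt=0.001, threshold=1.0, leak=2.0, inhibition=3.0,
           drive=3.0, noise=0.4, max_t=2.0):
    x = [0.0, 0.0]                              # evidence for the two motion directions
    t = 0.0
    inputs = (drive * (1.0 + coherence), drive * (1.0 - coherence))
    while max(x) < threshold and t < max_t:
        for i in (0, 1):
            dx = (inputs[i] - leak * x[i] - inhibition * x[1 - i]) * dt
            x[i] = max(0.0, x[i] + dx + noise * random.gauss(0.0, 1.0) * dt ** 0.5)
        t += dt
    choice = 0 if x[0] >= x[1] else 1           # 0 = the more coherent direction
    return choice, t                            # choice and reaction time (s)

print(decide(coherence=0.25))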