865 research outputs found

    Neural Dynamics of Saccadic and Smooth Pursuit Eye Movement Coordination during Visual Tracking of Unpredictably Moving Targets

    How does the brain use eye movements to track objects that move in unpredictable directions and at unpredictable speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets, and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to such questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model’s ability to functionally explain and quantitatively simulate anatomical, neurophysiological and behavioral data about SAC-SPEM tracking. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
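The cooperative error cancellation described above can be illustrated with a minimal tracking loop (a sketch, not the paper's model): pursuit continuously cancels velocity error, while a saccade fires when position error crosses a threshold and cancels it in one jump. All gains and thresholds here are illustrative; in the model, saccade amplitude would also compensate for target motion during the saccade, a term omitted here because the jump is instantaneous.

```python
# Minimal saccade-pursuit tracking sketch; all parameters are illustrative.
dt = 0.001                # 1 ms time step
pursuit_gain = 0.9        # fraction of the velocity error cancelled per step
saccade_threshold = 2.0   # deg of position error that triggers a saccade

eye_pos, eye_vel = 0.0, 0.0
target_pos, target_vel = 5.0, 10.0   # deg, deg/s

errors = []
for _ in range(1000):                # simulate 1 s of tracking
    target_pos += target_vel * dt
    eye_vel += pursuit_gain * (target_vel - eye_vel)   # smooth pursuit: velocity error
    pos_error = target_pos - eye_pos
    if abs(pos_error) > saccade_threshold:
        eye_pos += pos_error                           # saccade: refoveate the target
    eye_pos += eye_vel * dt
    errors.append(target_pos - eye_pos)

print(f"final gaze error: {errors[-1]:.3f} deg")
```

With either system disabled, the residual error grows; together they keep the target near the fovea, which is the qualitative point of the abstract.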

    Target Selection by Frontal Cortex During Coordinated Saccadic and Smooth Pursuit Eye Movement

    Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth pursuit eye movements. In particular, the saccadic and smooth pursuit systems interact so that they often choose the same target, and maximize its visibility through time. How do multiple brain regions, including frontal cortical areas, interact to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade-pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas MT, MST, FPA, DLPN; saccade specification, selection, and planning areas LIP, FEF, SNr, SC; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results contrast with the simplest parallel model, which assumes no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from predictions of the simplest parallel model, which are explained by the current model. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    A competitive integration model of exogenous and endogenous eye movements

    We present a model of the eye movement system in which the programming of an eye movement is the result of the competitive integration of information in the superior colliculi (SC). This brain area receives input from occipital cortex, the frontal eye fields, and the dorsolateral prefrontal cortex, on the basis of which it computes the location of the next saccadic target. Two critical assumptions in the model are that cortical inputs are not only excitatory, but can also inhibit saccades to specific locations, and that the SC continue to influence the trajectory of a saccade while it is being executed. With these assumptions, we account for many neurophysiological and behavioral findings from eye movement research. Interactions within the saccade map are shown to account for effects of distractors on saccadic reaction time (SRT) and saccade trajectory, including the global effect and oculomotor capture. In addition, the model accounts for express saccades, the gap effect, saccadic reaction times for antisaccades, and recorded responses from neurons in the SC and frontal eye fields in these tasks. © The Author(s) 2010
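The competitive-integration idea can be sketched with a toy accumulator race (illustrative only, not the authors' implementation): candidate locations accumulate excitatory input, inhibit one another, and a saccade launches when any node crosses threshold, with its endpoint taken as the activity-weighted average of locations. A distractor active at launch then both lengthens reaction time and pulls the endpoint toward itself, the "global effect".

```python
import numpy as np

# Toy competitive integration over candidate saccade locations.
# All parameters are illustrative assumptions.
def race(inputs, locations, inhibition=0.4, leak=0.1, threshold=1.0,
         dt=0.001, rng=np.random.default_rng(0)):
    a = np.zeros(len(inputs))                     # accumulator activities
    for t in range(5000):                         # up to 5 s
        total = a.sum()
        da = (inputs - leak * a - inhibition * (total - a)
              + rng.normal(0, 0.01, len(a)))      # excitation, leak, competition
        a = np.maximum(a + dt * da, 0.0)
        if a.max() >= threshold:                  # saccade is launched
            endpoint = float(np.average(locations, weights=a))
            return t * dt, endpoint               # (reaction time s, landing deg)
    return None, None

rt_single, end_single = race(np.array([1.5]), np.array([10.0]))
rt_pair, end_pair = race(np.array([1.5, 1.2]), np.array([10.0, 14.0]))
print(rt_single, end_single)   # target alone: lands at 10 deg
print(rt_pair, end_pair)       # with distractor: slower, endpoint pulled toward 14
```

The same competition, with inhibitory cortical input added at a distractor's location, would instead suppress that node, which is how the model's inhibitory-input assumption changes trajectories and latencies.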

    Sensorimotor maps can be dynamically calibrated using an adaptive-filter model of the cerebellum

    Substantial experimental evidence suggests the cerebellum is involved in calibrating sensorimotor maps. Consistent with this involvement is the well-known, but little understood, massive cerebellar projection to maps in the superior colliculus. Map calibration would be a significant new role for the cerebellum given the ubiquity of map representations in the brain, but how it could perform such a task is unclear. Here we investigated a dynamic method for map calibration, based on electrophysiological recordings from the superior colliculus, that used a standard adaptive-filter cerebellar model. The method proved effective for complex distortions of both unimodal and bimodal maps, and also for predictive map-based tracking of moving targets. These results provide the first computational evidence for a novel role for the cerebellum in dynamic sensorimotor map calibration, of potential importance for coordinate alignment during ongoing motor control, and for map calibration in future biomimetic systems. This computational evidence also provides testable experimental predictions concerning the role of the connections between cerebellum and superior colliculus in previously observed dynamic coordinate transformations.
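The core of an adaptive-filter cerebellar stage is an LMS-style learning rule driven by residual error. A minimal sketch (names, distortion, and learning rates are illustrative assumptions, not taken from the paper) shows such a stage learning to cancel a fixed offset in a miscalibrated map:

```python
import numpy as np

# Adaptive-filter (LMS) sketch: learn a correction that cancels a fixed
# distortion of a 2-D sensorimotor map. All values are illustrative.
rng = np.random.default_rng(1)
true_distortion = np.array([2.0, -1.0])   # deg, offset in the miscalibrated map
weights = np.zeros((2, 2))                # filter weights on the map output
bias = np.zeros(2)
lr_w, lr_b = 1e-4, 0.05

for _ in range(2000):
    target = rng.uniform(-10, 10, 2)            # desired gaze location
    map_output = target + true_distortion       # what the distorted map commands
    correction = weights @ map_output + bias    # learned "cerebellar" correction
    error = (map_output - correction) - target  # residual error after the movement
    # LMS rule: adjust until the error decorrelates from the filter's inputs
    weights += lr_w * np.outer(error, map_output)
    bias += lr_b * error

print(np.round(bias, 2))   # bias should approach the distortion [2., -1.]
```

Because the teaching signal is the post-movement residual, the same loop keeps the map calibrated if the distortion drifts, which is the dynamic aspect the abstract emphasizes.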

    Learning the Optimal Control of Coordinated Eye and Head Movements

    Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
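The idea of minimizing a cost by local incremental learning can be shown in miniature (a sketch under assumed costs, not the paper's controller): split a gaze shift G between eye and head by descending a simple quadratic cost that penalizes eye eccentricity more than head movement.

```python
# Split a gaze shift between eye (e) and head (G - e) by incremental
# gradient descent on an assumed quadratic cost; all values illustrative.
def learn_split(G, eye_cost=2.0, head_cost=1.0, lr=0.01, steps=1000):
    e = G / 2.0                                            # initial guess
    for _ in range(steps):
        grad = 2 * eye_cost * e - 2 * head_cost * (G - e)  # d(cost)/de
        e -= lr * grad                                     # local learning step
    return e, G - e    # optimum is e = G * head_cost / (eye_cost + head_cost)

eye, head = learn_split(60.0)
print(eye, head)   # eye carries a third of the 60 deg shift, head the rest
```

The same pattern scales to the paper's richer cost (duration, accuracy, effort): each trial nudges the controller parameters down the local gradient rather than solving the optimization in closed form.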

    Integrating Brain and Biomechanical Models—A New Paradigm for Understanding Neuro-muscular Control

    To date, realistic models of how the central nervous system governs behavior have been restricted in scope to the brain, brainstem or spinal column, as if these existed as disembodied organs. Further, the model is often exercised in relation to an in vivo physiological experiment with input comprising an impulse, a periodic signal or constant activation, and output as a pattern of neural activity in one or more neural populations. Any link to behavior is inferred only indirectly via these activity patterns. We argue that to discover the principles of operation of neural systems, it is necessary to express their behavior in terms of physical movements of a realistic motor system, and to supply inputs that mimic sensory experience. To do this with confidence, we must connect our brain models to neuro-muscular models and provide relevant visual and proprioceptive feedback signals, thereby closing the loop of the simulation. This paper describes an effort to develop just such an integrated brain and biomechanical system using a number of pre-existing models. It describes a model of the saccadic oculomotor system incorporating a neuromuscular model of the eye and its six extraocular muscles. The position of the eye determines how illumination of a retinotopic input population projects information about the location of a saccade target into the system. A pre-existing saccadic burst generator model was incorporated into the system, which generated motoneuron activity patterns suitable for driving the biomechanical eye. The model was demonstrated to make accurate saccades to a target luminance under a set of environmental constraints. Challenges encountered in the development of this model showed the importance of this integrated modeling approach. Thus, we exposed shortcomings in individual model components which were only apparent when these were supplied with the more plausible inputs available in a closed loop design. 
Consequently, we were able to suggest missing functionality that the system would require to reproduce more realistic behavior. The construction of such closed-loop animal models constitutes a new paradigm of computational neurobehavior and promises a more thoroughgoing approach to our understanding of the brain’s function as a controller for movement and behavior.
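The closed-loop structure argued for above can be sketched in a few lines (a toy, with all values illustrative): a pulse-step command drives a first-order "eyeball" plant, and the retinal input, the target's position relative to the eye, closes the loop.

```python
# Toy closed-loop saccade: pulse-step command driving a first-order plant,
# with retinal error as the sensory input. All parameters are illustrative.
dt = 0.001
tau = 0.15                  # s, plant time constant (eye plus muscle)
eye = 0.0                   # deg, current eye position
target = 8.0                # deg, fixed luminance target
pulse_gain, step_gain = 8.0, 1.0

for _ in range(600):        # 600 ms of simulation
    retinal_error = target - eye          # what the "retina" reports
    pulse = pulse_gain * retinal_error    # phasic burst: drives the eye quickly
    step = step_gain * target             # tonic command: holds eccentric gaze
    drive = pulse + step
    eye += dt * (drive - eye) / tau       # first-order biomechanical plant

print(f"eye position after 600 ms: {eye:.2f} deg")
```

In open-loop testing the pulse alone looks adequate; only with the plant and retinal feedback in the loop does the need for the sustaining step command (and its calibration) become visible, which is the methodological point of the paper.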

    Theoretical and empirical investigation of the tau-coupling theory


    Subcortical Control of Visual Fixation
