241 research outputs found

    A competitive integration model of exogenous and endogenous eye movements

    We present a model of the eye movement system in which the programming of an eye movement is the result of the competitive integration of information in the superior colliculi (SC). This brain area receives input from occipital cortex, the frontal eye fields, and the dorsolateral prefrontal cortex, on the basis of which it computes the location of the next saccadic target. Two critical assumptions in the model are that cortical inputs are not only excitatory, but can also inhibit saccades to specific locations, and that the SC continue to influence the trajectory of a saccade while it is being executed. With these assumptions, we account for many neurophysiological and behavioral findings from eye movement research. Interactions within the saccade map are shown to account for effects of distractors on saccadic reaction time (SRT) and saccade trajectory, including the global effect and oculomotor capture. In addition, the model accounts for express saccades, the gap effect, saccadic reaction times for antisaccades, and recorded responses from neurons in the SC and frontal eye fields in these tasks. © The Author(s) 2010
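A minimal sketch of the competitive-integration idea described above: activity bumps for a target and a nearby distractor on a 1-D collicular motor map merge, and a centroid read-out lands between them (the global effect). All numbers, the Gaussian bump shape, and the subtractive form of top-down inhibition are illustrative assumptions, not the model's fitted components.

```python
import numpy as np

def map_activity(x, sites, amps, sigma=1.5):
    """Gaussian activity bumps on a 1-D collicular motor map."""
    a = np.zeros_like(x)
    for s, amp in zip(sites, amps):
        a += amp * np.exp(-(x - s) ** 2 / (2 * sigma ** 2))
    return a

x = np.linspace(0.0, 20.0, 2001)    # saccade amplitude axis (deg)
# Target bump at 10 deg plus a distractor bump at 12 deg; top-down
# inhibition is modelled crudely as a subtractive floor (assumed form).
act = np.clip(map_activity(x, [10.0, 12.0], [1.0, 1.0]) - 0.1, 0.0, None)
landing = float(np.sum(x * act) / np.sum(act))   # read-out: activity centroid
print(round(landing, 2))            # 11.0 -> lands between the two bumps
```

With the bumps this close their activity profiles merge into a single peak, which is why the centroid read-out averages the two locations rather than selecting one.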

    Humans Use Predictive Gaze Strategies to Target Waypoints for Steering

    A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models have a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1 participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically every 0.75 s and predictably 2 s ahead, except in 25% of the cases the waypoint at the expected location was not displayed. In these cases, there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This would not be expected based upon existing model-free online steering control models, and strongly points to a need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path following behaviours.

    Linear ensemble-coding in midbrain superior colliculus specifies the saccade kinematics

    Recently, we proposed an ensemble-coding scheme of the midbrain superior colliculus (SC) in which, during a saccade, each spike emitted by each recruited SC neuron contributes a fixed minivector to the gaze-control motor output. The size and direction of this ‘spike vector’ depend exclusively on a cell’s location within the SC motor map (Goossens and Van Opstal, in J Neurophysiol 95: 2326–2341, 2006). According to this simple scheme, the planned saccade trajectory results from instantaneous linear summation of all spike vectors across the motor map. In our simulations with this model, the brainstem saccade generator was simplified to a linear feedback system, rendering the total model (which has only three free parameters) essentially linear. Interestingly, when this scheme was applied to actually recorded spike trains from 139 saccade-related SC neurons, measured during thousands of eye movements to single visual targets, straight saccades resulted with the correct velocity profiles and nonlinear kinematic relations (‘main sequence’ properties and ‘component stretching’). Hence, we concluded that the kinematic nonlinearity of saccades resides in the spatial-temporal distribution of SC activity, rather than in the brainstem burst generator, as is generally assumed in models of the saccadic system. Here we analyze how this behaviour might emerge from this simple scheme. In addition, we show new experimental evidence in support of the proposed mechanism.
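The linear summation rule at the heart of this scheme can be sketched in a few lines: every spike from a recruited cell adds a fixed vector set by the cell's site in the motor map, and the planned displacement is the sum over all spikes. The cell names, per-spike vectors, and spike counts below are invented for illustration; they are not recorded values.

```python
import numpy as np

# Each spike contributes a fixed "spike vector" (deg) determined by the
# cell's site in the motor map; values here are hypothetical.
spike_vectors = {
    "rostral_cell": np.array([0.02, 0.00]),
    "caudal_cell":  np.array([0.01, 0.01]),
}
spike_counts = {"rostral_cell": 300, "caudal_cell": 200}

# Planned displacement = linear sum of all spike vectors across the map.
displacement = sum(n * spike_vectors[c] for c, n in spike_counts.items())
print(displacement)   # [8. 2.] -> total planned saccade vector (deg)
```

Because the read-out is a plain sum, any nonlinearity in the resulting kinematics must come from how the spikes are distributed over cells and time, which is the abstract's central claim.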

    Learning the Optimal Control of Coordinated Eye and Head Movements

    Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many respects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
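To make the idea of a cost-based eye/head split concrete, here is a minimal quadratic-effort sketch: the gaze shift G is divided between eye and head so as to minimize a weighted sum of squared contributions, with the eye clipped to its oculomotor range. The cost form, weights, and range value are hypothetical stand-ins, not the cost function or learning rule proposed in the paper.

```python
# Toy split of a gaze shift G between eye (e) and head (G - e).
def eye_contribution(G, w_eye=1.0, w_head=3.0, eye_range=35.0):
    # argmin over e of w_eye*e**2 + w_head*(G - e)**2 has this closed form:
    e = w_head * G / (w_eye + w_head)
    return max(-eye_range, min(eye_range, e))   # clip to the oculomotor range

print(eye_contribution(20.0))   # 15.0 -> the eye carries most of a small shift
print(eye_contribution(80.0))   # 35.0 -> range-limited; the head covers the rest
```

Even this toy version reproduces the qualitative head-free pattern: the head's share of the gaze shift grows with amplitude once the eye saturates.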

    System Level Assessment of Motor Control through Patterned Microstimulation in the Superior Colliculus

    We are immersed in an environment full of sensory information, and without much thought or effort we can produce orienting responses to appropriately react to different stimuli. This seemingly simple and reflexive behavior is accomplished by a very complicated set of neural operations, in which motor systems in the brain must control behavior based on populations of sensory information. The oculomotor or saccadic system is particularly well studied in this regard. Within a visual environment consisting of many potential stimuli, we control our gaze with rapid eye movements, or saccades, in order to foveate visual targets of interest. A key sub-cortical structure involved in this process is the superior colliculus (SC). The SC is a structure in the midbrain which receives visual input and in turn projects to lower-level areas in the brainstem that produce saccades. Interestingly, microstimulation of the SC produces eye movements that match the metrics and kinematics of naturally-evoked saccades. As a result, we explore the role of the SC in saccadic motor control by manually introducing distributions of activity through neural stimulation. Systematic manipulation of microstimulation patterns was used to characterize how ensemble activity in the SC is decoded to generate eye movements. Specifically, we focused on three different facets of saccadic motor control. In the first study, we examine the effective influence of microstimulation parameters on behavior to reveal characteristics of the neural mechanisms underlying saccade generation. In the second study, we experimentally verify the predictions of computational algorithms that are used to describe neural mechanisms for saccade generation. And in the third study, we assess where neural mechanisms for decoding occur within the oculomotor network in order to establish the order of operations necessary for saccade generation. The experiments assess different aspects of saccadic motor control and collectively reveal properties and mechanisms that contribute to a comprehensive understanding of signal processing in the oculomotor system.

    CRISP: a computational model of fixation durations in scene viewing

    Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations. Here, we propose a computational model (CRISP) that accounts for saccade timing and programming and thus for variations in fixation durations in scene viewing. First, timing signals are modeled as continuous-time random walks. Second, difficulties at the level of visual and cognitive processing can inhibit and thus modulate saccade timing. Inhibition generates moment-by-moment changes in the random walk’s transition rate and processing-related saccade cancellation. Third, saccade programming is completed in two stages: an initial, labile stage that is subject to cancellation and a subsequent, nonlabile stage. Several simulation studies tested the model’s adequacy and generality. An initial simulation study explored the role of cognitive factors in scene viewing by examining how fixation durations differed under different viewing task instructions. Additional simulations investigated the degree to which fixation durations were under direct moment-to-moment control of the current visual scene. The present work further supports the conclusion that fixation durations, to a certain degree, reflect perceptual and cognitive activity in scene viewing. Computational model simulations contribute to an understanding of the underlying processes of gaze control.
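The timer-plus-stages architecture described above can be sketched as a discrete-time simulation: a random walk steps toward a threshold at a transition rate that processing difficulty can inhibit, after which labile and nonlabile programming stages run before the saccade. All parameter values are illustrative, not the model's fitted ones, and the labile stage's cancellability is noted but not simulated here.

```python
import random

def fixation_duration(rate=0.05, thresh=10, labile=80, nonlabile=40,
                      rate_scale=1.0, seed=0):
    """Discrete-time sketch of a CRISP-style timer (1 ms steps): a random
    walk steps toward threshold with probability rate * rate_scale per ms;
    processing difficulty inhibits the timer via rate_scale < 1. Once the
    walk completes, a labile then a nonlabile programming stage (fixed
    durations here) run before the saccade. Returns the fixation duration."""
    rng = random.Random(seed)
    t, steps = 0, 0
    while steps < thresh:
        t += 1
        if rng.random() < rate * rate_scale:
            steps += 1
    return t + labile + nonlabile

# Inhibiting the timer (rate_scale = 0.5) lengthens fixations on average.
easy = sum(fixation_duration(seed=s) for s in range(200)) / 200
hard = sum(fixation_duration(rate_scale=0.5, seed=s) for s in range(200)) / 200
print(easy < hard)   # True
```

This captures the model's core qualitative prediction: moment-to-moment inhibition of the timer's transition rate, not a change in the programming stages, is what stretches fixation durations under harder processing.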
