
    Linear ensemble-coding in midbrain superior colliculus specifies the saccade kinematics

    Recently, we proposed an ensemble-coding scheme of the midbrain superior colliculus (SC) in which, during a saccade, each spike emitted by each recruited SC neuron contributes a fixed minivector to the gaze-control motor output. The size and direction of this ‘spike vector’ depend exclusively on a cell’s location within the SC motor map (Goossens and Van Opstal, in J Neurophysiol 95: 2326–2341, 2006). According to this simple scheme, the planned saccade trajectory results from instantaneous linear summation of all spike vectors across the motor map. In our simulations with this model, the brainstem saccade generator was simplified to a linear feedback system, rendering the total model (which has only three free parameters) essentially linear. Interestingly, when this scheme was applied to actually recorded spike trains from 139 saccade-related SC neurons, measured during thousands of eye movements to single visual targets, it produced straight saccades with the correct velocity profiles and nonlinear kinematic relations (‘main sequence’ properties and ‘component stretching’). Hence, we concluded that the kinematic nonlinearity of saccades resides in the spatiotemporal distribution of SC activity, rather than in the brainstem burst generator, as is generally assumed in models of the saccadic system. Here we analyze how this behaviour might emerge from this simple scheme. In addition, we present new experimental evidence in support of the proposed mechanism.
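
    The decoding scheme itself is simple enough to sketch in a few lines. The following minimal Python sketch sums fixed spike vectors over recorded spike trains, using the standard log-polar afferent mapping of the SC motor map (Ottes et al., 1986) to assign each site its movement vector; the parameter values, the scaling of each minivector by a fixed expected spike count, and the function names are our illustrative assumptions, not the paper's three-parameter model.

        import numpy as np

        # Standard log-polar SC motor-map parameters (illustrative values).
        A, Bu, Bv = 3.0, 1.4, 1.8  # deg, mm, mm/rad

        def efferent(u, v):
            """Inverse (efferent) mapping: map site (u, v) in mm -> saccade vector in deg."""
            w = A * np.exp(u / Bu + 1j * v / Bv) - A
            return np.array([w.real, w.imag])  # horizontal, vertical components

        def decode_trajectory(sites, spike_trains, n_spikes_opt=20, dt=1e-3, dur=0.1):
            """Linear ensemble decoding: each spike adds a fixed minivector.

            sites        : list of (u, v) map coordinates, one per neuron
            spike_trains : list of spike-time arrays (s), one per neuron
            n_spikes_opt : assumed expected spike count per cell for its optimal
                           saccade, used here to scale each minivector
            """
            edges = np.arange(0.0, dur + dt, dt)
            disp = np.zeros((edges.size - 1, 2))
            for (u, v), spikes in zip(sites, spike_trains):
                mini = efferent(u, v) / n_spikes_opt     # fixed spike vector of this cell
                counts, _ = np.histogram(spikes, bins=edges)
                disp += counts[:, None] * mini           # instantaneous linear summation
            return edges[:-1], np.cumsum(disp, axis=0)   # cumulative gaze displacement

    Because every operation above is linear, any nonlinearity in the decoded kinematics must come from the spatial-temporal structure of the spike trains themselves, which is the point the abstract argues.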

    Learning the Optimal Control of Coordinated Eye and Head Movements

    Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models for the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many respects, including the relationship between amplitude, duration and peak velocity under head-restrained conditions, and the relative contributions of eye and head to the total gaze shift under head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism in a unified control scheme for coordinated eye and head movements.
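
    The abstract does not spell out the cost function or the adaptation rule, so the following Python sketch is only a schematic reading of the idea: a toy open-loop controller splits a gaze shift between eye and head, and a local trial-by-trial rule keeps random parameter perturbations that lower an assumed accuracy-plus-effort cost. The plant, the cost weights, and the update rule are all our assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def gaze_shift(params, target):
            # Toy open-loop plant: split the desired gaze shift between eye
            # and head, with the eye saturating at its oculomotor range.
            eye_gain, head_gain = params
            eye = np.clip(eye_gain * target, -35.0, 35.0)
            head = head_gain * target
            return eye, head

        def cost(params, target, w_effort=0.02):
            # Hypothetical cost: squared gaze error plus a head-effort penalty.
            eye, head = gaze_shift(params, target)
            return (target - (eye + head)) ** 2 + w_effort * head ** 2

        def local_update(params, target, sigma=0.05):
            # Incremental local learning: try a small random perturbation and
            # keep it if this trial's cost decreases (a stand-in for the
            # model's adaptation mechanism, which the abstract leaves open).
            trial = params + sigma * rng.standard_normal(2)
            return trial if cost(trial, target) < cost(params, target) else params

        params = np.array([1.0, 0.0])          # start: the eye does everything
        for _ in range(5000):                  # repeated orienting trials
            params = local_update(params, rng.uniform(-80.0, 80.0))

    Under these assumptions the head contribution emerges from learning rather than being hand-tuned: large gaze shifts exceed the eye's range, so reducing the cost forces the head gain up, qualitatively matching the head-free behavior the abstract describes.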

    Influence of head position on the spatial representation of acoustic targets

    … possibility, we exploited the unique property of the auditory system that sound elevation is extracted independently, from pinna-related spectral cues. In the absence of such cues, accurate elevation detection is not possible, even when head movements are made. This was shown in a second experiment, in which pure tones were localized at a fixed elevation that depended on the tone frequency rather than on the actual target elevation, under both head-fixed and head-free conditions. To test, in a third experiment, whether the perceived elevation of tones relies on a head-fixed or space-fixed target representation, eye movements were elicited toward pure tones while subjects kept their head in different vertical positions. It appeared that each tone was localized at a fixed, frequency-dependent elevation in space that shifted to a limited extent with changes in head elevation. Hence, information about head position is used under static conditions too. Interestingly, the influence of head position also dep…
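
    One quantitative way to read the third experiment is as a regression of perceived elevation on head elevation: a slope of 0 would indicate a purely space-fixed representation, a slope of 1 a purely head-fixed one, and the reported "limited" shift an intermediate gain. The Python sketch below illustrates this with made-up numbers; the gain, intercept, and data values are assumptions, not results from the study.

        import numpy as np

        def perceived_elevation(eps_f, head_elev, g):
            # Perceived tone elevation in space as a function of head elevation.
            # eps_f : fixed, frequency-dependent elevation of the tone (deg)
            # g     : head-position gain; g = 0 -> space-fixed, g = 1 -> head-fixed,
            #         0 < g < 1 -> the partial shift described in the abstract
            return eps_f + g * head_elev

        # Estimating g and eps_f by linear regression on hypothetical data:
        head = np.array([-20.0, 0.0, 20.0])    # head postures (deg), illustrative
        resp = np.array([8.0, 12.0, 16.0])     # perceived elevations (deg), illustrative
        g, eps_f = np.polyfit(head, resp, 1)   # slope = gain, intercept = eps_f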