
    Linear ensemble-coding in midbrain superior colliculus specifies the saccade kinematics

    Recently, we proposed an ensemble-coding scheme of the midbrain superior colliculus (SC) in which, during a saccade, each spike emitted by each recruited SC neuron contributes a fixed minivector to the gaze-control motor output. The size and direction of this ‘spike vector’ depend exclusively on a cell’s location within the SC motor map (Goossens and Van Opstal, J Neurophysiol 95: 2326–2341, 2006). According to this simple scheme, the planned saccade trajectory results from instantaneous linear summation of all spike vectors across the motor map. In our simulations with this model, the brainstem saccade generator was simplified to a linear feedback system, rendering the total model (which has only three free parameters) essentially linear. Interestingly, when this scheme was applied to actually recorded spike trains from 139 saccade-related SC neurons, measured during thousands of eye movements to single visual targets, straight saccades resulted with the correct velocity profiles and nonlinear kinematic relations (‘main-sequence’ properties and ‘component stretching’). Hence, we concluded that the kinematic nonlinearity of saccades resides in the spatial-temporal distribution of SC activity, rather than in the brainstem burst generator, as is generally assumed in models of the saccadic system. Here we analyze how this behaviour might emerge from this simple scheme. In addition, we will show new experimental evidence in support of the proposed mechanism.
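
    As a concrete reading of the summation rule, here is a minimal Python sketch (our illustration, not the authors' code; the array names, the 1-ms binning, and the toy scaling are assumptions): each neuron owns a fixed mini-vector set by its site in the motor map, every spike deposits that vector, and integrating the running sum yields the planned trajectory.

        import numpy as np

        def decode_trajectory(spike_times, mini_vectors, dt=1e-3, t_max=0.1):
            """Sum fixed per-spike 'mini-vectors' into a 2-D eye trajectory.

            spike_times  : list of 1-D arrays, spike times (s), one per neuron
            mini_vectors : (n_neurons, 2) array, fixed (x, y) contribution of
                           one spike, set by the cell's site in the motor map
            """
            t = np.arange(0.0, t_max, dt)
            velocity = np.zeros((t.size, 2))
            for times, m in zip(spike_times, mini_vectors):
                idx = np.floor(times / dt).astype(int)
                idx = idx[idx < t.size]
                # every spike adds the same fixed vector, scaled to deg/s
                np.add.at(velocity, idx, m / dt)
            trajectory = np.cumsum(velocity, axis=0) * dt  # integrate to position
            return t, velocity, trajectory

    Because the decoder is a plain sum, any nonlinear main-sequence behavior must already be present in when and how often the cells fire, which is the central claim above.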

    Optimal Control of Saccades by Spatial-Temporal Activity Patterns in the Monkey Superior Colliculus

    A major challenge in computational neurobiology is to understand how populations of noisy, broadly tuned neurons produce accurate goal-directed actions such as saccades. Saccades are high-velocity eye movements with stereotyped, nonlinear kinematics: their duration increases with amplitude, while peak eye velocity saturates for large saccades. Recent theories suggest that these characteristics reflect a deliberate strategy that optimizes a speed-accuracy tradeoff in the presence of signal-dependent noise in the neural control signals. Here we argue that the midbrain superior colliculus (SC), a key sensorimotor interface that contains a topographically organized map of saccade vectors, is in an ideal position to implement such an optimization principle. Most models attribute the nonlinear saccade kinematics to saturation in the brainstem pulse generator downstream from the SC, but there is little data to support this assumption. We now present new neurophysiological evidence for an alternative scheme, which proposes that these properties reside in the spatial-temporal dynamics of SC activity. As predicted by this scheme, we found a remarkably systematic organization in the burst properties of saccade-related neurons along the rostral-to-caudal (i.e., amplitude-coding) dimension of the SC motor map: peak firing rates systematically decrease for cells encoding larger saccades, while burst durations and skewness increase, suggesting that this spatial gradient underlies the increase in duration and skewness of the eye-velocity profiles with amplitude. We also show that all neurons in the recruited population synchronize their burst profiles, indicating that the burst timing of each cell is determined by the planned saccade vector in which it participates, rather than by its anatomical location. Together with the observation that saccade-related SC cells indeed show signal-dependent noise, this precisely tuned organization of SC burst activity strongly supports the notion of an optimal motor-control principle embedded in the SC motor map, as it fully accounts for the straight trajectories and kinematic nonlinearity of saccades.
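
    To see how such a gradient could by itself generate main-sequence kinematics, consider a toy calculation (a sketch under assumed numbers; the gamma-shaped bursts, the fixed 20-spike count, and all constants below are ours, not measured values). Peak rate falls and burst duration and skewness grow with amplitude, and if eye velocity tracks the population burst, peak velocity saturates while duration keeps increasing:

        import numpy as np
        from scipy.stats import gamma

        def burst_profile(R, t):
            """Hypothetical burst at the map site encoding amplitude R (deg)."""
            a = 8.0 - 0.15 * R               # lower shape -> more skewed burst
            mean_dur = 0.015 + 0.002 * R     # burst lengthens with amplitude (s)
            n_spikes = 20.0                  # fixed spike count per recruited cell
            return n_spikes * gamma.pdf(t, a=a, scale=mean_dur / a)  # spikes/s

        t = np.linspace(0.0, 0.3, 3000)
        for R in (5.0, 10.0, 20.0, 30.0):
            rate = burst_profile(R, t)
            v = R * rate / 20.0              # rate integrates to 20 spikes, so
            print(f"R={R:4.1f} deg: "        # the velocity integrates to R
                  f"peak rate {rate.max():5.0f} sp/s, "
                  f"peak velocity {v.max():5.0f} deg/s")

    With these arbitrary settings, peak firing rate falls several-fold across the amplitude range while peak eye velocity first rises and then levels off: velocity saturation emerges with no downstream nonlinearity at all.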

    Responses of single motor units in human masseter to transcranial magnetic stimulation of either hemisphere

    The corticobulbar inputs to single masseter motoneurons from the contra- and ipsilateral motor cortex were examined using focal transcranial magnetic stimulation (TMS) with a figure-of-eight stimulating coil. Fine-wire electrodes were inserted into the masseter muscle of six subjects, and the responses of 30 motor units were examined. All were tested with contralateral TMS, and 87 % showed a short-latency excitation in the peristimulus time histogram at 7.0 ± 0.3 ms. The response was a single peak of 1.5 ± 0.2 ms duration, consistent with monosynaptic excitation via a single D- or I1-wave volley elicited by the stimulus. Increased TMS intensity produced a higher response probability (n = 13, paired t test, P < 0.05) but did not affect response latency. Of the remaining motor units tested with contralateral TMS, 7 % did not respond at the intensities tested, and 7 % showed reduced firing probability without any preceding excitation. Sixteen of these motor units were also tested with ipsilateral TMS, and four (25 %) showed short-latency excitation at 6.7 ± 0.6 ms, with a duration of 1.5 ± 0.3 ms. The latency and duration of the excitatory peaks for these four motor units did not differ significantly between ipsilateral and contralateral TMS (paired t tests, P > 0.05). Of the motor units tested with ipsilateral TMS, 56 % responded with a reduced firing probability without a preceding excitation, and 19 % did not respond. These data suggest that masseter motoneurons receive monosynaptic input from the motor cortex that is asymmetrical between the hemispheres, with most low-threshold motoneurons receiving short-latency excitatory input from the contralateral hemisphere only.
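
    The short-latency peaks reported above are read out of peristimulus time histograms. Below is a minimal sketch of that style of analysis (our illustration; the array names, window, bin width, and the mean + 3 SD criterion are assumptions, not the study's exact procedure):

        import numpy as np

        def peristimulus_histogram(spike_times, stim_times,
                                   window=(-0.020, 0.040), bin_s=0.0002):
            """Histogram of unit spike times relative to each TMS pulse."""
            edges = np.arange(window[0], window[1] + bin_s / 2, bin_s)
            rel = np.concatenate([spike_times - s for s in stim_times])
            rel = rel[(rel >= window[0]) & (rel < window[1])]
            counts, _ = np.histogram(rel, bins=edges)
            return counts, edges

        def excitation_latency(counts, edges, z=3.0):
            """First post-stimulus bin whose count exceeds the pre-stimulus
            mean + z standard deviations (an assumed criterion)."""
            pre = counts[edges[:-1] < 0.0]
            threshold = pre.mean() + z * pre.std()
            hot = np.where((edges[:-1] >= 0.0) & (counts > threshold))[0]
            return edges[hot[0]] if hot.size else None

    A 1.5-ms-wide peak near 7 ms, as in 87 % of the units here, would appear as a brief run of supra-threshold bins; reduced firing probability without a preceding excitation would instead appear as a post-stimulus trough.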

    Laughter Animation Generation

    Laughter is an important communicative and social signal in human-human interaction. It involves the whole body, from lip motion and facial expression to rhythmic body and shoulder movement, and it may convey a wide range of meanings (extreme happiness, social bonding, politeness, irony, etc.). To enhance human-machine interactions, efforts have been made to endow embodied conversational agents (ECAs) with laughing capabilities. Recently, motion-capture technologies have been applied to record laughter behaviors, including facial expressions and body movements, making it possible to investigate the temporal relationships among laughter behaviors in detail. Based on the available data, researchers have developed automatic generation models of laughter animation. These models control the multimodal behaviors of ECAs, including lip motions, upper facial expressions, head rotations, shoulder shaking, and torso movements. The underlying idea of these works is a statistical framework able to automatically capture the correlation between laughter audio and multimodal behaviors. In the synthesis phase, the captured correlation is rendered into synthesized animations driven by the laughter audio given as input. This chapter reviews existing work on the automatic generation of laughter animation.
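
    To make the train-then-render structure concrete, here is a deliberately simplified sketch (every name and the linear model are our assumptions; published systems typically use HMMs or neural networks on richer features): fit a map from per-frame laughter-audio features to motion-capture animation parameters, then drive an agent from new audio.

        import numpy as np

        def fit_audio_to_motion(X, Y):
            """Least-squares map from audio features X (frames x feats) to
            animation parameters Y (frames x params), e.g. head rotation,
            lip opening, shoulder displacement."""
            Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
            W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
            return W

        def synthesize_motion(W, X_new):
            """Render animation parameters for unseen laughter audio."""
            Xb = np.hstack([X_new, np.ones((X_new.shape[0], 1))])
            return Xb @ W

    The generation models reviewed in the chapter follow the same two phases, replacing the linear map with models that also capture temporal dynamics and smoothing the output before it drives the agent's face and body.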