1,956 research outputs found

    A kinematic model for 3-D head-free gaze-shifts

    Development of the Vertebral Joints (C3 through T2) in Man

    Computational Study of Multisensory Gaze-Shift Planning

    In response to the appearance of multimodal events in the environment, we often make a gaze-shift to focus attention and gather more information. Planning such a gaze-shift involves three stages: 1) determining the spatial location for the gaze-shift, 2) determining when to initiate the gaze-shift, and 3) computing a coordinated eye-head motion to execute it. A large number of experimental investigations have examined the nature of multisensory and oculomotor information processing at each of these three levels separately. In this thesis, we approach the problem as a single executive program and propose computational models for all three stages in a unified framework. The first, spatial problem is viewed as inferring the cause of cross-modal stimuli: whether or not they originate from a common source (chapter 2). We propose an evidence-accumulation decision-making framework and introduce a spatiotemporal similarity measure as the criterion for deciding whether to integrate the multimodal information. The variability in reports of sameness observed in experiments is replicated as a function of the spatial and temporal patterns of target presentation. To solve the second, temporal problem, a model is built upon the first decision-making structure (chapter 3). We introduce an accumulating measure of confidence in the chosen causal structure as the criterion for initiating action, and propose that the gaze-shift is executed when this confidence measure reaches a threshold. The experimentally observed variability of reaction time is simulated as a function of the spatiotemporal and reliability features of the cross-modal stimuli. The third, motor problem is solved downstream of the first two networks (chapter 4). We propose a kinematic strategy that coordinates eye-in-head and head-on-shoulder movements, in both spatial and temporal dimensions, to shift the line of sight toward the inferred position of the goal. The variability in the contributions of eye and head movements to the gaze-shift is modeled as a function of the retinal error and the initial orientations of the eyes and head. The three models should be viewed as parts of a single executive program that integrates perceptual and motor processing across time and space.
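
    A minimal sketch of the two decision stages described above, under stated assumptions: a Gaussian spatiotemporal similarity measure and a bounded noisy accumulator whose threshold crossing triggers the gaze-shift. The function names, parameter values, and noise model are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def spatiotemporal_similarity(dx, dt, sigma_x=5.0, sigma_t=0.05):
    """Illustrative similarity between two cross-modal events: high when the
    spatial gap dx (deg) and temporal gap dt (s) are both small. The Gaussian
    form and the tuning widths sigma_x, sigma_t are assumptions."""
    return np.exp(-0.5 * (dx / sigma_x) ** 2 - 0.5 * (dt / sigma_t) ** 2)

def accumulate_to_threshold(drift, threshold=1.0, noise=0.2, dt=1e-3, max_t=1.0, seed=None):
    """Generic bounded evidence accumulation: integrate noisy evidence until a
    threshold is crossed. The crossing time stands in for the reaction-time
    variability described in the abstract; a positive crossing stands in for
    a 'common source' decision."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= threshold:
            return t, x > 0
    return max_t, False  # no commitment within the trial

# Nearby, nearly synchronous stimuli yield high similarity, hence a strong
# drift toward the 'common source' bound and a fast simulated gaze-shift.
sim = spatiotemporal_similarity(dx=2.0, dt=0.01)
rt, common = accumulate_to_threshold(drift=4.0 * (sim - 0.5), seed=1)
print(f"similarity={sim:.2f}  RT={rt * 1000:.0f} ms  common source={common}")
```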

    How do treadmill speed and terrain visibility influence neuromuscular control of guinea fowl locomotion?

    Locomotor control mechanisms must flexibly adapt to both anticipated and unexpected terrain changes to maintain movement and avoid a fall. Recent studies revealed that ground birds alter movement in advance of overground obstacles, but not treadmill obstacles, suggesting context-dependent shifts in the use of anticipatory control. We hypothesized that differences between overground and treadmill obstacle negotiation relate to differences in visual sensory information, which influence the ability to execute anticipatory manoeuvres. We explored two possible explanations: (1) previous treadmill obstacles may have been visually imperceptible, as they were low contrast against the tread, and (2) treadmill obstacles are visible for a shorter time than runway obstacles, limiting the time available for visuomotor adjustments. To investigate these factors, we measured electromyographic activity in eight hindlimb muscles of the guinea fowl (Numida meleagris, N=6) during treadmill locomotion at two speeds (0.7 and 1.3 m s⁻¹) and three terrain conditions at each speed: (i) level terrain, (ii) repeated 5 cm low-contrast obstacles, and (iii) repeated 5 cm high-contrast obstacles (90% contrast, black/white). We hypothesized that anticipatory changes in muscle activity would be greater for (1) high-contrast obstacles and (2) the slower treadmill speed, when obstacle viewing time is longer. We found that treadmill speed significantly influenced obstacle negotiation strategy, but obstacle contrast did not. At the slower speed, we observed earlier and larger anticipatory increases in muscle activity and shifts in kinematic timing. We discuss possible visuomotor explanations for the observed context-dependent use of anticipatory strategies.

    Motor Control of Rapid Eye Movements in Larval Zebrafish

    Animals move the same body parts in diverse ways. How the central nervous system selects one action over related ones is poorly understood. To investigate this, I assessed the behavioural manifestation and neural control of saccadic eye rotations made by larval zebrafish, since these movements are simple and easy to investigate at the circuit level. I first classified the larva's saccadic repertoire into five types, of which hunting-specific convergent saccades and exploratory conjugate saccades were the main types used to orient vision. Convergent and conjugate saccades shared a nasal eye rotation whose kinematic differences and similarities suggested that the rotation was generated by overlapping but distinct populations of neurons in each saccade type. I investigated this further, using two-photon Ca2+ imaging and selective circuit interventions to identify a circuit, running from rhombomere 5/6 through abducens internuclear neurons to motoneurons, that was crucial for nasal eye rotations. Motoneurons had distinct activity patterns for convergent and conjugate saccades that were consistent with my behavioural observations and were explained largely by motoneuron kinematic tuning preferences. Surprisingly, some motoneurons also modulated their activity according to saccade type, independent of movement kinematics. In contrast, presynaptic internuclear neuron activity profiles were almost entirely explained by movement kinematics, whereas neurons in rhombomere 5/6 showed mixed saccade-type and kinematic encoding, like motoneurons. Regions exerting descending control on this circuit, the optic tectum and anterior pretectal nucleus, had few neurons tuned to saccade kinematics compared with neurons selective for convergent saccades. My results suggest a transformation from encoding action type to encoding movement kinematics at successive circuit levels. This transformation was neither monotonic nor complete, suggesting that the control of even simple, highly comparable movements cannot be entirely described by a shared kinematic encoding scheme at the motor or premotor level.
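
    The distinction drawn above between kinematic tuning and saccade-type encoding can be illustrated with a simple encoding-model comparison: regressing a neuron's response on kinematic variables with and without a saccade-type label. The simulated data, variable names, and least-squares fit below are assumptions for illustration, not the analysis used in the thesis.

```python
import numpy as np

# Hypothetical per-saccade data: one neuron's response, saccade kinematics
# (amplitude, peak velocity), and a binary saccade-type label
# (convergent vs. conjugate).
rng = np.random.default_rng(0)
n = 200
amplitude = rng.uniform(5, 30, n)                      # deg
peak_velocity = 8 * amplitude + rng.normal(0, 10, n)   # toy main sequence
is_convergent = rng.integers(0, 2, n)                  # saccade-type label

# A neuron mixing kinematic and type encoding, as reported for rhombomere 5/6
response = 0.05 * peak_velocity + 1.5 * is_convergent + rng.normal(0, 0.5, n)

def r_squared(X, y):
    """Fraction of response variance captured by a least-squares linear fit."""
    X = np.column_stack([X, np.ones(len(y))])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# A purely kinematic neuron would gain nothing from the type regressor;
# a mixed neuron like this one shows a clear gap between the two fits.
print("kinematics only:  ", r_squared(np.column_stack([amplitude, peak_velocity]), response))
print("kinematics + type:", r_squared(np.column_stack([amplitude, peak_velocity, is_convergent]), response))
```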

    Learning the Optimal Control of Coordinated Eye and Head Movements

    Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many respects, including the relationship between amplitude, duration, and peak velocity in head-restrained conditions and the relative contributions of the eye and head to the total gaze shift in head-free conditions. Our model is a first step toward bringing together an optimality principle and an incremental, local learning mechanism in a unified control scheme for coordinated eye and head movements.
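
    The pairing of a cost function with incremental local learning can be illustrated with a toy example: gradient descent on an assumed quadratic cost that splits a desired gaze shift between eye and head. The cost terms, weights, and learning rate are assumptions for illustration, not the authors' controller or cost function.

```python
def cost(eye, gaze, w_eye=1.0, w_head=0.5):
    """Toy quadratic cost: penalize eccentric eye-in-head positions (w_eye)
    and head motion effort (w_head). The head covers whatever the eye does not."""
    head = gaze - eye
    return w_eye * eye**2 + w_head * head**2

def adapt_eye_contribution(gaze, w_eye=1.0, w_head=0.5, lr=0.1, steps=200):
    """Incremental local learning: gradient descent on the toy cost, standing
    in for the paper's online-adaptation idea rather than its actual scheme."""
    eye = gaze  # start from an eye-only strategy
    for _ in range(steps):
        grad = 2 * w_eye * eye - 2 * w_head * (gaze - eye)  # d(cost)/d(eye)
        eye -= lr * grad
    return eye, gaze - eye

# With these weights the learned split converges to eye = gaze/3; a richer
# cost would be needed to reproduce the nonlinear head recruitment seen
# experimentally for large gaze shifts.
for g in (10.0, 30.0, 60.0):
    e, h = adapt_eye_contribution(g)
    print(f"{g:.0f} deg gaze shift -> eye {e:.1f} deg, head {h:.1f} deg")
```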

    Excitatory postsynaptic potentials in rat neocortical neurons in vitro. III. Effects of a quinoxalinedione non-NMDA receptor antagonist

    1. Intracellular microelectrodes were used to obtain recordings from neurons in layer II/III of rat frontal cortex. A bipolar electrode positioned in layer IV of the neocortex was used to evoke postsynaptic potentials. Graded series of stimulation were employed to selectively activate different classes of postsynaptic responses. The sensitivity of postsynaptic potentials and iontophoretically applied neurotransmitters to the non-N-methyl-D-aspartate (NMDA) antagonist 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX) was examined. 2. As reported previously, low-intensity electrical stimulation of cortical layer IV evoked short-latency early excitatory postsynaptic potentials (eEPSPs) in layer II/III neurons. CNQX reversibly antagonized eEPSPs in a dose-dependent manner. Stimulation at intensities just subthreshold for activation of inhibitory postsynaptic potentials (IPSPs) produced long-latency (10- to 40-ms) EPSPs (late EPSPs, or lEPSPs). CNQX was effective in blocking lEPSPs. 3. With the use of stimulus intensities at or just below threshold for evoking an action potential, complex synaptic potentials consisting of EPSP-IPSP sequences were observed. Both early, Cl(-)-dependent and late, K(+)-dependent IPSPs were reduced by CNQX; this effect was reversible on washing. This disinhibition could lead to enhanced excitability in the presence of CNQX. 4. Iontophoretic application of quisqualate produced a membrane depolarization with superimposed action potentials, whereas NMDA depolarized the membrane potential and evoked bursts of action potentials. At concentrations up to 5 microM, CNQX selectively antagonized quisqualate responses; NMDA responses were reduced by 10 microM CNQX. D-Serine (0.5-2 mM), an agonist at the glycine regulatory site on the NMDA receptor, reversed the CNQX depression of NMDA responses.

    Eye-to-face Gaze in Stuttered Versus Fluent Speech

    The present study investigated the effects of viewing audio-visual presentations of stuttered relative to fluent speech samples on the ocular reactions of participants. Ten adults, 5 males and 5 females, aged 18-55, with no history of speech, language, or hearing disorders, participated in the study. Participants were shown three 30-second audio-visual recordings of stuttered speech and three 30-second audio-visual recordings of fluent speech, with a three-second break (black screen) between the presentation of each video. All three individuals who stutter were rated as ‘severe’ (SSI-3; Riley, 1994), exhibiting high levels of struggle with overt stuttering behaviors such as repetitions, prolongations, and silent postural fixations on speech sounds, in addition to tension-filled secondary behaviors such as head jerks, lip protrusion, and facial grimaces. During stuttered and fluent conditions, ocular behaviors of the viewers, including pupillary movement, fixation time, eye-blink, and relative changes in pupil diameter, were recorded using the Arrington ViewPoint Eye-Tracker infrared camera and the system’s data analysis software (e.g., Wong, Cronin-Golomb, & Neargarder, 2005) via a 2.8 GHz Dell Optiplex GX270 computer. For all ocular measures except fixation time, there were significant (p < .05) differences between stuttered and fluent speech: the number of pupillary movements, the number of blinks, and the relative change in pupil diameter all increased when viewing stuttered relative to fluent speech samples. Participants also fixated, or directed their attention, for less time during stuttered than fluent conditions, although this decrease was not significant, suggesting reduced attention overall during stuttered speech samples. Because eye-blink, as a measure of the startle reflex, and pupil dilation are resistant to voluntary control or entirely under the control of the autonomic nervous system, significant increases in both for stuttered relative to fluent speech indicate a visceral reaction to stuttering.

    A Sensorimotor Model for Computing Intended Reach Trajectories

    The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates to produce motor output. However, the interpretation of neurophysiological data within this framework is ambiguous and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with observations in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories rather than desired movement endpoints.
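
    A minimal sketch of the reference-frame claim above, under stated assumptions: a target encoded as a linear mixture of eye-centered and hand-centered coordinates, with the mixing weight sliding toward the hand frame during movement. The 2-D coordinates, the parameter alpha, and the linear weighting are illustrative, not the paper's model.

```python
import numpy as np

def intermediate_frame_code(target, eye, hand, alpha):
    """Encode a target in a frame intermediate between eye- and hand-centered
    coordinates: alpha=0 is purely eye-centered, alpha=1 purely hand-centered.
    The linear mixture is an illustrative assumption."""
    return (1 - alpha) * (target - eye) + alpha * (target - hand)

# Hypothetical positions in a common 2-D coordinate system
target = np.array([20.0, 5.0])
eye = np.array([0.0, 0.0])
hand = np.array([10.0, -15.0])

# Sliding alpha toward the hand frame as the reach unfolds mimics the
# movement-dependent reference-frame shift described in the abstract.
for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha:.1f} -> code={intermediate_frame_code(target, eye, hand, alpha)}")
```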

    Eye-Head-Hand Coordination During Visually Guided Reaches in Head-Unrestrained Macaques

    Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in Rhesus macaques. Eye and head motion were recorded with search coils, and hand position with a touch screen, in two animals. Animals were seated in a customized chair that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze, and were then rewarded if they touched a target appearing at one of 15 locations. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy alone. In the reach task, animals made eye-head gaze shifts toward the target, followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly larger velocities and final ranges of head position compared with the gaze control task.