Input Sources and Properties of Position-Sensitive Oculomotor Fibres in the Rock Lobster, Panulirus interruptus (Randall)
Sets of head-up, head-down, eye-up and eye-down motor fibres were studied in the oculomotor nerve of the rock lobster. An eye-withdrawal fibre was also investigated.
Apart from the statocyst input, light distribution on the eyes has the strongest influence on the position-sensitive fibres. Weaker optokinetic input from moving targets is also present.
Strongly habituating input is obtained from the antennal joints. This input causes orientation of the eye toward the direction in which the antenna points.
The same antennule movement in the vertical plane can result in either excitation or inhibition of the head-down fibre, suggesting the presence of two opposing inputs, presumably from the statocysts and basal joint receptors of the antennule.
The inputs onto the position-sensitive fibres that indicate body position act to stabilize eye position in space during body movement. The optokinetic and antennal-joint inputs are probably involved in tracking and antennal pointing reactions.
The eye-withdrawal fibre is stimulated by touch on the head and around the eye, but is inhibited when the animal is in an excited state.
Mechanisms of Action and Targets of Nitric Oxide in the Oculomotor System
Nitric oxide (NO) production by neurons in the prepositus hypoglossi (PH) nucleus is necessary for the normal performance of eye movements in alert animals. In this study, the mechanism(s) of action of NO in the oculomotor system has been investigated. Spontaneous and vestibularly induced eye movements were recorded in alert cats before and after microinjections in the PH nucleus of drugs affecting the NO–cGMP pathway. The cellular sources and targets of NO were also studied by immunohistochemical detection of neuronal NO synthase (NOS) and NO-sensitive guanylyl cyclase, respectively. Injections of NOS inhibitors produced alterations of eye velocity, but not of eye position, for both spontaneous and vestibularly induced eye movements, suggesting that NO produced by PH neurons is involved in the processing of velocity signals but not in the generation of eye position. The effect of neuronal NO is probably exerted on a rich cGMP-producing neuropil dorsal to the nitrergic somas in the PH nucleus. On the other hand, local injections of NO donors or 8-Br-cGMP produced alterations of eye velocity during both spontaneous eye movements and the vestibulo-ocular reflex (VOR), as well as changes in eye position generation exclusively during spontaneous eye movements. The target of this additional effect of exogenous NO is probably a well-defined group of NO-sensitive cGMP-producing neurons located between the PH and the medial vestibular nuclei. These cells could be involved in the generation of eye position signals during spontaneous eye movements but not during the VOR.
Funding: Fondo de Investigación Sanitaria grants 94/0388 and 97/2054; Comunidad Autónoma de Madrid grant 08.5/0019/1997; Dirección General de Investigación Científica y Tecnológica grant PB 93-117.
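The velocity/position distinction in this abstract rests on the oculomotor neural integrator: premotor eye-velocity commands are temporally integrated into eye-position commands for gaze holding. A minimal sketch of that transformation, assuming a simple leaky integrator with illustrative parameters (not the paper's model):

    # Leaky-integrator sketch of the velocity-to-position transformation.
    # dt, tau, and the velocity pulse below are illustrative values only.
    def integrate_velocity(velocity_signal, dt=0.001, tau=20.0):
        """Integrate an eye-velocity command (deg/s) into an eye-position
        command (deg); tau (s) sets how well gaze is held after a movement."""
        position, trace = 0.0, []
        for v in velocity_signal:
            position += dt * (v - position / tau)  # integration with a slow leak
            trace.append(position)
        return trace

    # A 50 ms, 400 deg/s saccadic velocity pulse yields a ~20 deg position step
    eye_position = integrate_velocity([400.0] * 50 + [0.0] * 950)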
Retinally stabilized differential resolution television display
A remote television viewing system employing an eye tracker is disclosed, wherein a small region of the image appears in high resolution and the remainder of the image appears in low resolution. The eye tracker monitors the position of the viewer's line of sight. The eye-tracker position data is transmitted to the remote television camera and control. Both the remote camera and the television display are adapted to have selectable high-resolution and low-resolution raster scan modes. The position data from the eye tracker is used to determine the point at which the high-resolution scan is to commence. The video data defining the observed image is encoded in a novel format, wherein in each data field the data representing the position of the high-resolution region of predetermined size appears first, followed by the high-resolution zone video data and then the low-resolution region data. As the viewer's line of sight relative to the displayed image changes, the position of the high-resolution region changes to track the viewer's line of sight.
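The field layout the patent describes (region position first, then high-resolution zone data, then low-resolution data) can be sketched as a simple serialized record. The byte-level format below is a hypothetical illustration, not the patent's actual encoding:

    import struct

    # Hypothetical serialization of one data field: gaze-derived region
    # position first, then high-resolution zone pixels, then the
    # low-resolution remainder. Field widths are illustrative only.
    def encode_field(region_x, region_y, hi_res_pixels, lo_res_pixels):
        header = struct.pack(">HH", region_x, region_y)  # position data first
        return header + bytes(hi_res_pixels) + bytes(lo_res_pixels)

    def decode_field(field, hi_res_len):
        region_x, region_y = struct.unpack(">HH", field[:4])
        hi = field[4:4 + hi_res_len]   # high-resolution zone video data
        lo = field[4 + hi_res_len:]    # low-resolution region data
        return (region_x, region_y), hi, lo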
Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position
A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye and retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
Funding: National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309).
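The core VAM operation, computing a Difference Vector between the teaching vector and the map's current output and updating weights until the DV is zeroed, can be sketched as an error-driven update. The linear map, toy dimensions, and learning rate below are simplifying assumptions for illustration; the paper's network is more elaborate:

    import numpy as np

    rng = np.random.default_rng(0)
    W = np.zeros((3, 6))  # toy linear map: 6-D eye+retina input -> 3-D target

    def vam_step(x, teaching_vector, lr=0.1):
        """One VAM-style learning step: the Difference Vector (DV) drives
        a weight update that zeroes the DV over repeated trials."""
        global W
        dv = teaching_vector - W @ x   # DV = teacher minus current output
        W += lr * np.outer(dv, x)      # error-driven (LMS-like) update
        return np.linalg.norm(dv)

    # A fixed hidden mapping stands in for the foveation-derived teaching
    # vectors described in the abstract; the residual shrinks toward zero.
    W_true = rng.normal(size=(3, 6))
    for _ in range(2000):
        x = rng.normal(size=6)
        residual = vam_step(x, W_true @ x)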
EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays
While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze-interaction kick-off time to 3.5 seconds -- a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
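Keeping the rail-mounted tracker aligned with a laterally moving user is, at its core, a one-dimensional tracking control problem. A hypothetical proportional-controller sketch; the gain, speed limit, and update rate are illustrative, not EyeScout's actual control law:

    # Hypothetical carriage controller: drive the rail-mounted tracker
    # toward the user's lateral position. Gain and limits are made up.
    def rail_velocity(user_x, tracker_x, kp=2.0, max_speed=1.5):
        """Return carriage velocity (m/s) from the lateral offset (m)."""
        error = user_x - tracker_x
        return max(-max_speed, min(max_speed, kp * error))

    # Simulate the tracker following a user walking at 0.8 m/s
    tracker_x, dt = 0.0, 0.05
    for step in range(100):
        user_x = 0.8 * step * dt
        tracker_x += rail_velocity(user_x, tracker_x) * dt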
Eye position representation in human anterior parietal cortex
Eye position helps locate visual targets relative to one's own body and modulates the distribution of attention in visual space. Whereas in the monkey, proprioceptive eye position signals have been recorded in the somatosensory cortex, in humans, no brain site has yet been associated with eye position. We aimed to disrupt the proprioceptive representation of the right eye in the left somatosensory cortex, presumably located near the representation of the right hand, using repetitive transcranial magnetic stimulation (rTMS). Head-fixed subjects reported their perceived visual straight-ahead position using both left and right eye monocular vision, before and after 15 min of 1 Hz rTMS. rTMS over left somatosensory but not over left motor cortex shifted the perceived visual straight ahead to the left, whereas nonvisual detection of body midline was unchanged for either brain area. These results can be explained by the underestimation of the angle of gaze of the right eye when fixating the target. To link this effect more tightly to an altered ocular proprioception, we applied a passive deviation to the right eye before the visual straight-ahead task. Passive eye displacement modulated the shift in the perceived straight ahead induced by somatosensory rTMS, without affecting the perceived straight ahead at baseline or after motor cortex rTMS. We conclude that the anterior parietal cortex in humans encodes eye position and that this signal has a proprioceptive component.
Eye position modulates retinotopic responses in early visual areas: a bias for the straight-ahead direction
Even though the eyes constantly change position, the location of a stimulus can be accurately represented by a population of neurons with retinotopic receptive fields modulated by eye-position gain fields. Recent electrophysiological studies, however, indicate that eye-position gain fields may serve an additional function, since they have a non-uniform spatial distribution that increases the neural response to stimuli in the straight-ahead direction. We used functional magnetic resonance imaging and a wide-field stimulus display to determine whether gaze modulations in early human visual cortex enhance the blood-oxygenation-level-dependent (BOLD) response to stimuli that are straight ahead. Subjects viewed rotating polar-angle wedge stimuli centered straight ahead or vertically displaced by ±20° eccentricity. Gaze position did not affect the topography of polar phase-angle maps, confirming that coding was retinotopic, but did affect the amplitude of the BOLD response, consistent with a gain field. In agreement with recent electrophysiological studies, BOLD responses in V1 and V2 to a wedge stimulus at a fixed retinal locus decreased when the wedge location in head-centered coordinates was farther from the straight-ahead direction. We conclude that stimulus-evoked BOLD signals are modulated by a systematic, non-uniform distribution of eye-position gain fields.
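In a gain-field model, the retinotopic tuning curve is unchanged by gaze while its amplitude is scaled by a gain that depends on eye position; the bias reported here corresponds to a gain that peaks when the stimulus is straight ahead in head-centered coordinates. A sketch with assumed Gaussian shapes; the forms and widths are illustrative, not fitted to the data:

    import numpy as np

    # Gain-field sketch: response = retinotopic tuning x eye-position gain.
    # Gaussian forms and widths (deg) are illustrative assumptions.
    def bold_response(stim_retinal, pref_retinal, eye_position,
                      tuning_width=5.0, gain_width=25.0):
        tuning = np.exp(-(stim_retinal - pref_retinal) ** 2
                        / (2 * tuning_width ** 2))      # fixed retinotopy
        head_centered = stim_retinal + eye_position     # stimulus re: the head
        gain = np.exp(-head_centered ** 2 / (2 * gain_width ** 2))
        return gain * tuning                            # peaks straight ahead

    # Same retinal stimulus, gaze straight ahead vs displaced 20 deg:
    print(bold_response(0.0, 0.0, 0.0), bold_response(0.0, 0.0, 20.0))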
Ocular counterrolling measured during eight hours of sustained body tilt
Adaptation of otolith organ activity was investigated by monitoring the ocular counterrolling response of four normal individuals and three persons with severe bilateral loss of labyrinthine function. Several eye photographs were recorded every 30 minutes during a period of 8 hours in which the subject was held in a lateral tilt (60 deg) position. As expected, the recorded eye roll position varied only slightly within each test session; this variation about a given mean roll position was similar across test sessions for all subjects. The mean roll position, on the other hand, changed from session to session by substantial amounts, but these changes appeared to be random with respect to time and among subjects. Furthermore, the intersessional variation in the mean torsional eye position of the normal subjects was equivalent to that of the labyrinthine-defective subjects, who displayed little or no counterrolling. These results suggest that the human counterrolling response is maintained either by essentially nonadapting macular receptors or by extremely fine movements of the head in the gravitational field, which would serve as an ever-changing accelerative stimulus; such movements may have been permitted by the biteboard/headrest restraint system used in this study.
A Relative Position Code for Saccades in Dorsal Premotor Cortex
Spatial computations underlying the coordination of the hand and eye present formidable geometric challenges. One way for the nervous system to simplify these computations is to directly encode the relative position of the hand and the center of gaze. Neurons in the dorsal premotor cortex (PMd), which is critical for the guidance of arm-reaching movements, encode the relative position of the hand, gaze, and goal of reaching movements. This suggests that PMd can coordinate reaching movements with eye movements. Here, we examine saccade-related signals in PMd to determine whether they also point to a role for PMd in coordinating visual–motor behavior. We first compared the activity of a population of PMd neurons with a population of parietal reach region (PRR) neurons. During center-out reaching and saccade tasks, PMd neurons responded more strongly before saccades than PRR neurons, and PMd contained a larger proportion of exclusively saccade-tuned cells than PRR. During a relative-position-coding saccade task, PMd neurons encoded saccade targets in a relative position code that depended on the relative positions of gaze, the hand, and the goal of a saccadic eye movement. This relative position code for saccades is similar to the way that PMd neurons encode reach targets. We propose that eye movement and eye position signals in PMd do not drive eye movements, but rather provide spatial information that links the control of eye and arm movements to support coordinated visual–motor behavior.
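A relative position code means that a cell's firing depends on difference vectors, such as target minus gaze and target minus hand, rather than on any absolute location. A toy illustration with assumed Gaussian tuning, not a model taken from the paper:

    import numpy as np

    # Toy PMd-like unit: its rate depends only on target-minus-gaze and
    # target-minus-hand vectors. Preferred vectors and tuning width are
    # illustrative assumptions.
    def pmd_rate(target, gaze, hand, pref_tg=(10.0, 0.0),
                 pref_th=(0.0, -10.0), sigma=8.0):
        tg = target - gaze                  # saccade goal relative to gaze
        th = target - hand                  # saccade goal relative to hand
        d = (np.sum((tg - np.array(pref_tg)) ** 2)
             + np.sum((th - np.array(pref_th)) ** 2))
        return np.exp(-d / (2 * sigma ** 2))

    # Translating target, gaze, and hand together leaves the rate unchanged,
    # the signature of a relative (not absolute) position code.
    t, g, h = np.array([10., 0.]), np.array([0., 0.]), np.array([10., 10.])
    shift = np.array([15., 5.])
    assert np.isclose(pmd_rate(t, g, h),
                      pmd_rate(t + shift, g + shift, h + shift))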
