MR-Eyetracker: a new method for eye movement recording in functional magnetic resonance imaging
We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph, designed for visual functional magnetic resonance imaging (fMRI) experiments. To classify brain areas reliably as pursuit- or saccade-related, it is important to measure the actual eye movements carefully. For this purpose, infrared light, generated outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the subject's eye. Two additional fiber-optic cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2° at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
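The differential signal and its calibration can be sketched in a few lines. This is purely illustrative: the paper performs the differencing with analog amplification in hardware, and the detector intensities, linear calibration, and all numeric values below are assumptions, not the authors' data.

```python
import numpy as np

def limbus_signal(left_intensity, right_intensity):
    """Differential limbus signal: the normalized difference of the two
    photodetector outputs covaries with horizontal eye position.
    (Illustrative software version of the paper's analog differencing.)"""
    left = np.asarray(left_intensity, dtype=float)
    right = np.asarray(right_intensity, dtype=float)
    return (left - right) / (left + right)

def calibrate(signal, target_deg):
    """Least-squares linear fit mapping raw signal to eye position (deg),
    using fixations at known target eccentricities."""
    gain, offset = np.polyfit(signal, target_deg, 1)
    return gain, offset

# Hypothetical calibration run: fixations at -10, 0 and +10 degrees.
raw = limbus_signal([1.0, 1.25, 1.5], [1.5, 1.25, 1.0])
gain, offset = calibrate(raw, np.array([-10.0, 0.0, 10.0]))

# Convert a new raw sample to degrees with the fitted mapping.
eye_deg = gain * 0.1 + offset
```

The same linear mapping would then be applied to every sample of the 1000 Hz signal during scanning.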
Relationship between saccadic eye movements and cortical activity as measured by fMRI: quantitative and qualitative aspects
We investigated the quantitative relationship between saccadic activity (as reflected in the frequency of occurrence and amplitude of saccades) and blood oxygenation level dependent (BOLD) changes in the cerebral cortex using functional magnetic resonance imaging (fMRI). Furthermore, we investigated quantitative changes in cortical activity associated with qualitative changes in the saccade task for comparable levels of saccadic activity. All experiments required the simultaneous acquisition of eye movement and fMRI data. For this purpose we used a new high-resolution limbus-tracking technique for recording eye movements in the magnetic resonance tomograph. In the first two experimental series we varied both the frequency and the amplitude of saccade stimuli (target jumps). In the third series we varied task difficulty; subjects performed either pro-saccades or anti-saccades. The brain volume investigated comprised the frontal and supplementary eye fields, parietal as well as striate cortex, and the motion-sensitive area of the parieto-occipital cortex. All these regions showed saccade-related BOLD responses. The responses in these regions were highly correlated with saccade frequency, indicating that repeated processing of saccades is integrated over time in the BOLD response. In contrast, there was no comparable BOLD change with variation of saccade amplitude. This finding argues for a topological rather than activity-dependent coding of saccade amplitude in most cortical regions. In the experiments comparing pro- vs anti-saccades, we found higher BOLD activation in the "anti" task than in the "pro" task. A comparison of saccade parameters revealed that saccade frequency and cumulative amplitude were comparable between the two tasks, whereas reaction times were longer in the "anti" task than in the "pro" task.
The latter finding is taken to indicate more demanding cortical processing in the "anti" task than in the "pro" task, which could explain the observed difference in BOLD activation. We hold that a quantitative analysis of saccade parameters (especially saccade frequency and latency) is important for the interpretation of the BOLD changes observed with visual stimuli in fMRI.
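The reported frequency–BOLD relationship can be illustrated with a minimal correlation sketch. The per-condition saccade frequencies and BOLD amplitudes below are hypothetical values chosen for illustration, not data from the study.

```python
import numpy as np

# Hypothetical per-condition values: saccade frequency (Hz) and mean
# BOLD signal change (%) in a saccade-related region.
saccade_freq = np.array([0.5, 1.0, 2.0, 3.0])
bold_change = np.array([0.2, 0.45, 0.9, 1.3])

# A high Pearson correlation of this kind is what the abstract reports:
# repeated saccade processing is integrated over time in the BOLD signal.
r = np.corrcoef(saccade_freq, bold_change)[0, 1]
```

In the actual experiments, the same logic is applied per region while frequency (but not amplitude) modulates the response.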
Brain imaging in a patient with hemimicropsia
Hemimicropsia is an isolated misperception of the size of objects in one hemifield (objects appear smaller) which, as a phenomenon of central origin, is only very infrequently reported in the literature. We present a case of hemimicropsia as a selective deficit of size and distance perception in the left hemifield without hemianopsia, caused by a cavernous angioma with hemorrhage in the right occipitotemporal area. The symptom occurred only intermittently and was considered a consequence of local irritation by the hemorrhage. Imaging data, including a volume-rendering MR data set of the patient's brain, were transformed to the 3-D stereotactic grid system of Talairach and warped to a novel digital 3-D brain atlas. Imaging analysis included functional MRI (fMRI) to analyse the patient's visual cortex areas (mainly V5) in relation to the localization of the angioma and to establish physiological landmarks with respect to visual stimulation.
The lesion was localized in the peripheral visual association cortex, Brodmann area (BA) 19, adjacent to BA 37, both of which are part of the occipitotemporal visual pathway. Additional psychophysical measurements revealed an elevated threshold for perceiving coherent motion, which we relate to a partial loss of function in V5, a region adjacent to the cavernoma.
In our study, we localized for the first time a cerebral lesion causing micropsia by digital mapping in Talairach space using a 3-D brain atlas, and topologically related it to fMRI data for visual motion. The localization of the brain lesion affecting BA 19 and the occipitotemporal visual pathway is discussed with respect to experimental findings and case reports on the neural basis of object size perception.
Representation of Neck Velocity and Neck–Vestibular Interactions in Pursuit Neurons in the Simian Frontal Eye Fields
The smooth pursuit system must interact with the vestibular system to maintain the accuracy of eye movements in space (i.e., gaze movements) during head movement. Normally, the head moves on the stationary trunk. Vestibular signals alone cannot distinguish whether the head or the whole body is moving; neck proprioceptive inputs provide information about head movement relative to the trunk. Previous studies have shown that the majority of pursuit neurons in the frontal eye fields (FEF) carry visual information about target velocity and vestibular information about whole-body movements, and signal eye or gaze velocity. However, it is unknown whether FEF neurons carry neck proprioceptive signals. Using passive trunk-on-head rotation, we tested neck inputs to FEF pursuit neurons in two monkeys. The majority of FEF pursuit neurons tested that had horizontal preferred directions (87%) responded to horizontal trunk-on-head rotation. The modulation consisted predominantly of velocity components. Discharge modulation during pursuit and during trunk-on-head rotation added linearly. During passive head-on-trunk rotation, modulation to vestibular and neck inputs also added linearly in most neurons, although in half of the gaze-velocity neurons neck responses were strongly influenced by the context of neck rotation. Our results suggest that neck inputs could contribute to representing eye- and gaze-velocity FEF signals in trunk coordinates.
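The linear-summation finding can be sketched as a simple additivity check: if the combined response is the sum of the single-condition responses, a regression on the single-condition profiles recovers unit weights. The sinusoidal modulation profiles, amplitudes and phases below are assumptions for illustration, not recorded data.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100)
# Hypothetical single-condition firing-rate modulations (spikes/s).
pursuit_mod = 10.0 * np.sin(2 * np.pi * t)        # smooth pursuit alone
neck_mod = 4.0 * np.sin(2 * np.pi * t + 0.5)      # trunk-on-head rotation alone

# Under the linearity reported in the abstract, the combined-condition
# response is the sum of the single conditions...
combined = pursuit_mod + neck_mod

# ...so least-squares regression of the combined response on the two
# single-condition profiles recovers weights close to (1, 1).
X = np.column_stack([pursuit_mod, neck_mod])
weights, *_ = np.linalg.lstsq(X, combined, rcond=None)
```

Weights deviating from unity in such a fit would indicate the kind of context-dependent interaction seen in half of the gaze-velocity neurons.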
The Modernization of the Autopsy: Application of Ultrastructural and Biochemical Methods to Human Disease
The autopsy has provided, and still provides, the stimulus for many attempts to reproduce disease in experimental animal models. This approach has become increasingly difficult, however, in the case of human disease, principally shock. The study of some pathological states in animal models requires testing in several species and final confirmation in man before this knowledge can be applied to living patients. In our studies the application of cell biology techniques at autopsy has permitted the generation of new hypotheses which are more amenable to further exploration in experimental models and can be more precisely related to human disease.
Self versus Environment Motion in Postural Control
To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.
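The power-law weighting can be written down directly. The sketch below is a minimal illustration of its qualitative behavior only; the gain and exponent are hypothetical placeholders, not the values derived or fitted in the paper.

```python
import numpy as np

def visual_weight(v_visual, gain=1.0, exponent=0.5):
    """Power-law weighting of visually perceived velocity.
    gain and exponent are hypothetical illustration values.
    With exponent < 1, slow visual motion is attributed mostly to
    self-motion, while fast visual motion is discounted relative to
    its magnitude (more likely motion of the environment)."""
    v = np.asarray(v_visual, dtype=float)
    return gain * np.sign(v) * np.abs(v) ** exponent

w_slow = visual_weight(0.04)  # slow visual motion, followed strongly
w_fast = visual_weight(4.0)   # fast visual motion, discounted
```

The effective gain `w / v` falls as `|v|**(exponent - 1)`, which is the nonlinear influence of visual motion on posture that the model is said to capture.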
Vestibular signal processing in a subject with somatosensory deafferentation: The case of sitting posture
Background: The vestibular system of the inner ear provides information about head translation/rotation in space and about the orientation of the head with respect to the gravitoinertial vector. It also contributes substantially to the control of posture through vestibulospinal pathways. Testing an individual severely deprived of somatosensory information below the nose, we investigated whether equilibrium can be maintained while seated on the sole basis of this vestibular information.

Results: Although she was unstable, the deafferented subject (DS) was able to remain seated with the eyes closed in the absence of foot, arm and back supports. However, with the head unconsciously rotated towards the left or right shoulder, the DS's instability markedly increased. Small electrical stimulations of the vestibular apparatus produced large body tilts in the DS, in contrast to control subjects, who did not show clear postural responses to the stimulations.

Conclusion: The results of the present experiment show that in the absence of vision and somatosensory information, vestibular signal processing allows the maintenance of an active sitting posture (i.e. without back or side rests). When head orientation changes with respect to the trunk, in the absence of vision, the lack of cervical information prevents the transformation of the head-centered vestibular information into a trunk-centered frame of reference for body motion. In normal subjects, this latter frame of reference enables proper postural adjustments through vestibular signal processing, irrespective of the orientation of the head with respect to the trunk.
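The head-to-trunk transformation that the conclusion invokes can be sketched as a coordinate rotation. This is a deliberately simplified planar (2-D) version under the assumption of a pure head-on-trunk rotation about the vertical axis; the function name and values are illustrative.

```python
import numpy as np

def head_to_trunk(v_head, neck_angle_rad):
    """Rotate a head-centered motion vector into trunk coordinates using
    the neck (head-on-trunk) angle. Without this cervical signal, as in
    the deafferented subject, the transformation cannot be performed and
    vestibular cues are misinterpreted when the head is turned."""
    c, s = np.cos(neck_angle_rad), np.sin(neck_angle_rad)
    rotation = np.array([[c, -s], [s, c]])
    return rotation @ np.asarray(v_head, dtype=float)

# With the head turned 90 degrees on the trunk, a "forward" head-centered
# motion cue actually corresponds to sideways motion of the trunk.
v_trunk = head_to_trunk([1.0, 0.0], np.pi / 2)
```

Intact subjects can apply this remapping for any head-on-trunk angle, which is why their postural responses remain appropriate regardless of head orientation.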