
    Binocular video head impulse test: Normative data study

    Introduction: The video head impulse test (vHIT) evaluates the vestibulo-ocular reflex (VOR). It is usually recorded from only one eye; newer vHIT devices allow binocular quantification of the VOR.
    Purpose (Aim): To investigate the advantages of simultaneously recorded binocular vHIT (bvHIT) for detecting differences between the VOR gains of the adducting and the abducting eye, to define the most precise VOR measure, and to assess gaze dys/conjugacy. We aimed to establish normative values for bvHIT adducting- and abducting-eye VOR gains and to introduce the VOR dysconjugacy ratio (vorDR) between adducting and abducting eyes for bvHIT.
    Methods: We enrolled 44 healthy adult participants in a cross-sectional, prospective study using a repeated-measures design to assess test–retest reliability. A binocular EyeSeeCam Sci 2 device was used to record bvHIT simultaneously from both eyes during impulsive head stimulation in the horizontal plane.
    Results: Pooled bvHIT retest gains of the adducting eye significantly exceeded those of the abducting eye (mean (SD): 1.08 (0.06) vs. 0.95 (0.06)). Adduction and abduction gains showed similar variability, suggesting comparable precision and therefore equal suitability for VOR asymmetry assessment. The pooled vorDR introduced here for bvHIT was 1.13 (SD = 0.05). The test–retest repeatability coefficient was 0.06.
    Conclusion: Our study provides normative values reflecting the conjugacy of eye-movement responses to horizontal bvHIT in healthy participants. The results are similar to those of a previous study using the gold-standard scleral search coil, which also reported greater VOR gains in the adducting than in the abducting eye. In analogy to the analysis of saccade conjugacy, we propose a novel bvHIT dysconjugacy ratio to assess dys/conjugacy of VOR-induced eye movements. In addition, to assess VOR asymmetry accurately and to avoid a directional gain preponderance between adduction and abduction VOR-induced eye movements leading to monocular vHIT bias, we recommend a binocular ductional VOR asymmetry index that compares the VOR gains of only the abduction or only the adduction movements of both eyes.
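    The two summary measures above can be sketched as simple ratios. A minimal illustration, assuming the vorDR is the ratio of adducting- to abducting-eye gain (consistent with the reported means, 1.08/0.95 ≈ 1.14) and using a hypothetical Jongkees-style formula for the ductional asymmetry index; both function names are illustrative, not from the paper:

```python
def vor_dysconjugacy_ratio(adducting_gain: float, abducting_gain: float) -> float:
    """vorDR: adducting-eye gain over abducting-eye gain (assumed definition)."""
    return adducting_gain / abducting_gain

def ductional_vor_asymmetry(gain_left: float, gain_right: float) -> float:
    """Hypothetical ductional asymmetry index: compares the same duction
    (only abduction, or only adduction) across the two eyes, in the style
    of Jongkees' formula: (L - R) / (L + R); 0 means perfect symmetry."""
    return (gain_left - gain_right) / (gain_left + gain_right)

# With the study's pooled mean gains (per-subject pooling in the paper
# yields the reported 1.13):
print(round(vor_dysconjugacy_ratio(1.08, 0.95), 2))  # → 1.14
```

    The ratio-of-means differs slightly from the paper's pooled per-subject vorDR (1.13), as expected when averaging before versus after dividing.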

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Resolving Target Ambiguity in 3D Gaze Interaction through VOR Depth Estimation

    Target disambiguation is a common problem in gaze interfaces, as eye tracking has accuracy and precision limitations. In 3D environments this is compounded by objects overlapping in the field of view as a result of their positioning at different depths with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex of the eyes in compensation of head movement, and explore its application to resolving target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head when users look at a target and deliberately rotate their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation, one that is feasible also with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.
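    The eye/head rotation comparison can be illustrated with a toy geometric model. This is a sketch under assumed simplifications (small angles, the eye offset from the head's rotation axis, and the function name are all hypothetical, not the paper's actual formulation): a near target requires the eye to counter-rotate more than the head, so the gain g = eye rotation / head rotation exceeds 1 and depth can be recovered from g:

```python
import math

def gaze_depth_from_vor(eye_rot_deg: float, head_rot_deg: float,
                        eye_offset_m: float = 0.1) -> float:
    """Toy estimate of gaze depth from the VOR eye/head rotation ratio.

    Assumed small-angle model: the eye sits eye_offset_m in front of the
    head's rotation axis, so perfectly stabilizing a target at depth d
    requires an eye rotation of about head_rot * (1 + eye_offset_m / d).
    Inverting the gain g = eye_rot / head_rot gives d ≈ offset / (g - 1).
    """
    g = abs(eye_rot_deg / head_rot_deg)
    if g <= 1.0:
        return math.inf  # gain near 1 corresponds to a very distant target
    return eye_offset_m / (g - 1.0)

# A gain of 1.2 (e.g. 12° of eye rotation for 10° of head rotation)
# maps to a target 0.5 m away under these assumptions:
print(round(gaze_depth_from_vor(12.0, 10.0), 3))  # → 0.5
```

    The same ratio-based reasoning explains why the method works monocularly: unlike vergence, it needs only one eye's rotation relative to the head.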

    A Covered Eye Fails To Follow an Object Moving in Depth

    To clearly view approaching objects, the eyes rotate inward (vergence), and the intraocular lenses focus (accommodation). Current ocular control models assume both eyes are driven by unitary vergence and unitary accommodation commands that causally interact. The models typically describe discrete gaze shifts to non-accommodative targets performed under laboratory conditions. We probe these unitary signals using a physical stimulus moving in depth on the midline while recording vergence and accommodation simultaneously from both eyes in normal observers. Using monocular viewing, retinal disparity is removed, leaving only monocular cues for interpreting the object's motion in depth. The viewing eye always followed the target's motion. However, the occluded eye did not follow the target and, surprisingly, rotated out of phase with it. In contrast, accommodation in both eyes was synchronized with the target under monocular viewing. The results challenge existing unitary vergence-command theories and causal accommodation–vergence linkage.

    Gaze stabilization in the rabbit: three-dimensional organization and cholinergic floccular control

    Whereas a large body of knowledge is available on the rabbit's optokinetic responses about a vertical axis, only fragmentary data have been obtained about horizontal-axis optokinetic responses. With emerging knowledge on the spatial organization of three-dimensional visual messages in the flocculus, there is a need for a detailed description of three-dimensional optokinetic responses. We conducted a behavioral study of three-dimensional eye movements elicited by optokinetic stimulation about horizontal axes, which is presented in Chapter 2 of this thesis. Chapter 3 describes the positive modulatory effects of floccular injection of the cholinergic agonist carbachol and the AChE inhibitor eserine on the OKR and the VOR. A possible mechanism for the positive action of carbachol is proposed in Chapter 4, in the context of a synergistic action between injections of carbachol and the β-noradrenergic agonist isoproterenol. Specification of the receptor type involved in the action of carbachol is attempted in Chapter 5. The effects of bilateral and unilateral injections of carbachol on optokinetic nystagmus and afternystagmus are presented in Chapters 6 and 7, whereas Chapter 8 describes the effect of bilateral injection of carbachol on vestibular, post-rotatory nystagmus.

    Engineering Data Compendium. Human Perception and Performance, Volume 1

    The concept underlying the Engineering Data Compendium was the product of an R and D program (Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 1, which contains sections on Visual Acquisition of Information, Auditory Acquisition of Information, and Acquisition of Information by Other Senses.

    Visual pursuit behavior in mice maintains the pursued prey on the retinal region with least optic flow

    Mice have a large visual field that is constantly stabilized by vestibulo-ocular reflex (VOR) driven eye rotations that counter head rotations. While maintaining their extensive visual coverage is advantageous for predator detection, mice also track and capture prey using vision. However, in the freely moving animal, quantifying object location in the field of view is challenging. Here, we developed a method to digitally reconstruct and quantify the visual scene of freely moving mice performing a visually based prey-capture task. By isolating the visual sense and combining a mouse eye optic model with the head and eye rotations, the detailed reconstruction of the digital environment and retinal features were projected onto the corneal surface for comparison and updated throughout the behavior. By quantifying the spatial location of objects in the visual scene and their motion throughout the behavior, we show that the prey image consistently falls within a small area of the VOR-stabilized visual field. This functional focus coincides with the region of minimal optic flow within the visual field, and consequently the area of minimal motion-induced image blur, as during pursuit mice ran directly toward the prey. The functional focus lies in the upper-temporal part of the retina and coincides with the reported high-density region of Alpha-ON sustained retinal ganglion cells.

    Mice have a lot to keep an eye on. To survive, they need to dodge predators looming on land and from the skies, while also hunting down the small insects that are part of their diet. To do this, they are helped by their large panoramic field of vision, which stretches from behind and over their heads to below their snouts. To stabilize their gaze when they are on the prowl, mice reflexively move their eyes to counter the movement of their head: in fact, they are unable to move their eyes independently.
    This raises the question: what part of their large visual field of view do these rodents use when tracking a prey, and to what advantage? This is difficult to investigate, since it requires simultaneously measuring the eye and head movements of mice as they chase and capture insects. In response, Holmgren, Stahr et al. developed a new technique to record the precise eye positions, head rotations and prey location of mice hunting crickets in surroundings that were fully digitized at high resolution. Combining this information allowed the team to mathematically recreate what mice would see as they chased the insects, and to assess what part of their large visual field they were using. This revealed that, once a cricket had entered any part of the mice's large field of view, the rodents shifted their head - but not their eyes - to bring the prey into both eye views, and then ran directly at it. If the insect escaped, the mice repeated that behavior. During the pursuit, the cricket's position was mainly held in a small area of the mouse's view that corresponds to a specialized region in the eye which is thought to help track objects. This region also allowed the least motion-induced image blur when the animals were running forward. The approach developed by Holmgren, Stahr et al. gives a direct insight into what animals see when they hunt, and how this constantly changing view ties to what happens in the eyes. This method could be applied to other species, ushering in a new wave of tools to explore what freely moving animals see, and the relationship between behaviour and neural circuitry.

    An Investigation and Analysis of the Vestibulo-Ocular Reflex in a Vibration Environment

    Forty years of innovation have greatly improved Helmet-Mounted Displays (HMDs) and their integration into military systems. However, a significant issue with HMDs is the effect of vibration and the associated Vestibulo-Ocular Reflex (VOR). When a human's head is subject to low-frequency vibration, the VOR stabilizes the eye with respect to objects in the external environment. However, this response is inappropriate for HMDs, as the display moves with the user's head and the VOR blurs the image as it is projected onto the human retina. Current compensation techniques suggest increasing the size of displayed graphics or text to offset the loss of perceived resolution, which reduces the benefit of advanced high-definition HMDs. While limited research has been done on the VOR in real-world settings, this research sought to understand and describe the VOR in the presence of head-slaved imagery as a function of whole-body low-frequency vibration. An experimental HMD was designed and developed to allow a user to perform visual tasks while eye movements were recorded and tracked via video recording and EOG. A human-subject experiment was executed to collect initial data on the effect of vibration on eye movements during simple tasks chosen to isolate specific eye motions. The results indicate that when fixating on a stationary target, the magnitude of eye movement was greatest at 4–6 Hz before steadily decreasing beyond this range. The addition of motion to this target increased the magnitude at 4–6 Hz. The findings are consistent with previous research, which has found a decline in visual performance in this frequency range.