
    Design, Assembly, Calibration, and Measurement of an Augmented Reality Haploscope

    A haploscope is an optical system that produces a carefully controlled virtual image. Since the development of Wheatstone's original stereoscope in 1838, haploscopes have been used to measure perceptual properties of human stereoscopic vision. This paper presents an augmented reality (AR) haploscope, which allows virtual objects to be viewed superimposed against the real world. Our lab has used generations of this device to make a careful series of perceptual measurements of AR phenomena, described in publications over the previous 8 years. This paper systematically describes the design, assembly, calibration, and measurement of our AR haploscope. These methods have been developed and improved in our lab over the past 10 years. Although 180 years have elapsed since the original report of Wheatstone's stereoscope, we have not previously found a paper that describes these kinds of details. Comment: Accepted and presented at the IEEE VR 2018 Workshop on Perceptual and Cognitive Issues in AR (PERCAR); pre-print version.

    The Effect of an Occluder on the Accuracy of Depth Perception in Optical See-Through Augmented Reality

    Three experiments were conducted to study the effect of an occluder on the accuracy of near-field depth perception in optical see-through augmented reality (AR). The first experiment replicated the experiment of Edwards et al. [2004]. We found more accurate results than Edwards et al.'s work and did not find a main effect of the occluder, or a two-way interaction between occluder and distance, on the accuracy of observers' depth matching. The second experiment was an updated version of the first, using a within-subject design and a more accurate calibration method. Errors ranged from –5 to 3 mm when the occluder was present and –3 to 2 mm when it was absent, and observers judged the virtual object to be closer after the presentation of the occluder. The third experiment was conducted on three subjects who were depth perception researchers; its results showed significant individual effects.

    Near-Field Depth Perception in Optical See-Through Augmented Reality

    Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, only four previous studies have specifically examined distance perception in AR within reaching distances. Distance perception in augmented reality therefore remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. This experiment found that real objects are perceived more accurately than virtual objects, and that matching is a relatively more accurate distance measure than reaching. Experiment II compared the two distance judgment protocols in real-world and augmented reality environments, with improved proprioceptive and visual feedback. This experiment found that reaching responses in the AR environment became more accurate with improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments. This experiment found nearly accurate distance responses in the consistent and midpoint conditions, and a linear increase in error in the collimated condition. Experiment IV studied the effect of the brightness of the target object on depth judgments. This experiment found that distance responses were shifted towards the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments will help us understand how depth perception operates in augmented reality.

    Augmented reality fonts with enhanced out-of-focus text legibility

    In augmented reality, information is often distributed between real and virtual contexts, and often appears at different distances from the viewer. This raises the issues of (1) context switching, when attention is switched between real and virtual contexts, (2) focal distance switching, when the eye accommodates to see information in sharp focus at a new distance, and (3) transient focal blur, when information is seen out of focus during the interval of focal distance switching. This dissertation research has quantified the impact of context switching, focal distance switching, and transient focal blur on human performance and eye fatigue in both monocular and binocular viewing conditions. Further, this research has developed a novel font that, when seen out of focus, looks sharper than standard fonts. This SharpView font promises to mitigate the effect of transient focal blur. Developing this font has required (1) mathematically modeling out-of-focus blur with Zernike polynomials, which model focal deficiencies of human vision, (2) developing a focus correction algorithm based on total variation optimization, which corrects out-of-focus blur, and (3) developing a novel algorithm for measuring font sharpness. Finally, this research has validated these fonts through simulation and optical camera-based measurement. This validation has shown that, when seen out of focus, SharpView fonts are as much as 40 to 50% sharper than standard fonts. This promises to improve font legibility in many applications of augmented reality.
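The pre-correction idea behind SharpView can be illustrated with a toy one-dimensional sketch: pre-sharpen a glyph profile so that, after defocus blur, it retains more edge energy than an uncorrected profile. This is only an illustration of the principle; the kernel, the unsharp-mask correction, and the gradient-energy metric below are assumed stand-ins, not the dissertation's actual Zernike/total-variation pipeline.

```python
# Toy 1-D illustration of pre-sharpening for out-of-focus viewing.
# All choices here (kernel, gain, metric) are illustrative assumptions.

def convolve(signal, kernel):
    """'Same'-size convolution with zero padding at the borders."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += signal[idx] * w
        out.append(acc)
    return out

def sharpness(signal):
    """Gradient-energy sharpness: sum of squared adjacent differences."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))

# A step edge stands in for a glyph boundary; a small low-pass kernel
# stands in for the defocus point-spread function.
edge = [0.0] * 10 + [1.0] * 10
blur = [0.25, 0.5, 0.25]

# Unsharp masking as a crude pre-correction (the research itself uses
# total variation optimization, which is more principled).
blurred_once = convolve(edge, blur)
presharpened = [e + 1.5 * (e - b) for e, b in zip(edge, blurred_once)]

plain = convolve(edge, blur)          # uncorrected glyph, seen defocused
corrected = convolve(presharpened, blur)  # pre-corrected glyph, seen defocused

print(sharpness(corrected) > sharpness(plain))  # prints True
```

The pre-corrected edge overshoots before blurring, so after defocus it retains a steeper transition, which is the effect the camera-based validation measures on real fonts.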

    Perceived location of virtual content measurement method in optical see through augmented reality

    An important research question for optical see-through AR is: how accurately and precisely can a virtual object's perceived location be measured in three-dimensional space? Previously, a method was developed for measuring the perceived 3D location of virtual objects using the Microsoft HoloLens 1 display. That study found an unexplained rightward perceptual bias on the horizontal plane; most participants were right-eye dominant, consistent with the hypothesis that perceived location is biased in the direction of the dominant eye. In this thesis, a replication study is reported that includes binocular and monocular viewing conditions, recruits an equal number of left- and right-eye-dominant participants, and uses the Microsoft HoloLens 2 display. This replication examined whether the perceived location of virtual objects is biased in the direction of the dominant eye. Results suggest that it is not. Compared to the previous study's findings, overall perceptual accuracy increased, and precision was similar.

    X-ray vision at action space distances: depth perception in context

    Accurate and usable x-ray vision has long been a goal in augmented reality (AR) research and development. X-ray vision, or the ability to comprehend location and object information when such is viewed through an opaque barrier, would be eminently useful in a variety of contexts, including industrial, disaster reconnaissance, and tactical applications. For x-ray vision to be a useful tool in many of these applications, it would need to extend operators' perceptual awareness of the task or environment. The effectiveness with which x-ray vision can do this is of significant research interest and is a determinant of its usefulness in an application context. It is therefore crucial to evaluate the effectiveness of x-ray vision: how does information presented through x-ray vision compare to real-world information? This approach requires narrowing, as x-ray vision suffers from inherent limitations analogous to viewing an object through a window. In both cases, information is presented beyond the local context, exists past an apparently solid object, and is limited by certain conditions. Further, in both cases, the naturally suggestive use cases occur over action space distances. These distances range from 1.5 to 30 meters and represent the area in which observers might contemplate immediate visually directed actions. These actions, simple tasks with a visual antecedent, represent action potentials for x-ray vision; in effect, x-ray vision extends an operator's awareness and ability to visualize these actions into a new context. Thus, this work seeks to answer the question "Can a real window be replaced with an AR window?" The evaluation focuses on perceived object location, investigated through a series of experiments using visually directed actions as experimental measures. This approach leverages established methodology by experimentally analyzing each of several distinct variables on a continuum between real-world depth perception and fully realized x-ray vision. It was found that a real window could not be replaced with an AR window without some loss of depth perception acuity and accuracy. However, no significant difference was found between a target viewed through an opaque wall and a target viewed through a real window.

    The adaptive elements of disparity vergence: Dynamics and directional asymmetries.

    Vergence eye movements alter the angle between the two visual axes, creating changes in binocular fixation distance. They are primarily stimulated by retinal image disparities, but can also be driven by inputs from ocular accommodation (accommodative vergence) and perceived changes in proximity (size). Because of these diverse and complex sensory inputs, the neuro-motor substrates that subserve vergence control possess robust adaptive capabilities to manage interactions with other oculomotor systems (accommodation). This adaptive plasticity allows a high degree of precision in binocular alignment to be maintained throughout life in the face of constantly changing environmental demands. The precise alignment of each eye's fovea is a fundamental requirement for stereopsis and the perception of depth in three dimensions. In a significant portion of the ophthalmic clinical population, the adaptive capacities of vergence are reduced or dysfunctional, leading to difficulty focusing clearly and comfortably at near distances, such as on books, computer screens, and other hand-held devices. Furthermore, new wearable technologies such as virtual and augmented reality increase the demand on the adaptive capacities of vergence by drastically altering the congruency of its sensory inputs. Currently, our understanding of the mechanisms that underlie this adaptive control, and of their behavioral limits, is incomplete. This knowledge gap has led to conjecture in the literature regarding proper rehabilitative therapies for clinical dysfunctions of vergence control and regarding the optimal environmental design parameters that should provide comfortable and compelling user experiences in wearable technologies like VR and AR. The inward (convergence) and outward (divergence) turning of the eyes in response to retinal disparities are controlled by two separate systems and demonstrate significant directional asymmetries in their reflexive response properties. In general, reflexive divergence responses tend to be slower and longer than their convergence counterparts. It is unclear whether the adaptive mechanisms are influenced by these reflexive asymmetries, and it is also unknown whether similar directional differences exist in the different adaptive capacities possessed by vergence. The purpose of this dissertation was to characterize the effects of stimulus direction on the adaptive behavior of disparity-driven vergence eye movements, with the end goal of improving rehabilitation therapies for clinical populations with vergence dysfunction and providing insight for the design and future development of wearable technologies such as virtual and augmented reality. A series of four experiments was conducted to characterize the effect of stimulus direction and the physiological limits of adaptive behavior within the two main disparity-vergence motor controllers, fast-phasic and slow-tonic. In each study, binocular viewing conditions were dichoptic, which allowed retinal disparity to be altered while the accommodative and proximity cues were clamped. Such designs create incongruencies between the sensory stimuli to vergence and thus elicit a much stronger adaptive response than would normally occur when viewing real-world objects. Eye movements were monitored binocularly at 250 Hz with the head-mounted EyeLink 2 video-based infrared eye-tracking system. A total of 14 binocularly normal adult controls and 10 adult participants with dysfunctional convergence control (convergence insufficiency) were recruited for the main studies. Four controls completed the first two studies, 10 additional controls completed the third and fourth studies, and the 10 participants with convergence insufficiency completed the fourth study.
    The results of this dissertation make four significant contributions to the scientific literature on vergence oculomotor control and plasticity. 1) Both the fast-phasic and slow-tonic vergence controllers display directional asymmetries in their general behavior and adaptive responses. 2) Reflexive fast-phasic divergence responses in controls tend to saturate at lower disparity-stimulus amplitudes than convergence under specific viewing conditions. This saturation limit is reached when the primary vergence response amplitude and peak velocity can no longer increase with stimulus amplitude, suggesting saturation in neural recruitment and firing rates. Saturated reflexive vergence responses instead recruit an increased response duration (neural firing time) to produce larger-amplitude responses. 3) Saturation in the fast-phasic divergence mechanism leads to saturation in the speed of slow-tonic vergence adaptation. The function of the underlying reflexive fast-phasic response was found to be associated with the adaptive behavior of the slow-tonic mechanism, suggesting that one drives the other, consistent with model predictions. 4) Convergence responses from individuals with convergence insufficiency are generally indistinguishable from the slower divergence responses of controls. These impaired convergence responses lead to impairment of the adaptive mechanisms underlying both the fast-phasic and slow-tonic controllers. Clinically, these results suggest that rehabilitative therapies for vergence control dysfunctions should primarily target the performance of the fast-phasic reflexive vergence mechanism. This work also suggests that improvements in the adaptive capacities of vergence, known to be the mechanism underpinning symptom reduction in these patient populations, should follow once reflexive fast-phasic responses are normalized. In terms of wearable technology, the generally limited adaptive plasticity demonstrated within divergence responses, compared to convergence, in controls provides a behavioral explanation for the increase in discomfort symptoms when viewing distant objects in virtual reality environments. Future investigations should determine the effects of other disparity-stimulus parameters, such as contrast and spatial frequency, on the adaptive behaviors of both the fast-phasic and slow-tonic mechanisms. Finally, the cerebellum is known to be central to the adaptation of almost every motor system, yet its role in the different adaptive capacities of disparity-vergence control remains unclear. Future studies should aim to characterize this structure's role in the different vergence oculomotor adaptive mechanisms described here.
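The vergence demand discussed in this abstract follows from simple triangulation: the angle between the two visual axes grows as fixation distance shrinks. The sketch below is a back-of-the-envelope illustration, not code from the dissertation; the 63 mm interpupillary distance is an assumed typical value.

```python
import math

def vergence_angle_deg(fixation_distance_m, ipd_m=0.063):
    """Vergence angle in degrees for a target straight ahead at distance d:
    2 * atan(ipd / (2 * d)), with ipd the interpupillary distance."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * fixation_distance_m)))

# Moving fixation from 1 m to 0.25 m roughly quadruples the vergence demand,
# which is the kind of disparity step used to drive reflexive responses.
print(round(vergence_angle_deg(1.0), 2))   # prints 3.61
print(round(vergence_angle_deg(0.25), 2))  # prints 14.36
```

The asymmetry studied here is the direction of the change: a step toward the viewer demands convergence, a step away demands divergence, and only the sign of the disparity differs while the geometry is identical.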

    Research and technology, 1992

    Selected research and technology activities at Ames Research Center, including the Moffett Field site and the Dryden Flight Research Facility, are summarized. These activities exemplify the Center's varied and productive research efforts for 1992.