
    Near-Field Depth Perception in Optical See-Through Augmented Reality

    Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, only four previous studies have specifically examined distance perception in AR within reaching distances; distance perception in augmented reality therefore remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. This experiment found that real objects are perceived more accurately than virtual objects, and that matching is a more accurate distance measure than reaching. Experiment II compared the two distance judgment protocols in real-world and augmented reality environments with improved proprioceptive and visual feedback. This experiment found that reaching responses in the AR environment became more accurate with improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments. This experiment found nearly accurate distance responses in the consistent and midpoint conditions, and a linear increase in error in the collimated condition. Experiment IV studied the effect of the brightness of the target object on depth judgments. This experiment found that distance responses were shifted towards the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments will help us understand how depth perception operates in augmented reality.
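
    The three accommodative demand conditions in Experiment III can be made concrete in diopters, where demand is the reciprocal of focal distance in meters: a collimated display is focused at optical infinity (0 diopters), a consistent display matches the target's distance, and a midpoint display sits between the extremes of the tested range. The following minimal sketch illustrates the arithmetic; the distances and the exact definition of "midpoint" are illustrative assumptions, not values taken from the document.

        # Accommodative demand, in diopters, for the three display conditions of
        # Experiment III. The reaching-distance range and target distance below
        # are hypothetical; the abstract does not give them.

        def demand_diopters(focal_distance_m: float) -> float:
            """Accommodative demand is the reciprocal of focal distance in meters."""
            return 1.0 / focal_distance_m

        near_m, far_m = 0.34, 0.50              # hypothetical reaching-distance range
        target_m = 0.40                         # hypothetical target distance

        collimated = 0.0                        # focused at optical infinity: 1/inf = 0 D
        consistent = demand_diopters(target_m)  # demand matches the target distance
        midpoint = demand_diopters((near_m + far_m) / 2)

        print(f"collimated: {collimated:.2f} D")
        print(f"consistent: {consistent:.2f} D")
        print(f"midpoint:   {midpoint:.2f} D")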

    The Effects of Object Shape, Fidelity, Color, and Luminance on Depth Perception in Handheld Mobile Augmented Reality

    Depth perception of objects can greatly affect a user's experience of an augmented reality (AR) application. Many AR applications require depth matching of real and virtual objects and may be influenced by depth cues. Color and luminance are depth cues that have traditionally been studied in two-dimensional (2D) objects. However, despite the substantial use of three-dimensional (3D) objects in visual applications, there is little research investigating how the properties of 3D virtual objects interact with color and luminance to affect depth perception. In this paper, we present the results of a paired comparison experiment that investigates the effects of object shape, fidelity, color, and luminance on depth perception of 3D objects in handheld mobile AR. The results of our study indicate that bright colors are perceived as nearer than dark colors for a high-fidelity, simple 3D object, regardless of hue. Additionally, bright red is perceived as nearer than any other color. These effects were not observed for a low-fidelity version of the simple object or for a more complex 3D object. High-fidelity objects had more perceptual differences than low-fidelity objects, indicating that fidelity interacts with color and luminance to affect depth perception. These findings reveal how the properties of 3D models influence the effects of color and luminance on depth perception in handheld mobile AR and can help developers select colors for their applications.
    Comment: 9 pages, in Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 202
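
    Paired comparison data of this kind are commonly analyzed by tallying how often each stimulus is judged nearer across the pairs in which it appears. The following minimal sketch illustrates that tally; the stimulus names and trial records are hypothetical, not data from the paper.

        # Tally for a paired-comparison depth study: each trial records which of
        # two colored stimuli was judged nearer. Trial data are illustrative only.
        from collections import Counter

        trials = [
            ("bright_red", "dark_blue", "bright_red"),     # (stim_a, stim_b, judged_nearer)
            ("bright_red", "bright_green", "bright_red"),
            ("bright_green", "dark_blue", "bright_green"),
            ("bright_green", "dark_blue", "dark_blue"),
        ]

        nearer_counts = Counter(winner for _, _, winner in trials)
        appearances = Counter()
        for stim_a, stim_b, _ in trials:
            appearances[stim_a] += 1
            appearances[stim_b] += 1

        # Proportion of its comparisons each stimulus "won" (was judged nearer).
        for stim, seen in appearances.items():
            print(f"{stim}: {nearer_counts[stim] / seen:.2f}")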

    X-ray vision at action space distances: depth perception in context

    Accurate and usable x-ray vision has long been a goal in augmented reality (AR) research and development. X-ray vision, the ability to comprehend location and object information when it is viewed through an opaque barrier, would be eminently useful in a variety of contexts, including industrial, disaster reconnaissance, and tactical applications. For x-ray vision to be a useful tool in many of these applications, it would need to extend operators' perceptual awareness of the task or environment. How effectively x-ray vision can do this is of significant research interest and is a determinant of its usefulness in an application context. It is therefore crucial to evaluate the effectiveness of x-ray vision: how does information presented through x-ray vision compare to real-world information? This question requires narrowing, as x-ray vision suffers from inherent limitations analogous to viewing an object through a window. In both cases, information is presented beyond the local context, exists past an apparently solid object, and is limited by certain conditions. Further, in both cases, the naturally suggestive use cases occur over action space distances. These distances range from 1.5 to 30 meters and represent the area in which observers might contemplate immediate visually directed actions. Such actions, simple tasks with a visual antecedent, represent action potentials for x-ray vision; in effect, x-ray vision extends an operator's awareness of and ability to visualize these actions into a new context. Thus, this work seeks to answer the question "Can a real window be replaced with an AR window?" The evaluation focuses on perceived object location, investigated through a series of experiments using visually directed actions as experimental measures. This approach leverages established methodology by experimentally analyzing each of several distinct variables on a continuum between real-world depth perception and fully realized x-ray vision. It was found that a real window could not be replaced with an AR window without some loss of depth perception acuity and accuracy. However, no significant difference was found between a target viewed through an opaque wall and a target viewed through a real window.

    The Effect of an Occluder on the Accuracy of Depth Perception in Optical See-Through Augmented Reality

    Three experiments were conducted to study the effect of an occluder on the accuracy of near-field depth perception in optical see-through augmented reality (AR). The first experiment replicated the experiment in Edwards et al. [2004]. We found more accurate results than Edwards et al. did, and found neither a main effect of the occluder nor a two-way interaction between occluder and distance on the accuracy of observers' depth matching. The second experiment was an updated version of the first, using a within-subject design and a more accurate calibration method. Errors ranged from –5 to 3 mm when the occluder was present and from –3 to 2 mm when it was absent, and observers judged the virtual object to be closer after the presentation of the occluder. The third experiment was conducted on three subjects who were depth perception researchers, and showed significant individual effects.

    Design, Assembly, Calibration, and Measurement of an Augmented Reality Haploscope

    A haploscope is an optical system that produces a carefully controlled virtual image. Since the development of Wheatstone's original stereoscope in 1838, haploscopes have been used to measure perceptual properties of human stereoscopic vision. This paper presents an augmented reality (AR) haploscope, which allows virtual objects to be viewed superimposed against the real world. Our lab has used generations of this device to make a careful series of perceptual measurements of AR phenomena, described in publications over the previous 8 years. This paper systematically describes the design, assembly, calibration, and measurement of our AR haploscope. These methods have been developed and improved in our lab over the past 10 years. Despite the 180 years that have elapsed since the original report of Wheatstone's stereoscope, we have not previously found a paper that describes these kinds of details.
    Comment: Accepted and presented at the IEEE VR 2018 Workshop on Perceptual and Cognitive Issues in AR (PERCAR); pre-print version
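
    The viewing conditions a haploscope controls reduce to two quantities of standard stereo geometry: the binocular vergence angle subtended by a centered target, and the accommodative demand in diopters. The sketch below computes both under textbook assumptions (symmetric arms, a nominal interpupillary distance); it illustrates the geometry rather than reproducing details from the paper.

        # Geometry a haploscope must reproduce for a virtual object at distance d:
        # the eyes converge through the binocular vergence angle, and lenses set
        # the accommodative demand. Standard stereo geometry, illustrative numbers.
        import math

        def vergence_deg(distance_m: float, ipd_m: float = 0.064) -> float:
            """Total binocular vergence angle, in degrees, for a centered target."""
            return 2.0 * math.degrees(math.atan((ipd_m / 2.0) / distance_m))

        def accommodation_diopters(distance_m: float) -> float:
            """Accommodative demand in diopters (reciprocal of distance in meters)."""
            return 1.0 / distance_m

        for d in (0.34, 0.50, 1.0):
            print(f"{d:.2f} m -> vergence {vergence_deg(d):.2f} deg, "
                  f"accommodation {accommodation_diopters(d):.2f} D")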

    Efficient Distance Accuracy Estimation of Real-World Environments in Virtual Reality Head-Mounted Displays

    Virtual reality (VR) is a very promising technology with many compelling industrial applications. Although many recent advancements have made VR technology practical for rendering virtual environments, it remains less mature for rendering real environments. Current VR system configurations, developed for rendering virtual environments, fail to adequately address the challenges of capturing and displaying real-world virtual reality. Before these systems can be used in real-life settings, their performance needs to be investigated, more specifically, depth perception and how distances to objects in the rendered scenes are estimated. Perceived depth is influenced by head-mounted displays (HMDs), which inevitably decrease the virtual content's perceived depth. Distances are consistently underestimated in virtual environments (VEs) compared to the real world, and the reason behind this underestimation is still not understood. This thesis investigates a version of this kind of system that, to the best of the author's knowledge, has not been explored by previous research: whereas previous research used computer-generated scenes, this work examines distance estimation in real environments rendered to head-mounted displays, where distance estimation remains among the most challenging and least understood issues. This thesis introduces a dual-camera video feed system displayed through a virtual reality head-mounted display, with two models: a video-based model and a static photo-based model. The purpose is to explore, using a real-world scene rendering system, whether the misjudgment of distances in HMDs could be due to a lack of realism. Distance judgment performance in the real world and in these two VE models was compared using protocols already proven to accurately measure real-world distance estimation. An improved model was then developed that enhances the field of view (FOV) of the displayed scenes to improve distance judgments when displaying real-world VR content on HMDs, mitigating the limited FOV, which is among the leading potential causes of distance underestimation, especially the mismatch between the camera's and the HMD's fields of view. The proposed model uses a set of two cameras to generate the video, instead of the hundreds of input cameras or the tens of cameras mounted on a circular rig used in previous works. Results from the first implementation of this system found that underestimation was smaller when the model was rendered as a static photo than with the live video feed: the video-based (real + HMD) model and the static photo-based (real + photo + HMD) model averaged 80.2% and 81.4% of the actual distance, respectively, compared to real-world estimations averaging 92.4%. The improved approach (real + HMD + FOV) was compared to these two models and showed an improvement of 11%, increasing estimation accuracy from 80% to 91% and reducing the estimation error from 1.29% to 0.56%. These results present strong evidence of the need for novel distance estimation improvement methods for real-world VR content systems and provide effective initial work towards this goal.
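
    The camera-to-HMD field-of-view mismatch the thesis identifies can be illustrated with a simple crop computation: if the camera captures a wider angle than the HMD displays, showing the full frame minifies the scene, and one remedy is to keep only the central portion of the frame whose angular extent matches the HMD's FOV. The thesis does not specify this exact computation; the sketch below is an assumption that illustrates the mismatch arithmetic only.

        # Fraction of a camera frame to keep so that its angular extent matches
        # the HMD's horizontal field of view (pinhole-camera assumption). The FOV
        # values below are hypothetical, not the thesis's hardware specifications.
        import math

        def crop_fraction(camera_fov_deg: float, hmd_fov_deg: float) -> float:
            """Fraction of image width whose angular extent equals the HMD's FOV."""
            return (math.tan(math.radians(hmd_fov_deg) / 2.0)
                    / math.tan(math.radians(camera_fov_deg) / 2.0))

        frac = crop_fraction(camera_fov_deg=120.0, hmd_fov_deg=90.0)
        print(f"keep the central {frac:.0%} of the image width")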

    Head Mounted Display Interaction Evaluation: Manipulating Virtual Objects in Augmented Reality

    Augmented Reality (AR) is getting close to real use cases, which is driving the creation of innovative applications and unprecedented growth in the consumer availability of Head-Mounted Display (HMD) devices. However, at present there is a lack of guidelines, common form factors, and standard interaction paradigms between devices, which has resulted in each HMD manufacturer creating its own specifications. This paper presents the first experimental evaluation of the interaction paradigms of two AR HMDs: the HoloLens v1 (metaphoric interaction) and the Meta2 (isomorphic interaction). We report on precision, interactivity, and usability metrics in an object manipulation task-based user study. Twenty participants took part in this study, and significant differences were found between the devices' interaction paradigms for move tasks, where the isomorphic mapped interaction outperformed the metaphoric mapped interaction in both time to completion and accuracy, while the contrary was found for the resize task. From an interaction perspective, the isomorphic mapped interaction (using the Meta2) was perceived as more natural and usable, with a significantly higher usability score and a significantly lower task-load index. However, when task accuracy and time to completion are key, mixed interaction paradigms need to be considered.
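
    The abstract reports a usability score and a task-load index without naming the instruments; these are commonly the System Usability Scale (SUS) and NASA-TLX. As a reference point, the sketch below shows standard SUS scoring; the instrument identification is an assumption, and the responses are illustrative.

        # Standard System Usability Scale (SUS) scoring: ten items answered on a
        # 1-5 scale; odd items contribute (response - 1), even items (5 - response),
        # and the sum is scaled by 2.5 to a 0-100 range.
        def sus_score(responses: list) -> float:
            assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
            total = sum((r - 1) if i % 2 == 0 else (5 - r)  # 0-based even index = odd item
                        for i, r in enumerate(responses))
            return total * 2.5

        print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # illustrative responses -> 85.0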

    Perceived Location of Virtual Content Measurement Method in Optical See-Through Augmented Reality

    An important research question for optical see-through AR is, “How accurately and precisely can a virtual object’s perceived location be measured in three-dimensional space?” Previously, a method was developed for measuring the perceived 3D location of virtual objects using the Microsoft HoloLens 1 display. That study found an unexplained rightward perceptual bias on the horizontal plane; most participants were right-eye dominant, which is consistent with the hypothesis that perceived location is biased in the direction of the dominant eye. This thesis reports a replication study that includes binocular and monocular viewing conditions, recruits an equal number of left- and right-eye-dominant participants, and uses the Microsoft HoloLens 2 display. The replication examined whether the perceived location of virtual objects is biased in the direction of the dominant eye; results suggest that it is not. Compared to the previous study’s findings, overall perceptual accuracy increased, and precision was similar.
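
    Accuracy and precision are distinct quantities in such measurement studies: accuracy is typically the mean signed error between perceived and actual locations (a rightward bias appears as a positive mean horizontal error), while precision is the spread of those errors. A minimal sketch of that distinction, with illustrative numbers rather than data from the thesis:

        # Accuracy (systematic bias) vs. precision (trial-to-trial variability) for
        # perceived-location errors. Positive horizontal error = rightward bias.
        # The error values below are illustrative only.
        from statistics import mean, stdev

        horizontal_errors_mm = [3.1, 4.4, 2.7, 5.0, 3.6]  # perceived minus actual

        accuracy_mm = mean(horizontal_errors_mm)    # mean signed error (bias)
        precision_mm = stdev(horizontal_errors_mm)  # spread of the errors

        print(f"accuracy (bias): {accuracy_mm:+.1f} mm, precision (SD): {precision_mm:.1f} mm")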