    Near-Field Depth Perception in See-Through Augmented Reality

    This research studied egocentric depth perception in an augmented reality (AR) environment. Specifically, it involved measuring depth perception in the near visual field by using quantitative methods to measure the depth relationships between real and virtual objects. This research involved two goals: first, engineering a depth perception measurement apparatus and related calibration and measuring techniques for collecting depth judgments, and second, testing its effectiveness by conducting an experiment. The experiment compared two complementary depth judgment protocols: perceptual matching (a closed-loop task) and blind reaching (an open-loop task). It also studied the effect of a highly salient occluding surface; this surface appeared behind, coincident with, and in front of virtual objects. Finally, the experiment studied the relationship between dark vergence and depth perception.

    Near-Field Depth Perception in Optical See-Through Augmented Reality

    Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, there are only four previous studies that specifically examined distance perception in AR within reaching distances. Therefore, distance perception in augmented reality remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. This experiment found that real objects are perceived more accurately than virtual objects and that matching is a relatively more accurate distance measure than reaching. Experiment II compared the two distance judgment protocols in real-world and augmented reality environments, with improved proprioceptive and visual feedback. This experiment found that reaching responses in the AR environment became more accurate with improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments. This experiment found nearly accurate distance responses in the consistent and midpoint conditions, and a linear increase in error in the collimated condition. Experiment IV studied the effect of the brightness of the target object on depth judgments. This experiment found that distance responses were shifted toward the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments will help us understand how depth perception operates in augmented reality.
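    The two distance judgment protocols above are typically compared by expressing each judgment as an error relative to the true target distance. The following is a minimal illustrative sketch of that comparison with invented data and variable names; it is not the dissertation's actual analysis code.

```python
# Hypothetical illustration: comparing signed depth-judgment error between
# perceptual matching and blind reaching protocols. All values are invented.
import statistics

# Each trial: (protocol, actual_distance_cm, judged_distance_cm)
trials = [
    ("matching", 40.0, 39.2), ("matching", 50.0, 48.5),
    ("reaching", 40.0, 36.8), ("reaching", 50.0, 45.1),
]

def signed_error_pct(actual, judged):
    """Signed error as a percentage of the actual distance;
    negative values indicate underestimation."""
    return 100.0 * (judged - actual) / actual

by_protocol = {}
for protocol, actual, judged in trials:
    by_protocol.setdefault(protocol, []).append(signed_error_pct(actual, judged))

for protocol, errors in by_protocol.items():
    print(f"{protocol}: mean signed error = {statistics.mean(errors):+.1f}%")
```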

    Phenomenal regression to the real object in physical and virtual worlds

    © 2014, Springer-Verlag London. In this paper, we investigate a new approach to comparing physical and virtual size and depth percepts that captures the involuntary responses of participants to different stimuli in their field of view, rather than relying on their skill at judging size, reaching, or directed walking. We show, via an effect first observed in the 1930s, that participants asked to equate the perspective projections of disc objects at different distances make a systematic error that is both individual in its extent and comparable in the particular physical and virtual setting we have tested. Prior work has shown that this systematic error is difficult to correct, even when participants are aware of its likelihood of occurring. In fact, in the real world, the error only reduces as the available cues to depth are artificially reduced. This makes the effect we describe a potentially powerful, intrinsic measure of VE quality that may ultimately contribute to our understanding of VE depth compression phenomena.
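    The 1930s effect referred to above is phenomenal regression as described by Thouless, whose degree is commonly summarized with a regression index. As a hedged illustration only (the paper may quantify the error differently), the standard Thouless ratio can be written as:

```latex
% Thouless ratio: degree of phenomenal regression to the real object.
% P = the size the participant sets when asked to match projections,
% S = the purely projective (perspective-correct) size,
% R = the real object size.
% TR = 0 would be a perfect perspective match; TR = 1 would be full
% regression to the real object.
\[
  \mathrm{TR} \;=\; \frac{\log P - \log S}{\log R - \log S}
\]
```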

    Using metacognitive monitoring feedback to improve student learning in augmented reality environments

    This research aims to use metacognitive monitoring feedback to improve student learning performance in an augmented reality environment. In this study, the Microsoft HoloLens, a prominent augmented reality device and standalone mobile computer, provided a more realistic augmented reality environment to engineering students. A near-field electromagnetic ranging system collected students' real-time location data while they experienced the augmented reality learning modules. In Phase 1, the study utilized one of the topics in the Ergonomics class, manual material handling. The Phase 1 experiment results showed that retrospective confidence judgments in augmented reality modules can significantly influence the way students learn when the content requires a high level of spatial awareness during learning. Therefore, Phase 2 research considered engineering education specifically related to spatial recognition. For Phase 2, a location-based augmented reality system was developed to improve user interaction. The augmented reality learning module covered biomechanics, one of the Ergonomics class concepts that engineering students find problematic. This new location-based augmented reality system allowed students to immerse themselves in the studying process and improved student engagement in hands-on training in an augmented reality environment. Metacognitive monitoring feedback was another tool applied to improve students' learning performance. Student test scores, confidence levels, answering time, and reviewing time were collected as metrics for performance assessment during the experiment. Overall, the Phase 1 and 2 study outcomes advanced our understanding of students' interactions and the learning content in an augmented reality learning environment. This study also provided a guideline for how engineers should develop valuable learning content in augmented reality environments. Furthermore, using a metacognitive monitoring feedback tool in an augmented reality learning environment is an effective strategy to improve students' academic performance and calibration. Includes bibliographical references (pages 93-108).
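    Since the study reports improving students' calibration, a small illustrative sketch follows. It assumes calibration is summarized as a bias score (mean confidence minus mean performance), which is one common definition and may differ from the measure actually used in this work; the data are invented.

```python
# Illustrative only: summarizing metacognitive calibration as a bias score
# (mean confidence minus mean performance). The study's actual calibration
# measure may differ; all values below are invented.

def calibration_bias(confidences, correctness):
    """confidences: per-item confidence judgments in [0, 1];
    correctness: per-item test outcomes (1 = correct, 0 = incorrect).
    Positive bias indicates overconfidence, negative underconfidence."""
    assert len(confidences) == len(correctness) and confidences
    mean_conf = sum(confidences) / len(confidences)
    mean_perf = sum(correctness) / len(correctness)
    return mean_conf - mean_perf

# Example: a student who is 80% confident on average but answers 50% correctly.
print(f"{calibration_bias([0.9, 0.8, 0.7, 0.8], [1, 0, 1, 0]):+.2f}")  # +0.30
```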

    X-ray vision at action space distances: depth perception in context

    Accurate and usable x-ray vision has long been a goal in augmented reality (AR) research and development. X-ray vision, or the ability to comprehend location and object information when it is viewed through an opaque barrier, would be eminently useful in a variety of contexts, including industrial, disaster reconnaissance, and tactical applications. For x-ray vision to be a useful tool for many of these applications, it would need to extend operators' perceptual awareness of the task or environment. The effectiveness with which x-ray vision can do this is of significant research interest and is a determinant of its usefulness in an application context. In substance, then, it is crucial to evaluate the effectiveness of x-ray vision: how does information presented through x-ray vision compare to real-world information? This approach requires narrowing, as x-ray vision suffers from inherent limitations, analogous to viewing an object through a window. In both cases, information is presented beyond the local context, exists past an apparently solid object, and is limited by certain conditions. Further, in both cases, the naturally suggestive use cases occur over action space distances. These distances range from 1.5 to 30 meters and represent the area in which observers might contemplate immediate visually directed actions. These actions, simple tasks with a visual antecedent, represent action potentials for x-ray vision; in effect, x-ray vision extends an operator's awareness and ability to visualize these actions into a new context. Thus, this work seeks to answer the question "Can a real window be replaced with an AR window?" This evaluation focuses on perceived object location, investigated through a series of experiments using visually directed actions as experimental measures. This approach leverages established methodology by experimentally analyzing each of several distinct variables on a continuum between real-world depth perception and fully realized x-ray vision. It was found that a real window could not be replaced with an AR window without some loss of depth perception acuity and accuracy. However, no significant difference was found between a target viewed through an opaque wall and a target viewed through a real window.

    Efficient Distance Accuracy Estimation Of Real-World Environments In Virtual Reality Head-Mounted Displays

    Virtual reality (VR) is a very promising technology with many compelling industrial applications. Although many advancements have been made recently to deploy and use VR technology with virtual environments, these systems are still less mature when used to render real environments. Current VR system settings, which are developed for rendering virtual environments, fail to adequately address the challenges of capturing and displaying real-world virtual reality. Before these systems can be used in real-life settings, their performance needs to be investigated, more specifically, depth perception and how distances to objects in the rendered scenes are estimated. Perceived depth is influenced by head-mounted displays (HMDs), which inevitably reduce depth perception of the virtual content. Distances are consistently underestimated in virtual environments (VEs) compared to the real world, and the reason behind this underestimation is still not understood. This thesis investigates another version of this kind of system that, to the best of the authors' knowledge, has not been explored by previous research. Previous research used computer-generated scenes; this work examines distance estimation in real environments rendered to head-mounted displays, where distance estimation is among the most challenging issues still under investigation and not fully understood. This thesis introduces a dual-camera video feed system viewed through a virtual reality head-mounted display with two models, a video-based model and a static photo-based model, with the purpose of exploring whether the misjudgment of distances in HMDs could be due to a lack of realism, using a real-world scene rendering system. Distance judgment performance in the real world and in these two evaluated VE models was compared using protocols already proven to accurately measure real-world distance estimation. An improved model was developed based on enhancing the field of view (FOV) of the displayed scenes to improve distance judgments when displaying real-world VR content on HMDs; this mitigates the limited FOV, one of the leading potential causes of distance underestimation, especially the mismatch between the camera's and the HMD's fields of view. The proposed model uses a set of two cameras to generate the video instead of the hundreds of input cameras, or tens of cameras mounted on a circular rig, used in previous work in the literature. First results from the initial implementation of this system showed that underestimation was smaller when the model was rendered as static photo-based than with the live video feed rendering. The video-based (real + HMD) model and the static photo-based (real + photo + HMD) model averaged 80.2% and 81.4% of the actual distance, respectively, compared with real-world estimations that averaged 92.4%. The improved approach (real + HMD + FOV) was compared to these two models and showed an improvement of 11%, increasing estimation accuracy from 80% to 91% and reducing estimation error from 1.29% to 0.56%. These results present strong evidence of the need for novel distance estimation improvement methods for real-world VR content systems and provide effective initial work toward this goal.
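    The accuracy figures quoted above (80.2%, 81.4%, and 92.4% of actual distance) can be read as the mean ratio of judged to actual distance per viewing condition. The sketch below illustrates that computation with invented data; the condition names and the exact aggregation used in the thesis are assumptions.

```python
# Illustrative computation of "percentage of actual distance" per viewing
# condition, in the spirit of the figures quoted above. All distances and
# condition names are hypothetical example data.

judgments = {
    "real_world":       [(3.0, 2.8), (5.0, 4.6)],   # (actual_m, judged_m)
    "video_based_hmd":  [(3.0, 2.4), (5.0, 4.0)],
    "static_photo_hmd": [(3.0, 2.5), (5.0, 4.1)],
}

for condition, pairs in judgments.items():
    ratios = [judged / actual for actual, judged in pairs]
    accuracy_pct = 100.0 * sum(ratios) / len(ratios)
    print(f"{condition}: judged {accuracy_pct:.1f}% of actual distance")
```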

    Peripheral visual cues and their effect on the perception of egocentric depth in virtual and augmented environments

    The underestimation of depth in virtual environments at medium-field distances is a well-studied phenomenon. However, the degree to which underestimation occurs varies widely from one study to the next, with some studies reporting as much as 68% underestimation of distance and others as little as 6% (Thompson et al. [38] and Jones et al. [14]). In particular, the study detailed in Jones et al. [14] found a surprisingly small underestimation effect in a virtual environment (VE) and no effect in an augmented environment (AE). These are highly unusual results when compared to the large body of existing work in virtual and augmented distance judgments [16, 31, 36–38, 40–43]. The series of experiments described in this document attempted to determine the cause of these unusual results. Specifically, Experiment I aimed to determine whether the experimental design was a factor and whether participants were improving their performance over the course of the experiment. Experiment II analyzed two possible sources of implicit feedback in the experimental procedures and identified visual information available in the lower periphery as a key source of feedback. Experiment III analyzed distance estimation when all peripheral visual information was eliminated. Experiment IV then illustrated that optical flow in a participant's periphery is a key factor in facilitating improved depth judgments in both virtual and augmented environments. Experiment V attempted to further reduce cues in the periphery by removing a strongly contrasting white surveyor's tape from the center of the hallway, and found that participants continued to adapt significantly even when given very sparse peripheral cues. The final experiment, Experiment VI, found that when participants' views are restricted to the field of view of the screen area on the return walk, adaptation still occurs in both virtual and augmented environments.

    Head Mounted Display Interaction Evaluation: Manipulating Virtual Objects in Augmented Reality

    Augmented Reality (AR) is getting close to real use cases, which is driving the creation of innovative applications and the unprecedented growth in consumer availability of Head-Mounted Display (HMD) devices. However, at present there is a lack of guidelines, common form factors, and standard interaction paradigms between devices, which has resulted in each HMD manufacturer creating their own specifications. This paper presents the first experimental evaluation of two AR HMDs and their interaction paradigms, namely the HoloLens v1 (metaphoric interaction) and the Meta2 (isomorphic interaction). We report on precision, interactivity, and usability metrics in an object manipulation task-based user study. Twenty participants took part in this study, and significant differences were found between the interaction paradigms of the devices for move tasks, where the isomorphic mapped interaction outperformed the metaphoric mapped interaction in both time to completion and accuracy, while the opposite was found for the resize task. From an interaction perspective, the isomorphic mapped interaction (using the Meta2) was perceived as more natural and usable, with a significantly higher usability score and a significantly lower task-load index. However, when task accuracy and time to completion are key, mixed interaction paradigms need to be considered.
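    The abstract does not name the usability questionnaire; assuming it is the System Usability Scale (SUS), a common choice for this kind of study, a minimal sketch of the standard SUS scoring is shown below.

```python
# Minimal SUS scoring sketch. This assumes the standard 10-item System
# Usability Scale with responses on a 1-5 scale; the paper may have used a
# different instrument.

def sus_score(responses):
    """responses: 10 integers in 1..5, in the standard SUS item order.
    Odd-numbered items are positively worded, even-numbered negatively."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```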