
    Peripheral Visual Information and Its Effect on the Perception of Egocentric Depth in Virtual and Augmented Environments

    ABSTRACT A frequently observed problem in virtual environments is the underestimation of egocentric depth. This problem has been described numerous times and with widely varying degrees of severity. Though considerable progress has been made in modifying observer behavior to compensate for these misperceptions, the question of why these errors exist remains open. The study detailed in this document presents the preliminary findings of a large, between-subjects experiment (N=98) that attempts to identify and quantify the source of a pattern of adaptation and improved accuracy, in the absence of explicit feedback, found in Jones et al.

    Peripheral visual cues and their effect on the perception of egocentric depth in virtual and augmented environments

    The underestimation of depth in virtual environments at medium-field distances is a well-studied phenomenon. However, the degree by which underestimation occurs varies widely from one study to the next, with some studies reporting as much as 68% underestimation in distance and others as little as 6% (Thompson et al. [38] and Jones et al. [14]). In particular, the study detailed in Jones et al. [14] found a surprisingly small underestimation effect in a virtual environment (VE) and no effect in an augmented environment (AE). These are highly unusual results when compared to the large body of existing work in virtual and augmented distance judgments [16, 31, 36–38, 40–43]. The series of experiments described in this document attempted to determine the cause of these unusual results. Specifically, Experiment I aimed to determine if the experimental design was a factor and also to determine if participants were improving their performance throughout the course of the experiment. Experiment II analyzed two possible sources of implicit feedback in the experimental procedures and identified visual information available in the lower periphery as a key source of feedback. Experiment III analyzed distance estimation when all peripheral visual information was eliminated. Experiment IV then illustrated that optical flow in a participant’s periphery is a key factor in facilitating improved depth judgments in both virtual and augmented environments. Experiment V attempted to further reduce cues in the periphery by removing a strongly contrasting white surveyor’s tape from the center of the hallway, and found that participants continued to significantly adapt even when given very sparse peripheral cues. The final experiment, Experiment VI, found that when participants’ views are restricted to the field-of-view of the screen area on the return walk, adaptation still occurs in both virtual and augmented environments.

    Distance Perception in Virtual Environment through Head-mounted Displays

    Head-mounted displays (HMDs) are popular and affordable wearable display devices that facilitate immersive and interactive viewing experiences. Numerous studies have reported that people typically underestimate distances in HMDs. This dissertation describes a series of research experiments that examined the influence of FOV and peripheral vision on distance perception in HMDs, and attempts to provide useful information to HMD manufacturers and software developers to improve the perceptual performance of HMD-based virtual environments. This document is divided into two main parts. The first part describes two experiments that examined distance judgments in Oculus Rift HMDs. Unlike the numerous studies that found significant distance compression, our Experiments I and II using the Oculus DK1 and DK2 found that people could judge distances near-accurately between 2 and 5 meters. In the second part of this document, we describe four experiments that examined the influence of FOV and human periphery on distance perception in HMDs and explored some potential approaches for augmenting peripheral vision in HMDs. In Experiment III, we reconfirmed the peripheral stimulation effect found by Jones et al. using bright peripheral frames. We also discovered that there is no linear correlation between the stimulation and peripheral brightness. In Experiment IV, we examined the interaction between peripheral brightness and distance judgments using peripheral frames with different relative luminances. We found that there exists a brightness threshold; i.e., a minimum brightness level that's required to trigger the peripheral stimulation effect which improves distance judgments in HMD-based virtual environments. In Experiment V, we examined the influence of applying a pixelation effect in the periphery, which simulates the visual experience of having a peripheral low-resolution display around viewports. The result showed that adding the pixelated peripheral frame significantly improves distance judgments in HMDs. Lastly, our Experiment VI examined the influence of image size and shape in HMDs on distance perception. We found that making the frame thinner to increase the FOV of imagery improves distance judgments. The result supports the hypothesis that FOV influences distance judgments in HMDs. It also suggests that image shape may have no influence on distance judgments in HMDs.

    The Effects Of Differing Optical Stimuli On Depth Perception In Virtual Reality

    It is well documented that egocentric depth perception is underestimated in virtual reality more often than not. Many studies have been done to try to understand why this underestimation happens and what variables affect it. While this underestimation can be shown consistently, the degree of underestimation differs strongly from study to study, ranging from as much as 68% to as little as 6% (Jones et al., 2008, 2011; Knapp, 1999; Richardson & Waller, 2007). Many of these same studies use blind walking as a tool to measure depth perception. Because no standardized blind walking method exists for virtual reality, differing blind walking methods may cause differing results. This thesis will explore how small changes in the blind walking procedure affect depth perception. Specifically, we will examine procedures that alter the amount of ambient light that is visible to an observer after performing a blind walk.
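    To make the 68%-versus-6% range concrete, the following is a minimal sketch (not drawn from any of the cited studies; the walked distances are hypothetical) of how percent underestimation is commonly computed from blind-walking responses, where a participant walks without vision to where they judged a previewed target to be:

    ```python
    def percent_underestimation(walked_m, actual_m):
        """Percent by which the walked distance fell short of the target
        distance; positive values indicate underestimation."""
        return (actual_m - walked_m) / actual_m * 100.0

    # Hypothetical responses to a target placed 5 m away:
    print(percent_underestimation(1.6, 5.0))  # severe compression, ~68%
    print(percent_underestimation(4.7, 5.0))  # near-accurate, ~6%
    ```

    The measure is relative, so the same percentage can arise at any target distance; this is why studies at different distances can be compared on a common scale.
    
    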

    Near-Field Depth Perception in Optical See-Through Augmented Reality

    Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, there are only four previous studies that specifically examined distance perception in AR within reaching distances. Therefore, distance perception in augmented reality remains a largely understudied phenomenon. This document presents research in depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine if there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. This experiment found that real objects are perceived more accurately than virtual objects and that matching is a relatively more accurate distance measure than reaching. Experiment II compared the two distance judgment protocols in real-world and augmented reality environments, with improved proprioceptive and visual feedback. This experiment found that reaching responses in the AR environment became more accurate with improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments. This experiment found nearly accurate distance responses in the consistent and midpoint conditions, and a linear increase in error in the collimated condition. Experiment IV studied the effect of the brightness of the target object on depth judgments. This experiment found that distance responses were shifted toward the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments will help us understand how depth perception operates in augmented reality.

    Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation

    Pre-captured immersive environments using omnidirectional cameras provide a wide range of virtual reality applications. Previous research has shown that manipulating the eye height in egocentric virtual environments can significantly affect distance perception and immersion. However, the influence of eye height in pre-captured real environments has received less attention due to the difficulty of altering the perspective after finishing the capture process. To explore this influence, we first propose a pilot study that captures real environments with multiple eye heights and asks participants to judge the egocentric distances and immersion. If a significant influence is confirmed, an effective image-based approach to adapt pre-captured real-world environments to the user's eye height would be desirable. Motivated by the study, we propose a learning-based approach for synthesizing novel views for omnidirectional images with altered eye heights. This approach employs a multitask architecture that learns depth and semantic segmentation in two formats, and generates high-quality depth and semantic segmentation to facilitate the inpainting stage. With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation. Quantitative and qualitative evaluation shows favorable results against state-of-the-art methods, and an extensive user study verifies improved perception and immersion for pre-captured real-world environments.
    Comment: 10 pages, 13 figures, 3 tables, submitted to ISMAR 202

    A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays

    We identify usability challenges facing consumers adopting Virtual Reality (VR) head-mounted displays (HMDs) in a survey of 108 VR HMD users. Users reported significant issues in interacting with, and being aware of, their real-world context when using an HMD. Building upon existing work on blending real and virtual environments, we performed three design studies to address these usability concerns. In a typing study, we show that augmenting VR with a view of reality significantly corrected the performance impairment of typing in VR. We then investigated how much reality should be incorporated and when, so as to preserve users’ sense of presence in VR. For interaction with objects and peripherals, we found that selectively presenting reality as users engaged with it was optimal in terms of performance and users’ sense of presence. Finally, we investigated how this selective, engagement-dependent approach could be applied in social environments, to support the user’s awareness of the proximity and presence of others.