9 research outputs found

    Distance Perception in Virtual Environment through Head-mounted Displays

    Head-mounted displays (HMDs) are popular and affordable wearable display devices that facilitate immersive and interactive viewing experiences. Numerous studies have reported that people typically underestimate distances in HMDs. This dissertation describes a series of research experiments that examined the influence of field of view (FOV) and peripheral vision on distance perception in HMDs, and it aims to provide useful information to HMD manufacturers and software developers for improving the perceptual performance of HMD-based virtual environments. The document is divided into two main parts. The first part describes two experiments that examined distance judgments in Oculus Rift HMDs. Unlike numerous studies that found significant distance compression, our Experiments I and II using the Oculus DK1 and DK2 found that people could judge distances near-accurately between 2 and 5 meters. In the second part, we describe four experiments that examined the influence of FOV and the human periphery on distance perception in HMDs and explored potential approaches to augmenting peripheral vision in HMDs. In Experiment III, we reconfirmed the peripheral stimulation effect found by Jones et al. using bright peripheral frames. We also discovered that there is no linear correlation between the stimulation and peripheral brightness. In Experiment IV, we examined the interaction between peripheral brightness and distance judgments using peripheral frames with different relative luminances. We found that there exists a brightness threshold, i.e., a minimum brightness level required to trigger the peripheral stimulation effect that improves distance judgments in HMD-based virtual environments. In Experiment V, we examined the influence of applying a pixelation effect in the periphery, which simulates the visual experience of having a low-resolution peripheral display around the viewport. The result showed that adding the pixelated peripheral frame significantly improves distance judgments in HMDs. Lastly, Experiment VI examined the influence of image size and shape in HMDs on distance perception. We found that making the frame thinner to increase the FOV of the imagery improves distance judgments. This result supports the hypothesis that FOV influences distance judgments in HMDs, and it suggests that image shape may have no influence on distance judgments in HMDs.
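
    The brightness threshold discussed in Experiment IV can be made concrete in terms of relative luminance. The sketch below is a minimal illustration, not code from the dissertation: it computes Rec. 709 relative luminance for a candidate peripheral-frame color and compares it against a threshold; the 0.3 cutoff is a placeholder assumption, not a value reported in the abstract.

```python
def relative_luminance(r, g, b):
    """Rec. 709 relative luminance of a linear RGB color (components in 0..1)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Hypothetical cutoff; the actual threshold would come from the Experiment IV data.
BRIGHTNESS_THRESHOLD = 0.3

def frame_should_trigger_effect(frame_rgb):
    """Return True if a peripheral frame color exceeds the assumed brightness threshold."""
    return relative_luminance(*frame_rgb) >= BRIGHTNESS_THRESHOLD

print(frame_should_trigger_effect((0.8, 0.8, 0.8)))  # bright gray frame -> True
print(frame_should_trigger_effect((0.1, 0.1, 0.1)))  # dark gray frame  -> False
```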

    Simulation Of Virtual Reality Display Characteristics: A Method For The Evaluation Of Motion Perception

    Visual perception in virtual reality devices is a widely researched topic. Many newer experiments compare their results to those of older studies that may have used equipment which is now outdated, and these hardware differences can cause perceptual differences. Such differences can be simulated to a degree in software, provided the capabilities of the current hardware meet or exceed those of the older hardware. I present the HMD Simulation Framework, a software package for the Unity3D engine that allows quick modification of many commonly researched HMD characteristics through the Inspector GUI built into Unity. I also describe a human subjects experiment aimed at identifying perceptual equivalence classes between different sets of headset characteristics. Unfortunately, due to the COVID-19 pandemic, all human subjects research was suspended for safety reasons, and I was unable to collect any data.
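
    The actual framework is a Unity3D package; as a language-neutral sketch of the feasibility constraint the abstract states (an older headset can only be emulated if the current hardware meets or exceeds it), the following Python illustration uses a hypothetical set of characteristic fields, not the framework's real API.

```python
from dataclasses import dataclass

@dataclass
class HMDProfile:
    """Hypothetical description of display characteristics a simulator might vary."""
    horizontal_fov_deg: float        # rendered horizontal field of view
    resolution_per_eye: tuple        # (width, height) in pixels
    refresh_rate_hz: float

def can_simulate(current: HMDProfile, target: HMDProfile) -> bool:
    """An older headset can only be emulated if the current hardware meets or exceeds it."""
    return (current.horizontal_fov_deg >= target.horizontal_fov_deg
            and current.resolution_per_eye[0] >= target.resolution_per_eye[0]
            and current.resolution_per_eye[1] >= target.resolution_per_eye[1]
            and current.refresh_rate_hz >= target.refresh_rate_hz)

# Example: emulating an early consumer HMD on a newer one (placeholder numbers).
old = HMDProfile(horizontal_fov_deg=90, resolution_per_eye=(960, 1080), refresh_rate_hz=75)
new = HMDProfile(horizontal_fov_deg=100, resolution_per_eye=(1440, 1600), refresh_rate_hz=90)
print(can_simulate(new, old))  # True: FOV, resolution, and refresh rate can all be reduced
```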

    A perceptual calibration method to ameliorate the phenomenon of non-size-constancy in heterogeneous VR displays

    Because the action-perception loop is intercepted in virtual reality (VR), understanding the effects of different display factors on spatial perception is a challenge. For example, studies have reported a lack of size constancy: the perceived size of an object does not remain constant as its distance increases. This phenomenon is closely related to the reported underestimation of distances in VR, whose causes remain unclear. Despite efforts to improve spatial cues through display technology and computer graphics, some interest has started to shift toward the human side. In this study, we propose a perceptual calibration method that can ameliorate the effects of non-size-constancy in heterogeneous VR displays. The method was validated in a perceptual matching experiment comparing performance between an HTC Vive HMD and a four-wall CAVE system. Results show that perceptual calibration based on interpupillary distance increments can partially resolve the phenomenon of non-size-constancy in VR.
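
    The abstract does not give the calibration formula. As a minimal sketch of the simplified stereo geometry that motivates IPD-based calibration (an assumption for illustration, not the authors' method), for a fixed on-screen disparity the perceived depth scales roughly with the ratio of the viewer's physical IPD to the rendered camera separation.

```python
def perceived_depth(rendered_depth_m, rendered_ipd_m, physical_ipd_m):
    """Simplified disparity model: a point rendered at depth d with camera separation e_r
    produces the same screen disparity as a real point at depth d * e_p / e_r for a viewer
    whose eyes are separated by e_p. Ignores vergence-accommodation and other cues."""
    return rendered_depth_m * physical_ipd_m / rendered_ipd_m

# Rendering with a slightly larger camera separation than the viewer's physical IPD
# compresses perceived depth; a calibration procedure could search for the IPD
# increment that makes a matched object appear at its intended size and distance.
print(perceived_depth(5.0, rendered_ipd_m=0.068, physical_ipd_m=0.063))  # ~4.63 m
```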

    Blind Direct Walking Distance Judgment Research: A Best Practices Guide

    Over the last 30 years, Virtual Reality (VR) research has shown that distance perception in VR is compressed compared to the real world. The full reason for this is still unknown. Though many experiments have been run to study the underlying reasons for this compression, often with similar procedures, the experimental details either show significant variation between experiments or go unreported. This makes it difficult to accurately repeat or compare experiments, and it negatively impacts new researchers trying to learn and follow current best practices. In this paper, we present a review of past research and of the details that are typically left unreported. Using this review and the practices of my advisor as evidence, we suggest a standard to assist researchers in performing quality research on blind direct walking distance judgments in VR.
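
    Blind direct walking studies commonly summarize performance as the ratio of walked distance to target distance, where 1.0 is veridical and values below 1.0 indicate compression. The sketch below shows that per-participant summary; the variable names and numbers are illustrative, not data from the paper.

```python
from statistics import mean

def judgment_ratios(trials):
    """trials: iterable of (target_distance_m, walked_distance_m) pairs."""
    return [walked / target for target, walked in trials]

# Illustrative trials for one participant (not real data).
trials = [(3.0, 2.4), (4.5, 3.8), (6.0, 4.9)]
ratios = judgment_ratios(trials)
print(mean(ratios))  # ~0.82, i.e. distances judged at about 82% of their true value
```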

    The Effects of Head-Centric Rest Frames on Egocentric Distance Perception in Virtual Reality

    It has been shown through several research investigations that users tend to underestimate distances in virtual reality (VR). Virtual objects that appear close to users wearing a head-mounted display (HMD) might be located at a farther distance in reality. This discrepancy between the actual distance and the distance observed by users in VR has been found to hinder users from benefiting from the full immersive VR experience, and several efforts have been directed toward finding the causes and developing tools that mitigate this phenomenon. One hypothesis that stands out in the field of spatial perception is the rest frame hypothesis (RFH), which states that visual frames of reference (RFs), defined as fixed reference points of view in a virtual environment (VE), contribute to minimizing sensory mismatch. RFs have been shown to promote better eye-gaze stability and focus, reduce VR sickness, and improve visual search, along with other benefits. However, their effect on distance perception in VEs has not been evaluated. To explore and better understand the potential effects that RFs can have on distance perception in VR, we used a blind walking task to examine the effect of three head-centric RFs (a mesh mask, a nose, and a hat) on egocentric distance estimation. We performed a mixed-design study in which we compared the effect of each of our chosen RFs across different environmental conditions and target distances in different 3D environments. We found that at near and mid-field distances, certain RFs can improve the user's distance estimation accuracy and reduce distance underestimation. Additionally, we found that participants judged distance more accurately in cluttered environments than in uncluttered environments. Our findings show that the characteristics of the 3D environment are important in distance-estimation-dependent tasks in VR and that the addition of head-centric RFs, a simple avatar augmentation method, can lead to meaningful improvements in distance judgments, user experience, and task performance in VR.
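
    The abstract does not describe how the rest frames were implemented. Under the common convention that a head-centric rest frame is simply an object locked to the head pose, a minimal sketch looks like the following; the names, the nose offset, and the -Z-forward/Y-up head-frame convention are assumptions for illustration.

```python
import numpy as np

def head_locked_pose(head_world_from_head: np.ndarray, rf_head_from_rf: np.ndarray) -> np.ndarray:
    """Compose the tracked head pose with a constant head-local offset so the rest frame
    (e.g. a virtual nose or hat brim) stays fixed relative to the viewpoint every frame."""
    return head_world_from_head @ rf_head_from_rf

# Constant offset for a hypothetical virtual nose: 3 cm down, 2 cm forward in head space.
nose_offset = np.eye(4)
nose_offset[:3, 3] = [0.0, -0.03, -0.02]   # assuming a -Z-forward, Y-up head frame

head_pose = np.eye(4)                      # identity stands in for the tracked head pose
print(head_locked_pose(head_pose, nose_offset)[:3, 3])  # nose stays at the same head-relative spot
```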

    User experience of architectural detailing in virtual urban environment

    Architecture and urban design disciplines adhere strongly to the use of representations as tools to aid the decision-making process. As it is almost impossible to replicate environments at full scale, both physical and digital representations are restricted by the notions of scale and level of detail. These notions are now challenged by the emergence of virtual reality (VR) technology, which allows architects to work with full-scale virtual environments (VEs). However, the taxonomy of architectural representations in VR is not properly defined, as discussions in academia are mostly concerned with creating realistic impressions of space rather than the operational side of different architectural detailing. Thus, in recognizing the operational dimensions of VEs in VR, it is vital to examine the influence of different architectural detailing on the legibility of VEs. This study aimed to suggest a guideline for users’ experience of architectural detailing in a VE for a large-scale urban simulation, and it was executed as an experimental simulation study. A total of N=96 respondents were divided into four treatments, with n=24 respondents in each VE with a unique level of architectural detailing. They answered questionnaire surveys and drew cognitive maps after navigating the VEs in VR. The analysis methods used were primarily content analysis, the Kruskal-Wallis H test, and one-way ANOVA. The first analysis phase was environment-specific and the second phase was route- and point-specific; in the third phase, the findings from the previous phases were triangulated. The most and the least legible VEs were established according to different abilities of interpreting VEs. The operational dimensions of the VEs were established based on the deconstructed architectural detail components, namely ‘geometric extrusion’ and ‘distinction’, as the factors influencing the legibility of VEs. The operational dimensions of each VE were synthesized based on various criteria derived from the abilities of interpreting VEs. Based on the statistically significant results, the criteria were reduced to ‘understanding VE’ and ‘recalling VE’, in that order. In conclusion, architectural detailing has some influence on legibility, but only with regard to these two criteria. Operational dimensions were also established for each criterion, learned from the cognitive knowledge data: the first is for tasks within one viewpoint, the second for linear navigation, and the last for full-fledged virtual exploration. This thesis also proposes two main guidelines for the user experience of architectural detailing in urban VEs, to be used by architects and users in the associated domain.
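
    The abstract names the Kruskal-Wallis H test and one-way ANOVA as the comparisons across the four treatment groups (n=24 each). The sketch below shows how such a comparison is typically run; the legibility scores are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder legibility scores for the four VE treatments (n=24 each); real values
# would come from the questionnaire surveys and cognitive-map scoring described above.
treatments = [rng.normal(loc=mu, scale=1.0, size=24) for mu in (5.0, 5.4, 6.1, 4.8)]

h_stat, h_p = stats.kruskal(*treatments)     # non-parametric comparison across groups
f_stat, f_p = stats.f_oneway(*treatments)    # parametric one-way ANOVA counterpart

print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {h_p:.4f}")
print(f"One-way ANOVA  F = {f_stat:.2f}, p = {f_p:.4f}")
```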

    Minification affects action-based distance judgments in Oculus Rift HMDs

    Distance perception is a crucial component for many virtual reality applications, and numerous studies have shown that egocentric distances are judged to be compressed in head-mounted display (HMD) systems. Geometric minification, a technique in which the graphics are rendered with a field of view that is larger than the HMD's field of view, is one known method of eliminating this distance compression [Kuhl et al. 2009; Zhang et al. 2012]. This study uses direct blind walking to determine how minification might impact distance judgments in the Oculus Rift HMD, which has a significantly larger FOV than the displays used in previous minification studies. Our results show that people were able to make accurate distance judgments in a calibrated condition and that geometric minification causes people to overestimate distances. Since this study shows that minification can impact wide-FOV displays such as the Oculus, we discuss how it may be necessary to use calibration techniques that are more thorough than those described in this paper. © 2014 ACM
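
    The abstract defines geometric minification as rendering with a larger FOV than the display's physical FOV. One common way to quantify the resulting image scaling is the ratio of the tangents of the half-angles; the sketch below illustrates that relation, with placeholder FOV values rather than the ones used in the study.

```python
import math

def minification_factor(display_fov_deg: float, rendered_fov_deg: float) -> float:
    """Scale applied to on-screen imagery when rendering with rendered_fov_deg but
    displaying through optics with display_fov_deg; values below 1.0 mean minification."""
    return math.tan(math.radians(display_fov_deg) / 2) / math.tan(math.radians(rendered_fov_deg) / 2)

# Placeholder values: a 90-degree display showing imagery rendered with a 110-degree FOV.
print(round(minification_factor(90, 110), 3))  # ~0.7: imagery drawn ~30% smaller than calibrated
```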