114 research outputs found

    Distance Perception in Virtual Environment through Head-mounted Displays

    Head-mounted displays (HMDs) are popular and affordable wearable display devices that facilitate an immersive and interactive viewing experience. Numerous studies have reported that people typically underestimate distances in HMDs. This dissertation describes a series of experiments that examined the influence of field of view (FOV) and peripheral vision on distance perception in HMDs, and it aims to provide useful information to HMD manufacturers and software developers for improving the perceptual performance of HMD-based virtual environments. The document is divided into two main parts. The first part describes two experiments that examined distance judgments in Oculus Rift HMDs. Unlike the numerous studies that found significant distance compression, our Experiments I and II, using the Oculus DK1 and DK2, found that people could judge distances near-accurately between 2 and 5 meters. The second part describes four experiments that examined the influence of FOV and human peripheral vision on distance perception in HMDs and explored potential approaches to augmenting peripheral vision in HMDs. In Experiment III, we reconfirmed the peripheral stimulation effect found by Jones et al. using bright peripheral frames. We also discovered that there is no linear correlation between the stimulation and peripheral brightness. In Experiment IV, we examined the interaction between peripheral brightness and distance judgments using peripheral frames with different relative luminances. We found that there exists a brightness threshold, i.e., a minimum brightness level that's required to trigger the peripheral stimulation effect which improves distance judgments in HMD-based virtual environments. In Experiment V, we examined the influence of applying a pixelation effect in the periphery, which simulates the visual experience of having a low-resolution peripheral display around the viewport.
The results showed that adding the pixelated peripheral frame significantly improves distance judgments in HMDs. Lastly, Experiment VI examined the influence of image size and shape on distance perception in HMDs. We found that making the frame thinner to increase the FOV of the imagery improves distance judgments. This result supports the hypothesis that FOV influences distance judgments in HMDs, and it suggests that image shape may have no influence on them.

    A perceptual calibration method to ameliorate the phenomenon of non-size-constancy in heterogeneous VR displays

    The interruption of the action-perception loop in virtual reality (VR) makes understanding the effects of different display factors on spatial perception a challenge. For example, studies have reported a lack of size constancy: the perceived size of an object does not remain constant as its distance increases. This phenomenon is closely related to reports of distance underestimation in VR, whose causes remain unclear. While most efforts to improve spatial cues have addressed display technology and computer graphics, some interest has started to focus on the human side. In this study, we propose a perceptual calibration method that can ameliorate the effects of non-size-constancy in heterogeneous VR displays. The method was validated in a perceptual matching experiment comparing performance between an HTC Vive HMD and a four-wall CAVE system. Results show that perceptual calibration based on interpupillary distance increments can partially resolve the phenomenon of non-size-constancy in VR.
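The direction of the calibration above can be understood from simple stereo-disparity geometry. A minimal sketch of that intuition, assuming a plain geometric model (the function and numbers are mine for illustration; this is not the paper's actual calibration procedure):

```python
# Illustrative stereo geometry only, not the study's method: rendering the
# stereo cameras with a separation k times the user's true interpupillary
# distance (IPD) scales the perceived world by roughly 1/k.

def perceived_scale(rendered_ipd_mm: float, true_ipd_mm: float) -> float:
    """Approximate scale of the perceived world relative to the modeled one."""
    if rendered_ipd_mm <= 0:
        raise ValueError("rendered IPD must be positive")
    return true_ipd_mm / rendered_ipd_mm
```

Under this model, incrementing the rendered IPD above the true IPD shrinks the perceived world, which is the direction of correction one would apply when objects appear too large or too near.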

    Improving everyday computing tasks with head-mounted displays

    The proliferation of consumer-affordable head-mounted displays (HMDs) has brought a rash of entertainment applications for this burgeoning technology, but relatively little research has been devoted to exploring its potential home and office productivity applications. Can the unique characteristics of HMDs be leveraged to improve users’ ability to perform everyday computing tasks? My work strives to explore this question. One significant obstacle to using HMDs for everyday tasks is the fact that the real world is occluded while wearing them. Physical keyboards remain the most performant devices for text input, yet using a physical keyboard is difficult when the user can’t see it. I developed a system for aiding users typing on physical keyboards while wearing HMDs and performed a user study demonstrating the efficacy of my system. Building on this foundation, I developed a window manager optimized for use with HMDs and conducted a user survey to gather feedback. This survey provided evidence that HMD-optimized window managers can provide advantages that are difficult or impossible to achieve with standard desktop monitors. Participants also provided suggestions for improvements and extensions to future versions of this window manager. I explored the issue of distance compression, wherein users tend to underestimate distances in virtual environments relative to the real world, which could be problematic for window managers or other productivity applications seeking to leverage the depth dimension through stereoscopy. I also investigated a mitigation technique for distance compression called minification. I conducted multiple user studies, providing evidence that minification makes users’ distance judgments in HMDs more accurate without causing detrimental perceptual side effects. This work also provided some valuable insight into the human perceptual system. 
Taken together, this work represents valuable steps toward leveraging HMDs for everyday home and office productivity applications. I developed functioning software for this purpose, demonstrated its efficacy through multiple user studies, and gathered feedback for future directions by having participants use this software in simulated productivity tasks.
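Minification, the mitigation technique investigated above, renders the scene with a geometric FOV (GFOV) wider than the display's physical FOV, so imagery subtends smaller visual angles and appears farther away. A minimal sketch of the geometry (the function and gain value are mine for illustration, not the dissertation's implementation):

```python
import math

def minified_gfov_deg(display_fov_deg: float, gain: float) -> float:
    """Geometric FOV to render so that on-screen angular sizes are scaled
    by `gain`; gain < 1 minifies the imagery (objects look smaller and
    farther away). Uses the tangent relation between FOV and image plane."""
    half = math.radians(display_fov_deg / 2)
    return 2 * math.degrees(math.atan(math.tan(half) / gain))
```

For example, a gain of 0.8 on a 100° display corresponds to rendering roughly a 112° GFOV: a wider slice of the virtual world compressed onto the same physical screen.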

    Object-based attentional expectancies in virtual reality

    Modern virtual reality (VR) technology promises to enable neuroscientists and psychologists to conduct ecologically valid experiments while maintaining precise experimental control. In recent studies, however, game engines like Unreal Engine or Unity are used for stimulus creation and data collection, yet game engines do not provide the underlying architecture to measure the timing of stimulus events and behavioral input with the accuracy or precision required by many experiments. Furthermore, it is currently not well understood whether VR and its underlying technology engage the same cognitive processes as a comparable real-world situation. Similarly, little is known about whether experimental findings obtained in a standard monitor-based experiment are comparable to those obtained in VR using a head-mounted display (HMD), or whether the different stimulus devices engage different cognitive processes. The aim of my thesis was to investigate whether modern HMDs affect the early processing of basic visual features differently than a standard computer monitor. In the first project (chapter 1), I developed a new behavioral paradigm to investigate how prediction errors of basic object features are processed. In a series of four experiments, the results consistently indicated that simultaneous prediction errors for unexpected colors and orientations are processed independently at an early level of processing, before object binding comes into play. My second project (chapter 2) examined the accuracy and precision of stimulus timing and reaction time measurements when using Unreal Engine 4 (UE4) in combination with a modern HMD system. My results demonstrate that stimulus durations can be defined and controlled with high precision and accuracy. However, reaction time measurements turned out to be highly imprecise and inaccurate when using UE4's standard application programming interface (API).
Instead, I proposed a new software-based approach to circumvent these limitations. Timing benchmarks confirmed that the method can measure reaction times with precision and accuracy in the millisecond range. In the third project (chapter 3), I directly compared task performance in the paradigm developed in chapter 1 between the original experimental setup and a virtual reality simulation of that experiment. To establish two identical experimental setups, I recreated the entire physical environment in which the experiments took place within VR and blended the virtual replica over the physical lab. As a result, the virtual environment (VE) corresponded not only visually to the physical laboratory but also provided accurate sensory properties in other modalities, such as haptic and acoustic feedback. The results showed comparable task performance in both the non-VR and the VR experiments, suggesting that modern HMDs do not affect the early processing of basic visual features differently than a typical computer monitor.
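The reaction-time problem described above is not specific to UE4: any engine that samples input once per rendered frame quantizes measured reaction times to the frame period. A hedged sketch of that quantization effect (the frame rate and function are illustrative, not the thesis's benchmark code):

```python
import math

def frame_locked_rt_ms(true_rt_ms: float, frame_ms: float = 1000 / 90) -> float:
    """Reaction time as reported by an engine that polls input once per
    frame: the true reaction time is rounded up to the next frame boundary."""
    return math.ceil(true_rt_ms / frame_ms) * frame_ms
```

At 90 Hz each frame lasts about 11.1 ms, so a true 250 ms reaction is reported as roughly 255.6 ms, and the worst-case error is one full frame; a software approach that timestamps input events outside the frame loop avoids this quantization.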

    CHARACTERISTICS OF HEAD MOUNTED DISPLAYS AND THEIR EFFECTS ON SIMULATOR SICKNESS

    Characteristics of head-mounted displays (HMDs) and their effects on simulator sickness (SS) and presence were investigated. Update delay and wide fields of view (FOV) have often been thought to elicit SS. With the exception of Draper et al. (2001), previous research examining FOV has failed to consider image scale factor, the ratio between the physical FOV of the HMD display and the geometric field of view (GFOV) of the virtual environment (VE). The current study investigated the effects of update delay, image scale factor, and peripheral vision on SS and presence when viewing a real-world scene. Participants donned an HMD and performed active head movements to search for objects located throughout the laboratory. Seven of the first 28 participants withdrew from the study due to extreme responses; these participants experienced faint-like symptoms, confusion, ataxia, nausea, and tunnel vision. Thereafter, a hand-rail was provided for participants to grasp while performing the experimental task. A 2×2×2 ANOVA revealed a main effect of peripheral vision, F(1, 72) = 6.90, p = .01, indicating that peak Simulator Sickness Questionnaire (SSQ) scores were significantly higher when peripheral vision was occluded than when it was included. No main effects or interaction effects were revealed on Presence Questionnaire (PQ version 4.0) scores. However, a significant negative correlation between peak SSQ scores and PQ scores, r(77) = -.28, p = .013, was revealed. Participants were also placed into 'sick' and 'not-sick' groups based on a median split of SSQ scores. A chi-square analysis revealed that participants who were exposed to an additional update delay of ~200 ms were significantly more likely to be in the 'sick' group than those who were exposed to no additional update delay.
To reduce the occurrence of SS, some peripheral vision of the external world should be included, and efforts to reduce update delay should continue. Furthermore, participants should be provided with something to grasp while in an HMD VE. Future studies should investigate the critical amounts of peripheral vision and update delay necessary to elicit SS.
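The image scale factor discussed above has a simple definition, the ratio between the display's physical FOV and the rendered geometric FOV; a minimal sketch (the function name is mine):

```python
def image_scale_factor(physical_fov_deg: float, geometric_fov_deg: float) -> float:
    """Ratio of the HMD's physical FOV to the VE's geometric FOV (GFOV).
    1.0 is veridical; values > 1 magnify the imagery, values < 1 minify it."""
    if geometric_fov_deg <= 0:
        raise ValueError("GFOV must be positive")
    return physical_fov_deg / geometric_fov_deg
```

A study that varies physical FOV while holding GFOV fixed therefore also varies image scale factor, which is the confound the abstract says earlier work failed to consider.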

    Distance Perception Through Head-Mounted Displays

    Numerous research studies have shown that people tend to underestimate distances while wearing head-mounted displays (HMDs). We investigated various possible factors affecting the perception of distance in HMDs through multiple studies. Many contributing factors have been identified by researchers over the past decades; however, further investigation is required to better understand this problem. To find a baseline for distance underestimation, we performed a study comparing distance perception in the real world, with a fake headset, and with a see-through HMD. Users underestimated distances while wearing the fake headset or the see-through HMD. The fake headset and see-through HMD produced similar results, while both differed significantly from the real-world results. Since the fake headset and the HMD produced similar underestimation, we focused on the FOV of the headset, a factor common to both conditions. To understand the effects of FOV on distance perception in a virtual environment, we performed a study using a blind-throwing task. Three diagonal FOVs, 60°, 110°, and 200°, were compared with each other. The results showed that people underestimate distances more under restricted FOVs. As this study used static 360° images of a single environment, we next examined whether the results extend to various 3D environments. We performed a mixed-design study comparing the effects of horizontal FOV and vertical FOV on egocentric distance perception in four different realistic VEs. The results indicated more accurate distance judgments with larger horizontal FOV and no significant effect of vertical FOV. More accurate distance judgments were observed in indoor VEs compared to outdoor VEs. Participants also judged distances more accurately in cluttered environments than in uncluttered environments.
These results highlight the importance of the environment in distance-critical VR applications and show that a wider horizontal FOV should be considered for improved distance judgments.

    Scene-motion- and latency-perception thresholds for head-mounted displays

    A fundamental task of an immersive virtual environment (IVE) system is to present images of the virtual world that change appropriately as the user's head moves. Current IVE systems, especially those using head-mounted displays (HMDs), often produce spatially unstable scenes, resulting in simulator sickness, degraded task performance, degraded visual acuity, and breaks in presence. In HMDs, instability resulting from latency is greater than all other causes of instability combined. The primary way users perceive latency in an HMD is through improper motion of scenes that should be stationary in the world. Whereas latency-induced scene motion is well defined mathematically, less is understood about how much scene motion and/or latency can occur without subjects noticing, and how this varies under different conditions. I built a simulated HMD system with zero effective latency, in which no scene motion occurs due to latency, and intentionally inserted artificial scene motion into the virtual environment to determine how much scene motion and/or latency can occur without subjects noticing. I measured perceptual thresholds of scene motion and latency under different conditions across five experiments. Based on the study of latency, head motion, scene motion, and perceptual thresholds, I developed a mathematical model of latency thresholds as an inverse function of peak head-yaw acceleration. Psychophysics studies showed that measured latency thresholds correlate with this inverse function better than with a linear function. The work reported here enables scientists and engineers to measure latency thresholds as a function of head motion under their particular conditions by using an off-the-shelf projector system. Latency requirements can thus be determined before designing HMD systems.
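The shape of the inverse threshold model can be sketched numerically. Assuming a threshold of the form τ = k / a, where a is peak head-yaw acceleration (the constant k below is invented for illustration, not a fitted value from the dissertation):

```python
def latency_threshold_ms(peak_yaw_accel_deg_s2: float, k: float = 15000.0) -> float:
    """Latency (ms) below which latency-induced scene motion should go
    unnoticed, modeled as inversely proportional to peak head-yaw
    acceleration (deg/s^2). k is an illustrative constant, not a fit."""
    if peak_yaw_accel_deg_s2 <= 0:
        raise ValueError("peak acceleration must be positive")
    return k / peak_yaw_accel_deg_s2
```

The qualitative prediction matches the dissertation's finding: the faster the head accelerates, the less latency is tolerable, and tolerance falls off hyperbolically rather than linearly.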

    THE IMPACT OF PRE-EXPERIMENT WALKING ON DISTANCE PERCEPTION IN VR

    While individuals can accurately estimate distances in the real world, this ability is often diminished in virtual reality (VR) simulations, hampering performance across training, entertainment, prototyping, and education domains. To assess distance judgments, the direct blind walking method, in which participants walk blindfolded to targets, is frequently used. Typically, direct blind walking measurements are performed after an initial practice phase in which people become comfortable with walking while blindfolded. Surprisingly, little research has explored how such pre-experiment walking impacts subsequent VR distance judgments. Our initial investigation revealed that increased pre-experiment blind walking reduced distance underestimation, underscoring the importance of detailing these preparatory procedures in research, details that are often overlooked. In a follow-up study, we found that eyes-open walking prior to pre-experiment blind walking did not influence results, while extensive pre-experiment blind walking led to overestimation. Additionally, see-through walking had a slightly greater impact, producing less underestimation than one loop of pre-experiment blind walking. Our research deepens understanding of how pre-experiment methodologies influence distance judgments in VR, guides future research protocols, and elucidates the mechanics of distance estimation within virtual reality.

    X-ray vision at action space distances: depth perception in context

    Accurate and usable x-ray vision has long been a goal in augmented reality (AR) research and development. X-ray vision, the ability to comprehend location and object information when viewed through an opaque barrier, would be eminently useful in a variety of contexts, including industrial, disaster reconnaissance, and tactical applications. For x-ray vision to be a useful tool in many of these applications, it would need to extend operators' perceptual awareness of the task or environment. The effectiveness with which x-ray vision can do this is of significant research interest and is a determinant of its usefulness in an application context. In substance, then, it is crucial to evaluate the effectiveness of x-ray vision: how does information presented through x-ray vision compare to real-world information? This approach requires narrowing, as x-ray vision suffers from inherent limitations analogous to viewing an object through a window. In both cases, information is presented beyond the local context, exists past an apparently solid object, and is limited by certain conditions. Further, in both cases, the naturally suggestive use cases occur over action space distances. These distances range from 1.5 to 30 meters and represent the area in which observers might contemplate immediate visually directed actions. These actions, simple tasks with a visual antecedent, represent action potentials for x-ray vision; in effect, x-ray vision extends an operator's awareness and ability to visualize these actions into a new context. Thus, this work seeks to answer the question "Can a real window be replaced with an AR window?" The evaluation focuses on perceived object location, investigated through a series of experiments using visually directed actions as experimental measures.
This approach leverages established methodology, experimentally analyzing each of several distinct variables on a continuum between real-world depth perception and fully realized x-ray vision. It was found that a real window could not be replaced with an AR window without some loss of depth-perception acuity and accuracy. However, no significant difference was found between a target viewed through an opaque wall and a target viewed through a real window.