4 research outputs found

    Comparison of Two Methods for Improving Distance Perception in Virtual Reality

    Distance is commonly underperceived in virtual environments (VEs) compared to real environments. Past work suggests that displaying a replica VE based on the real surrounding environment leads to more accurate judgments of distance, but that work has lacked the control conditions necessary to firmly support this conclusion. Other research indicates that walking through a VE with visual feedback improves judgments of distance and size. This study evaluated and compared those two methods for improving perceived distance in VEs. All participants experienced a replica VE based on the real lab. In one condition, participants visually previewed the real lab prior to experiencing the replica VE, and in another condition they did not. Participants performed blind-walking judgments of distance as well as judgments of size in the replica VE before and after walking interaction. Distance judgments were more accurate in the preview condition than in the no-preview condition, but size judgments were unaffected by visual preview. Both distance and size judgments increased after walking interaction, and the improvement was larger for distance than for size judgments. After walking interaction, distance judgments did not differ based on visual preview, and walking interaction led to a larger improvement in judged distance than did visual preview. These data suggest that walking interaction may be more effective than visual preview as a method for improving perceived space in a VE.

    Effects of Avatar Hand-size Modifications on Size Judgments of Familiar and Abstract Objects in Virtual Reality

    University of Minnesota M.S. thesis. June 2019. Major: Computer Science. Advisor: Peter Willemsen. 1 computer file (PDF); vii, 56 pages. Many research studies have investigated spatial understanding within virtual environments, ranging from distance estimation and size judgments to perception of scale. Eventually, this knowledge will help us to create virtual environments that better match our spatial abilities within natural environments. To further understand how people interpret the size of virtual objects, we present an experiment that utilizes a proprioceptive-based size estimation measure designed to elicit a three-dimensional judgment of an object’s size using a box-sizing task. Participants viewed both abstract and familiar objects presented within action space in a virtual environment and were asked to make an axis-aligned box the same size as the object they previously observed. A between-subjects manipulation modified a participant’s avatar hand size to be either 80%, 100%, or 120% of their measured hand size. Results indicate that the avatar hand size manipulation scales various factors of these size judgments in the three dimensions. Additionally, whether an object was abstract or of familiar size produced distinctly different size judgments.
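    To make the manipulation concrete, the following minimal Python sketch shows how a between-subjects hand-scale condition like the one described above might be assigned and applied. It is illustrative only and not the thesis code; the function names, the counterbalancing scheme, and the 18.5 cm measured hand length are assumptions.

        # Illustrative sketch (not the thesis implementation): assign one of the
        # between-subjects hand-scale conditions and compute the rendered avatar
        # hand size from a measured value.
        HAND_SCALE_CONDITIONS = (0.8, 1.0, 1.2)  # 80%, 100%, 120% of measured size

        def assign_condition(participant_id: int) -> float:
            """Counterbalance the three scale conditions across participants."""
            return HAND_SCALE_CONDITIONS[participant_id % len(HAND_SCALE_CONDITIONS)]

        def avatar_hand_size(measured_hand_length_cm: float, scale: float) -> float:
            """Scale the measured hand length to get the avatar hand length."""
            return measured_hand_length_cm * scale

        if __name__ == "__main__":
            measured = 18.5  # hypothetical measured hand length in cm
            scale = assign_condition(participant_id=7)
            print(f"condition x{scale:.1f}: avatar hand = "
                  f"{avatar_hand_size(measured, scale):.2f} cm")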

    Object-based attentional expectancies in virtual reality

    Modern virtual reality (VR) technology promises to enable neuroscientists and psychologists to conduct ecologically valid experiments while maintaining precise experimental control. In recent studies, game engines such as Unreal Engine or Unity are used for stimulus creation and data collection, yet game engines do not provide the underlying architecture to measure the timing of stimulus events and behavioral input with the accuracy or precision required by many experiments. Furthermore, it is currently not well understood whether VR and its underlying technology engage the same cognitive processes as a comparable real-world situation. Similarly, little is known about whether experimental findings obtained in a standard monitor-based experiment are comparable to those obtained in VR using a head-mounted display (HMD), or whether the different stimulus devices engage different cognitive processes. The aim of my thesis was to investigate whether modern HMDs affect the early processing of basic visual features differently than a standard computer monitor. In the first project (chapter 1), I developed a new behavioral paradigm to investigate how prediction errors of basic object features are processed. Across a series of four experiments, the results consistently indicated that simultaneous prediction errors for unexpected colors and orientations are processed independently at an early level of processing, before object binding comes into play. My second project (chapter 2) examined the accuracy and precision of stimulus timing and reaction time measurements when using Unreal Engine 4 (UE4) in combination with a modern HMD system. My results demonstrate that stimulus durations can be defined and controlled with high precision and accuracy. However, reaction time measurements turned out to be highly imprecise and inaccurate when using UE4’s standard application programming interface (API). I therefore proposed a new software-based approach to circumvent these limitations; timing benchmarks confirmed that the method can measure reaction times with millisecond precision and accuracy. In the third project (chapter 3), I directly compared task performance in the paradigm developed in chapter 1 between the original experimental setup and a virtual reality simulation of that experiment. To establish two identical experimental setups, I recreated the entire physical environment in which the experiments took place within VR and blended the virtual replica over the physical lab. As a result, the virtual environment (VE) corresponded not only visually with the physical laboratory but also provided accurate sensory properties in other modalities, such as haptic and acoustic feedback. The results showed comparable task performance in the non-VR and VR experiments, suggesting that modern HMDs do not affect early processing of basic visual features differently than a typical computer monitor.
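    The reaction-time limitation described in chapter 2 arises because a game engine typically only notices input at frame boundaries. The toy Python model below illustrates how frame-locked polling alone quantizes measured reaction times by up to one frame; it is a generic sketch of the measurement problem, not the software-based approach proposed in the thesis, and the 90 Hz refresh rate and simulated reaction-time range are assumptions.

        import math
        import random

        # Toy model: compare a true reaction time with the value a frame-locked
        # input poll would report (the event is only seen at the next frame).
        FRAME_RATE_HZ = 90.0             # assumed HMD refresh rate
        FRAME_DT = 1.0 / FRAME_RATE_HZ   # ~11.1 ms per frame

        def polled_reaction_time(true_rt_s: float) -> float:
            """Reaction time as reported when input is polled once per frame."""
            return math.ceil(true_rt_s / FRAME_DT) * FRAME_DT

        def main() -> None:
            random.seed(0)
            errors_ms = []
            for _ in range(10_000):
                true_rt = random.uniform(0.200, 0.400)        # simulated RT in s
                measured_rt = polled_reaction_time(true_rt)   # frame-quantized RT
                errors_ms.append((measured_rt - true_rt) * 1e3)
            print(f"mean error {sum(errors_ms) / len(errors_ms):.2f} ms, "
                  f"max error {max(errors_ms):.2f} ms")       # up to ~one frame

        if __name__ == "__main__":
            main()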

    Comparison of Two Methods for Improving Distance Perception in Virtual Reality

    Abstract as in the record above. This article is published as Jonathan W. Kelly, Lucia A. Cherep, Brenna Klesel, Zachary D. Siegel, and Seth George. 2018. Comparison of Two Methods for Improving Distance Perception in Virtual Reality. ACM Trans. Appl. Percept. 15, 2, Article 11 (March 2018), 11 pages. doi: 10.1145/3165285. Posted with permission.