183 research outputs found

    The Underestimation Of Egocentric Distance: Evidence From Frontal Matching Tasks

    There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (a gain of 1.5). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance, with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
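The power-function characterization in this abstract can be illustrated with a small fitting sketch (the distances and compression values below are hypothetical, chosen only to mirror the reported exponents of about 1 and 0.67; they are not the study's data):

```python
import numpy as np

def fit_power_function(physical, matched):
    """Fit matched = a * physical**b by linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(physical), np.log(matched), 1)
    return np.exp(log_a), b

# Illustrative values only: egocentric matches follow a roughly constant
# compression (exponent near 1), while exocentric matches compress more
# steeply with distance (exponent near 0.67).
physical = np.array([3.0, 5.0, 8.0, 12.0, 20.0])  # metres (hypothetical)
egocentric = 0.7 * physical ** 1.0    # ~30% constant underestimation
exocentric = 1.0 * physical ** 0.67

a_ego, b_ego = fit_power_function(physical, egocentric)
a_exo, b_exo = fit_power_function(physical, exocentric)
```

Fitting in log-log space is the standard way to recover a power-function exponent from matching data, since log(a·d^b) = log a + b·log d is linear in log d.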

    The Effect of Anthropometric Properties of Self-Avatars on Action Capabilities in Virtual Reality

    The field of Virtual Reality (VR) has seen a steady exponential uptake in the last decade and is being continuously incorporated into areas of popular interest like healthcare, training, recreation and gaming. This steady upward trend and prolonged popularity has resulted in numerous extravagant virtual environments, some that aim to mimic real-life experiences like combat training, while others intend to provide unique experiences that may otherwise be difficult to recreate, like flying over ancient Egypt as a bird. These experiences often showcase highly realistic graphics, intuitive interactions and unique avatar embodiment scenarios with the help of various tracking sensors, high-definition graphic displays, sound systems, etc. The literature suggests that estimates and affordance judgments in VR scenarios such as the ones described above are affected by the properties and the nature of the avatar embodied by the user. Therefore, to provide users with the finest experiences it is crucial to understand the interaction between the embodied self and the action capabilities afforded by it in the surrounding virtual environment. In a series of studies aimed at exploring the effect of gender-matched, body-scaled self-avatars on the user's perception, we investigate the effect of self-avatars on the perception of the size of objects in an immersive virtual environment (IVE) and how this perception affects the actions one can perform as compared to the real world. In the process, we make use of newer tracking technology and graphic displays to investigate the perceived differences between real-world environments and their virtual counterparts to understand how the spatial properties of the environment and the embodied self-avatars affect affordances by means of passability judgments.
We describe techniques for creating and mapping VR environments onto their real-world counterparts, and for creating gender-matched, body-scaled self-avatars that provide real-time full-body tracking. The first two studies investigate how newer graphical displays and off-the-shelf tracking devices can be utilized to create salient gender-matched, body-scaled self-avatars, and their effect on the judgment of passability as a result of the embodied body schema. The study involves creating complex scripts that automate the process of mapping virtual worlds onto their real-world counterparts within a 1 cm margin of error, and the creation of self-avatars that match the height, limb proportions, and shoulder width of the participant using tracking sensors. The experiment involves making judgments about the passability of an adjustable doorway in the real world and in a virtual to-scale replica of the real-world environment. The results demonstrated that the perception of affordances in IVEs is comparable to the real world, but the behavior leading to it differs in VR. Also, the body-scaled self-avatars generated provide salient information, yielding performance similar to the real world. Several insights and guidelines related to creating veridical virtual environments and realistic self-avatars emerged from this effort. The third study investigates how the presence of body-scaled self-avatars affects the perception of the size of virtual handheld objects, and the influence of the person-plus-virtual-object system created by lifting the said virtual object on passability. This is crucial to understand, as VR simulations now often utilize self-avatars that carry objects while maneuvering through the environment. How users interact with these handheld objects can influence what they do in critical scenarios where split-second decisions can change the outcome, such as combat training, role-playing games, first-person shooting, thrilling rides, physiotherapy, etc.
It has also been reported that the avatar itself can influence the perception of the size of virtual objects, in turn influencing action capabilities. There is ample research on different interaction techniques for manipulating objects in a virtual world, but the question of how the objects affect our action capabilities upon interaction remains unanswered, especially when the haptic feedback associated with holding a real object is mismatched or missing. The study investigates this phenomenon by having participants interact with virtual objects of different sizes and make frontal and lateral passability judgments at an adjustable aperture, similar to the first experiment. The results suggest that the presence of self-avatars significantly affects affordance judgments. Interestingly, frontal and lateral judgments in IVEs appear to be similar, unlike in the real world. Investigating the concept of the embodied body schema and its influence on action capabilities further, the fourth study looks at how embodying self-avatars that may vary slightly from one's real-world body affects performance and behavior in dynamic affordance scenarios. In this particular study, we change the eye height of the participants in the presence or absence of self-avatars that are either bigger, smaller, or the same size as the participant. We then investigate how this change in eye height and in the anthropometric properties of the self-avatar affects their judgments when crossing streets with oncoming traffic in virtual reality. We also evaluate any changes in perceived walking speed as a result of embodying altered self-avatars. The findings suggest that the presence of self-avatars results in safer crossing behavior; however, scaling the eye height or the avatar does not seem to affect perceived walking speed. A detailed discussion of all the findings can be found in the manuscript.
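The passability judgments described in these studies are commonly formalized as a threshold on the aperture-to-shoulder-width ratio. A minimal sketch of that model (the 1.3 critical ratio is a value reported for real-world frontal walking, used here purely as an illustrative default, not as this dissertation's result):

```python
def judged_passable(aperture_width, shoulder_width, critical_ratio=1.3):
    """Threshold model of frontal passability.

    An aperture is judged passable when its width exceeds the walker's
    shoulder width by a critical ratio. The 1.3 default is a commonly
    cited real-world value; it is an illustrative assumption here.
    """
    return aperture_width / shoulder_width >= critical_ratio

# Under this model, a 50 cm shoulder width needs roughly a 65 cm doorway:
print(judged_passable(0.70, 0.50))  # ratio 1.4 -> passable
print(judged_passable(0.60, 0.50))  # ratio 1.2 -> not passable
```

Body-scaled self-avatars matter in this framing because the denominator (perceived shoulder width) is supplied by the embodied body schema rather than measured directly.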

    Large Perceptual Distortions Of Locomotor Action Space Occur In Ground-Based Coordinates: Angular Expansion And The Large-Scale Horizontal-Vertical Illusion

    What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with the differential angular exaggerations previously measured for elevation and azimuth in locomotor space. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
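The 1.5 elevation gain reported above implies a specific pattern of ground-distance underestimation under simple ground-plane geometry. A minimal sketch (the eye height and distance values are hypothetical, and the geometry d = h / tan(declination) is our illustrative assumption, not the paper's model):

```python
import math

def perceived_ground_distance(eye_height, physical_distance, gain=1.5):
    """Perceived distance implied by exaggerating gaze declination by `gain`.

    Assumes simple ground-plane geometry, d = h / tan(declination);
    the 1.5 elevation gain is from the abstract, the rest is illustration.
    """
    true_declination = math.atan2(eye_height, physical_distance)
    return eye_height / math.tan(gain * true_declination)

# A 1.5 gain on perceived declination compresses a 5 m ground distance
# (for a 1.6 m eye height) to roughly 3.2 m:
d = perceived_ground_distance(eye_height=1.6, physical_distance=5.0)
```

With gain = 1.0 the function returns the physical distance unchanged, which makes the role of the angular exaggeration explicit.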

    Blind Direct Walking Distance Judgment Research: A Best Practices Guide

    Over the last 30 years, Virtual Reality (VR) research has shown that distance perception in VR is compressed as compared to the real world. The full reason for this is yet unknown. Though many experiments have been run to study the underlying reasons for this compression, often with similar procedures, the experimental details either show significant variation between experiments or go unreported. This makes it difficult to accurately repeat or compare experiments, and it hinders new researchers trying to learn and follow current best practices. In this paper, we present a review of past research and of the details that typically go unreported. Using this review and the practices of the author's advisor as evidence, we suggest a standard to assist researchers in performing quality research on blind direct walking distance judgments in VR.

    Investigating Embodied Interaction in Near-Field Perception-Action Re-Calibration on Performance in Immersive Virtual Environments

    Immersive Virtual Environments (IVEs) are becoming more accessible and more widely utilized for training. Previous research has shown that the matching of visual and proprioceptive information is important for calibration. Many state-of-the-art VR systems are created for training users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force de-coupling of visual and proprioceptive information due to interference, latency, and tracking error. It has also been suggested that closed-loop feedback of travel and locomotion in an IVE can overcome compression of visually perceived depth at medium-field distances in the virtual world [33, 47]. Very few experiments have examined the carryover effects of multi-sensory feedback in IVEs during manual dexterous 3D user interaction in overcoming distortions in near-field or interaction-space depth perception, or the relative importance of visual and proprioceptive information in calibrating users' distance judgments. In the first part of this work, we examined the recalibration of movements when the visually reached distance is scaled differently than the physically reached distance. We present an empirical evaluation of how visually distorted movements affect users' reach to near-field targets in an IVE. In a between-subjects design, participants provided manual reaching distance estimates during three sessions: a baseline measure without feedback (open-loop distance estimation), a calibration session with visual and proprioceptive feedback (closed-loop distance estimation), and a post-interaction session without feedback (open-loop distance estimation).
Subjects were randomly assigned to one of three visual feedback conditions in the closed-loop session, during which they reached to the target while holding a tracked stylus: i) a Minus condition (-20% gain), in which the visual stylus appeared at 80% of the distance of the physical stylus; ii) a Neutral condition (0%, or no gain), in which the visual stylus was co-located with the physical stylus; and iii) a Plus condition (+20% gain), in which the visual stylus appeared at 120% of the distance of the physical stylus. In all conditions there was evidence of visuo-motor calibration, in that users' accuracy in physically reaching to the target locations improved over trials. Scaled visual feedback was shown to calibrate distance judgments within an IVE, with estimates being farthest in the post-interaction session after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing farther (Plus condition). The same pattern was observed during closed-loop physical reach responses: participants generally tended to reach farther in the Minus condition and closer in the Plus condition to the perceived location of the targets, as compared to the Neutral condition, in which participants' physical reach was more accurate to the perceived location of the target. We then characterized the properties of human reach motion in the presence or absence of visuo-haptic feedback in real environments and IVEs within a participant's maximum arm reach. Our goal was to understand how physical reaching actions to the perceived location of targets, in the presence or absence of visuo-haptic feedback, differ between real and virtual viewing conditions. Typically, participants reach to the perceived location of objects in the 3D environment to perform selection and manipulation actions during 3D interaction in applications such as virtual assembly or rehabilitation.
In these tasks, participants typically have distorted perceptual information in the IVE as compared to the real world, in part due to technological limitations such as a limited visual field of view, resolution, latency, and jitter. In an empirical evaluation, we asked the following questions: i) how do the perceptual differences between the virtual and real world affect our ability to accurately reach to the locations of 3D objects, and ii) how do the motor responses of participants differ between the presence and absence of visual and haptic feedback? We examined factors such as the velocity and distance of physical reaching behavior between the real world and the IVE, both in the presence and absence of visuo-haptic information. The results suggest that physical reach responses vary systematically between real and virtual environments, especially in situations involving the presence or absence of visuo-haptic feedback. The implications of our study provide a methodological framework for the analysis of reaching motions for selection and manipulation with novel 3D interaction metaphors, and for successfully characterizing visuo-haptic versus non-visuo-haptic physical reaches in virtual and real-world situations. While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimation have yet to be investigated. Thus, we investigated the effect of the visual fidelity of the self-avatar in enhancing the user's depth judgments, reach boundary perception, and the properties of physical reach motion. Previous research has demonstrated that a self-avatar representation of the user enhances the sense of presence [37], and even a static notion of an avatar can improve distance estimation at far distances [59, 48]. In this study, performance with a virtual avatar was also compared to real-world performance.
Three levels of fidelity were tested: 1) an immersive self-avatar with realistic limbs, 2) a low-fidelity self-avatar showing only joint locations, and 3) an end-effector only. There were four primary hypotheses. First, we hypothesized that the mere presence of a self-avatar or end-effector would calibrate users' interaction-space depth perception in an IVE; participants' distance judgments would therefore improve after the calibration phase regardless of the self-avatar's visual fidelity. Second, the magnitude of the change from pre-test to post-test would differ significantly based on the visual detail of the self-avatar presented to the participants (self-avatar vs. low-fidelity self-avatar and end-effector). Third, we predicted that distance estimation accuracy would be highest in the immersive self-avatar condition and lowest in the end-effector condition. Fourth, we predicted that the properties of physical reach responses would vary systematically between the visual fidelity conditions. The results suggest that reach estimations become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance, as compared to the low-fidelity and end-effector conditions. In all conditions, reach estimations also became more accurate after receiving feedback during the calibration phase. Lastly, we examined factors such as path length, time to complete the task, and the average velocity and acceleration of the physical reach motion, and compared all the IVE conditions with the real world. The results suggest that physical reach responses vary systematically between the VR viewing conditions and the real world.
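The three closed-loop gain conditions described in this abstract amount to a simple scaling of the rendered stylus distance. A minimal sketch (the function name and the 40 cm example reach are hypothetical, introduced only for illustration):

```python
def visual_reach_distance(physical_distance, gain):
    """Distance at which the visual stylus is rendered for a physical reach.

    gain = -0.20 (Minus), 0.0 (Neutral), +0.20 (Plus), matching the
    closed-loop calibration conditions described in the abstract.
    """
    return physical_distance * (1.0 + gain)

# For a hypothetical 40 cm physical reach, the visual stylus appears at
# 32 cm (Minus), 40 cm (Neutral), or 48 cm (Plus):
for name, gain in [("Minus", -0.20), ("Neutral", 0.0), ("Plus", 0.20)]:
    print(name, round(visual_reach_distance(0.40, gain), 2))
```

The reported aftereffects follow the usual recalibration logic: adapting to visually nearer feedback (Minus) leads to farther open-loop reaches afterwards, and vice versa.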

    Effects of Clutter on Egocentric Distance Perception in Virtual Reality

    To assess the impact of clutter on egocentric distance perception, we performed a mixed-design study with 60 participants in four different virtual environments (VEs) with three levels of clutter. Additionally, we compared indoor and outdoor VE characteristics and the HMD's field of view (FOV). The participants wore a backpack computer and a wide-FOV head-mounted display (HMD) as they blind-walked towards three distinct targets at distances of 3 m, 4.5 m, and 6 m. The HMD's FOV was programmatically limited to 165°×110°, 110°×110°, or 45°×35°. The results showed that increased clutter in the environment led to more precise distance judgments and less underestimation, independent of the FOV. In comparison to outdoor VEs, indoor VEs showed more accurate distance judgments. Additionally, participants made more accurate judgments while looking at the VEs through wider FOVs.
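The programmatic FOV limiting described above can be approximated as a rectangular angular mask. The sketch below is a simplified stand-in (the actual restriction presumably masks the rendered image; this direction-level predicate is our illustrative assumption):

```python
def in_fov(azimuth_deg, elevation_deg, h_fov_deg, v_fov_deg):
    """True if a viewing direction falls inside a rectangular FOV mask.

    A simplified model of the programmatic FOV limiting described in the
    abstract (165x110, 110x110, or 45x35 degrees).
    """
    return (abs(azimuth_deg) <= h_fov_deg / 2.0
            and abs(elevation_deg) <= v_fov_deg / 2.0)

# A target 60 degrees off-axis is visible only in the widest condition:
print(in_fov(60, 0, 165, 110))  # True  (60 <= 82.5)
print(in_fov(60, 0, 110, 110))  # False (60 > 55)
print(in_fov(60, 0, 45, 35))    # False (60 > 22.5)
```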

    Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation

    Pre-captured immersive environments using omnidirectional cameras provide a wide range of virtual reality applications. Previous research has shown that manipulating the eye height in egocentric virtual environments can significantly affect distance perception and immersion. However, the influence of eye height in pre-captured real environments has received less attention due to the difficulty of altering the perspective after finishing the capture process. To explore this influence, we first propose a pilot study that captures real environments at multiple eye heights and asks participants to judge the egocentric distances and immersion. If a significant influence is confirmed, an effective image-based approach to adapt pre-captured real-world environments to the user's eye height would be desirable. Motivated by the study, we propose a learning-based approach for synthesizing novel views for omnidirectional images with altered eye heights. This approach employs a multitask architecture that learns depth and semantic segmentation in two formats, and generates high-quality depth and semantic segmentation to facilitate the inpainting stage. With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation. Quantitative and qualitative evaluation shows favorable results against state-of-the-art methods, and an extensive user study verifies improved perception and immersion for pre-captured real-world environments.
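The core geometric difficulty of eye-height adaptation, and why per-pixel depth is needed at all, can be sketched by reprojecting a single depth sample after a vertical camera shift (a toy illustration in y-up coordinates with hypothetical values; the paper's learning-based pipeline is far richer):

```python
import math

def reproject_point(depth, azimuth, elevation, eye_height_delta):
    """Re-express a depth sample after raising the camera by eye_height_delta.

    Angles are in radians, world coordinates are y-up. This only shows
    the geometry; it ignores occlusion and disocclusion, which is what
    the inpainting stage of a full pipeline must handle.
    """
    # Spherical direction -> Cartesian point at the sampled depth.
    x = depth * math.cos(elevation) * math.cos(azimuth)
    y = depth * math.sin(elevation)
    z = depth * math.cos(elevation) * math.sin(azimuth)
    # Shift into the raised camera's frame and convert back.
    y -= eye_height_delta
    new_depth = math.sqrt(x * x + y * y + z * z)
    new_elevation = math.asin(y / new_depth)
    new_azimuth = math.atan2(z, x)
    return new_depth, new_azimuth, new_elevation

# Raising the viewpoint makes a point straight ahead appear lower and farther:
d, az, el = reproject_point(2.0, 0.0, 0.0, 0.5)
```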

    The influence of the viewpoint in a self-avatar on body part and self-localization

    The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization, as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. Therefore, we aimed to determine if manipulating the viewpoint to either the height of the eyes or to the height of the chest would influence self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as "directly at you" (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase where participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Participants overall pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head) with the following distribution: the upper face (25%), the upper torso (25%), above the head (15%), and below the torso (12%). We only found an influence of viewpoint (eye- vs chest-height) during the self-avatar adaptation phase for body part localization and not for self-localization.
The overall change in error distance for body part localization for the viewpoint at eye-height was small (M = –2.8 cm), while the overall change in error distance for the viewpoint at chest-height was significantly larger, and in the upwards direction relative to the body parts (M = 21.1 cm). In a post-questionnaire, there was no significant difference in embodiment scores between the viewpoint conditions. Most interestingly, having a self-avatar did not change the results on the self-localization pointing task, even with a novel viewpoint (chest-height). Possibly, body-based cues, or memory, ground the self when in VR. However, the present results caution against the use of altered viewpoints in applications where a veridical position sense of body parts is required.

    Visualization and (Mis)Perceptions in Virtual Reality

    Virtual Reality (VR) technologies are now being widely adopted for use in areas as diverse as surgical and military training, architectural design, driving and flight simulation, psychotherapy, and gaming/entertainment. A large range of visual displays (from desktop monitors and head-mounted displays (HMDs) to large projection systems) are currently being employed, and each display technology offers unique advantages as well as disadvantages. In addition to the technical considerations involved in choosing a VR interface, it is also critical to consider perceptual and psychophysical factors concerned with visual displays. It is now widely recognized that perceptual judgments of particular spatial properties are different in VR than in the real world. In this paper, we will provide a brief overview of what is currently known about the kinds of perceptual errors that can be observed in virtual environments (VEs). Subsequently, we will outline the advantages and disadvantages of particular visual displays by focusing on the perceptual and behavioral constraints that are relevant for each. Overall, the main objective of this paper is to highlight the importance of understanding perceptual issues when evaluating different types of visual simulation in VEs.