11 research outputs found

    Recalibration of Perceived Distance in Virtual Environments Occurs Rapidly and Transfers Asymmetrically Across Scale

    Distance in immersive virtual reality is commonly underperceived relative to intended distance, causing virtual environments to appear smaller than they actually are. However, a brief period of interaction by walking through the virtual environment with visual feedback can cause dramatic improvement in perceived distance. The goal of the current project was to determine how quickly improvement occurs as a result of walking interaction (Experiment 1) and whether improvement is specific to the distances experienced during interaction or transfers across scales of space (Experiment 2). The results show that five interaction trials produced a large improvement in perceived distance, and that subsequent walking interactions showed continued but diminished improvement. Furthermore, interaction with near objects (1-2 m) improved distance perception for near but not far (4-5 m) objects, whereas interaction with far objects broadly improved distance perception for both near and far objects. These results have practical implications for ameliorating distance underperception in immersive virtual reality, as well as theoretical implications for distinguishing between theories of how walking interaction influences perceived distance. This accepted article is published as Jonathan W. Kelly, William W. Hammel, Zachary D. Siegel, and Lori A. Sjolund, "Recalibration of Perceived Distance in Virtual Environments Occurs Rapidly and Transfers Asymmetrically Across Scale," IEEE Transactions on Visualization and Computer Graphics, 20(4), April 2014, 588-595. doi: 10.1109/TVCG.2014.36. Posted with permission.
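    As a minimal illustration of how such underperception and improvement are typically quantified (an assumed measure, not analysis code from the paper), distance judgments can be expressed as the ratio of judged to intended distance:

        def accuracy_ratio(judged_m: float, intended_m: float) -> float:
            """Judged/intended distance; values below 1.0 indicate underperception."""
            return judged_m / intended_m

        # Hypothetical values illustrating the reported pattern: marked
        # underperception before interaction, much-improved accuracy after
        # a handful of walking-with-feedback trials.
        pre = accuracy_ratio(judged_m=3.4, intended_m=5.0)    # 0.68
        post = accuracy_ratio(judged_m=4.6, intended_m=5.0)   # 0.92
        print(f"pre: {pre:.2f}, post: {post:.2f}")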

    Comparison of Two Methods for Improving Distance Perception in Virtual Reality

    Distance is commonly underperceived in virtual environments (VEs) compared to real environments. Past work suggests that displaying a replica VE based on the real surrounding environment leads to more accurate judgments of distance, but that work lacked the control conditions necessary to firmly draw this conclusion. Other research indicates that walking through a VE with visual feedback improves judgments of distance and size. This study evaluated and compared those two methods for improving perceived distance in VEs. All participants experienced a replica VE based on the real lab. In one condition, participants visually previewed the real lab prior to experiencing the replica VE, and in another condition they did not. Participants performed blind-walking judgments of distance as well as judgments of size in the replica VE before and after walking interaction. Distance judgments were more accurate in the preview condition than in the no-preview condition, but size judgments were unaffected by visual preview. Distance and size judgments both increased after walking interaction, and the improvement was larger for distance than for size judgments. After walking interaction, distance judgments did not differ based on visual preview, and walking interaction led to a larger improvement in judged distance than did visual preview. These data suggest that walking interaction may be more effective than visual preview as a method for improving perceived space in a VE.

    Perceived Space in the HTC Vive

    Underperception of egocentric distance in virtual reality has been a persistent concern for almost 20 years. Modern head-mounted displays (HMDs) appear to have begun to ameliorate underperception. The current study examined several aspects of perceived space in the HTC Vive. Blind-walking distance judgments, verbal distance judgments, and size judgments were measured in two distinct virtual environments (VEs), a high-quality replica of a real classroom and an empty grass field, as well as in the real classroom upon which the classroom VE was modeled. A brief walking interaction was also examined as an intervention for improving anticipated underperception in the VEs. Results from the Vive were compared to existing data from two older HMDs (nVisor SX111 and ST50). Blind-walking judgments were more accurate in the Vive than in the older displays, and did not differ substantially from the real world or across VEs. Size judgments were more accurate in the classroom VE than in the grass VE, and in the Vive compared to the older displays. Verbal judgments were significantly smaller in the classroom VE compared to the real classroom and did not significantly differ across VEs. Blind-walking and size judgments were more accurate after walking interaction, but verbal judgments were unaffected. The results indicate that underperception of distance in the HTC Vive is less than in older displays but has not been completely resolved. With the more accurate space perception afforded by modern HMDs, alternative methods for improving judgments of perceived space, such as walking interaction, may no longer be necessary.

    Improving distance perception in virtual reality

    Virtual reality (VR) is a useful tool for researchers and instructors alike. VR allows for the development of scenarios that would be too dangerous or too costly to create in the real world, such as distracting a driver in a virtual vehicle. Unfortunately, distances tend to be underperceived within VR, and consequently the validity of any training or research performed within a virtual environment could be called into question. In an effort to account for underperception, this project sought to establish an interaction task, neutral with respect to both environment and task, that could be applied at the beginning of any virtual training or research task to correct underperception. Experiment 1 found that improvements in distance perception from an interaction task could likely be transferred from one environment to another, but that there might be issues with removing distance cues from later environments. Experiment 2 found that the presence of walls drove the effect in Experiment 1. Results also indicated that interacting with an environment likely encourages participants to rely on the given distance cues, causing a decrement in performance when those cues are later removed. Experiment 3 gave evidence for the presence of both environment rescaling and behavioral recalibration as a result of interacting with a virtual environment. It also gave support for a more general rescaling that can improve performance at distances beyond those used for interaction.
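    The distinction drawn in Experiment 3 between environment rescaling and behavioral recalibration can be illustrated with a toy linear model. This sketch is an assumed formalization for illustration only, not a model from the thesis:

        def perceived_distance(actual_m: float, env_scale: float) -> float:
            # Environment rescaling account: the whole scene is perceived at
            # env_scale times its actual size (env_scale < 1.0 -> VE looks small).
            return actual_m * env_scale

        def walked_response(perceived_m: float, motor_gain: float) -> float:
            # Behavioral recalibration account: the walking response to a given
            # perceived distance is remapped by motor_gain.
            return perceived_m * motor_gain

        # Either mechanism can move a blind-walking judgment toward accuracy;
        # per the abstract, only rescaling generalizes to distances beyond
        # those used for interaction.
        print(round(walked_response(perceived_distance(5.0, env_scale=0.7),
                                    motor_gain=1.3), 2))  # 4.55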

    Just Around the Corner: The Impact of Instruction Method and Corner Geometry on Teleoperation of Virtual Unmanned Ground Vehicles

    Teleoperated robots have proven useful across various domains, as they can more readily search for survivors, survey collapsed and structurally unsound buildings, map out safe routes for rescue workers, and monitor rescue environments. A significant drawback of these robots is that they require the operator to perceive the environment indirectly. As such, camera angles, uneven terrain, lighting, and other environmental conditions can result in robots colliding with obstacles, getting stuck in rubble, and falling over (Casper & Murphy, 2003). To better understand how operators remotely perceive and navigate unmanned ground vehicles, the present work investigated operators' abilities to negotiate corners of varying widths. In Experiment 1, we evaluated how instruction method impacts cornering time and collisions, looking specifically at the speed-accuracy tradeoff for negotiating corners. Participants navigated a virtual vehicle around corners under instructions to focus on accuracy (i.e., avoiding collisions) or speed (i.e., negotiating the corners as quickly as possible). We found that as the task became more difficult, subjects' cornering times increased and their probability of successful cornering decreased. We also demonstrated that the Fitts' law speed-accuracy tradeoff can be extended to a cornering task. In Experiment 2, we challenged two of the assumptions of Pastel et al.'s (2007) cornering law and assessed how corner angle and differences in path widths impacted cornering time. Participants navigated a virtual vehicle around corners of varying angles (45°, 90°, and 135°) and varying path widths. We found that increases in corner angle resulted in increased cornering times and a decreased probability of successful cornering. The findings from these experiments are applicable to contexts where an individual is tasked with remotely navigating around corners (e.g., video gaming, urban search and rescue, surveillance, military operations, training).
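    Experiment 1 frames cornering with the Fitts' law speed-accuracy tradeoff. The sketch below uses the common Shannon formulation of Fitts' law; the abstract does not give the study's exact formulation or coefficients, so the intercept and slope here are hypothetical.

        import math

        def index_of_difficulty(distance: float, width: float) -> float:
            # Shannon formulation of Fitts' index of difficulty, in bits:
            # narrower paths (smaller width) make the maneuver harder.
            return math.log2(distance / width + 1)

        def predicted_time(distance: float, width: float,
                           a: float = 0.2, b: float = 0.3) -> float:
            # MT = a + b * ID, with a hypothetical intercept a and slope b
            # that would normally be fit to each operator and task.
            return a + b * index_of_difficulty(distance, width)

        # Narrowing the path raises the index of difficulty and the
        # predicted cornering time, mirroring the reported tradeoff.
        for width in (2.0, 1.0, 0.5):
            print(width, round(predicted_time(distance=4.0, width=width), 2))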

    Examining the Effects of Altered Avatars on Perception-Action in Virtual Reality

    In virtual reality, avatars are animated graphical representations of a person embedded in a virtual environment. Previous research has illustrated the benefits of having an avatar when perceiving aspects of virtual reality. We studied the effect that a non-faithful, or altered, avatar had on the perception of one's action capabilities in VR. In Experiment 1, one group of participants acted with a normal, or faithful, avatar and the other group used an avatar with an extended arm, all in virtual reality. In Experiment 2, the same methodology and procedure were used as in Experiment 1, except that only the calibration phase occurred in VR, while the remaining reaches were completed in the real world. All participants performed reaches to various distances. The results of these studies show that calibration to altered dimensions of avatars is possible after receiving feedback while acting with the altered avatar. Further, calibration occurred more quickly when feedback was initially used to transition from a normal avatar to an altered avatar than when later transitioning from the altered avatar arm back to the normal avatar arm without feedback. The implications of these findings for training in virtual reality simulations and transfer back to the real world are also discussed.
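    As a rough illustration of the altered-avatar manipulation (the actual arm extension used in the experiments is not reported in this abstract, so the scale factor below is hypothetical), lengthening the rendered arm scales the distance the avatar can apparently reach:

        def rendered_reach(physical_reach_m: float, arm_scale: float) -> float:
            # arm_scale = 1.0 renders a faithful arm; arm_scale > 1.0 renders
            # the extended arm used by the altered-avatar group.
            return physical_reach_m * arm_scale

        print(rendered_reach(0.70, 1.0))   # faithful avatar: 0.70 m
        print(rendered_reach(0.70, 1.25))  # extended arm: ~0.88 m appears reachable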

    Investigating Embodied Interaction in Near-Field Perception-Action Re-Calibration on Performance in Immersive Virtual Environments

    Immersive Virtual Environments (IVEs) are becoming more accessible and more widely utilized for training. Previous research has shown that the matching of visual and proprioceptive information is important for calibration. Many state-of-the-art Virtual Reality (VR) systems are created for training users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force a de-coupling of visual and proprioceptive information due to interference, latency, and tracking error. It has also been suggested that closed-loop feedback from travel and locomotion in an IVE can overcome the compression of visually perceived depth at medium-field distances in the virtual world [33, 47]. Very few experiments have examined the carryover effects of multi-sensory feedback in IVEs during manually dexterous 3D user interaction in overcoming distortions in near-field or interaction-space depth perception, or the relative importance of visual and proprioceptive information in calibrating users' distance judgments.

    In the first part of this work, we examined the recalibration of movements when the visually reached distance is scaled differently than the physically reached distance. We present an empirical evaluation of how visually distorted movements affect users' reaches to near-field targets in an IVE. In a between-subjects design, participants provided manual reaching distance estimates during three sessions: a baseline measure without feedback (open-loop distance estimation), a calibration session with visual and proprioceptive feedback (closed-loop distance estimation), and a post-interaction session without feedback (open-loop distance estimation). Participants were randomly assigned to one of three visual feedback conditions in the closed-loop session, during which they reached to the target while holding a tracked stylus: (i) the Minus condition (-20% gain), in which the visual stylus appeared at 80% of the distance of the physical stylus; (ii) the Neutral condition (0% gain), in which the visual stylus was co-located with the physical stylus; and (iii) the Plus condition (+20% gain), in which the visual stylus appeared at 120% of the distance of the physical stylus. In all conditions there was evidence of visuo-motor calibration, in that users' accuracy in physically reaching to the target locations improved over trials. Scaled visual feedback calibrated distance judgments within the IVE, with estimates being farthest in the post-interaction session after calibrating to visual information that appeared nearer (Minus condition) and nearest after calibrating to visual information that appeared farther (Plus condition). The same pattern was observed in the closed-loop physical reach responses: participants generally tended to reach farther in the Minus condition and closer in the Plus condition relative to the perceived location of the targets, compared to the Neutral condition, in which physical reaches were more accurate to the perceived location of the target.
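    Below is a minimal sketch of the gain manipulation described above; the function and variable names are illustrative, not taken from the study's software.

        def visual_stylus_depth(physical_depth_m: float, gain: float) -> float:
            # gain = -0.20 -> Minus condition (visual stylus at 80% of physical depth)
            # gain =  0.00 -> Neutral condition (visually co-located)
            # gain = +0.20 -> Plus condition (visual stylus at 120% of physical depth)
            return physical_depth_m * (1.0 + gain)

        physical = 0.50  # meters from the viewpoint
        for label, gain in (("Minus", -0.20), ("Neutral", 0.00), ("Plus", +0.20)):
            print(label, visual_stylus_depth(physical, gain))  # 0.40, 0.50, 0.60 m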
    We then characterized the properties of human reach motion in the presence or absence of visuo-haptic feedback in real and immersive virtual environments, within a participant's maximum arm reach. Our goal was to understand how physical reaching actions to the perceived location of targets, in the presence or absence of visuo-haptic feedback, differ between real and virtual viewing conditions. Typically, participants reach to the perceived location of objects in the 3D environment to perform selection and manipulation actions during 3D interaction, in applications such as virtual assembly or rehabilitation. In these tasks, participants typically have distorted perceptual information in the IVE compared to the real world, in part due to technological limitations such as limited visual field of view, resolution, latency, and jitter. In an empirical evaluation, we asked the following questions: (i) how do the perceptual differences between the virtual and real world affect our ability to accurately reach to the locations of 3D objects, and (ii) how do the motor responses of participants differ in the presence versus absence of visual and haptic feedback? We examined factors such as the velocity and distance of physical reaching behavior in the real world and the IVE, both in the presence and absence of visuo-haptic information. The results suggest that physical reach responses vary systematically between real and virtual environments, especially in situations involving the presence or absence of visuo-haptic feedback. Our study provides a methodological framework for the analysis of reaching motions for selection and manipulation with novel 3D interaction metaphors, and for characterizing visuo-haptic versus non-visuo-haptic physical reaches in virtual and real-world situations.

    While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimation had yet to be investigated. We therefore investigated the effect of the visual fidelity of the self-avatar on users' depth judgments, reach-boundary perception, and the properties of physical reach motion. Previous research has demonstrated that a self-avatar representation of the user enhances the sense of presence [37], and that even a static avatar can improve distance estimation at far distances [59, 48]. In this study, performance with a virtual avatar was also compared to real-world performance. Three levels of fidelity were tested: (1) an immersive self-avatar with realistic limbs, (2) a low-fidelity self-avatar showing only joint locations, and (3) an end-effector only. There were four primary hypotheses. First, we hypothesized that the mere presence of a self-avatar or end-effector would calibrate users' interaction-space depth perception in an IVE, so that participants' distance judgments would improve after the calibration phase regardless of the self-avatar's visual fidelity. Second, the magnitude of the change from pre-test to post-test would differ significantly based on the visual detail of the self-avatar presented to the participants (self-avatar vs. low-fidelity self-avatar and end-effector). Third, we predicted that distance estimation accuracy would be highest in the immersive self-avatar condition and lowest in the end-effector condition. Fourth, we predicted that the properties of physical reach responses would vary systematically between the visual fidelity conditions. The results suggest that reach estimations become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance, compared to the low-fidelity and end-effector conditions. In all conditions, reach estimations became more accurate after receiving feedback during the calibration phase.
    Lastly, we examined factors such as path length, time to complete the task, and the average velocity and acceleration of the physical reach motion, comparing all IVE conditions with the real world. The results suggest that physical reach responses vary systematically between the VR viewing conditions and the real world.

    The Effect of Anthropometric Properties of Self-Avatars on Action Capabilities in Virtual Reality

    The field of Virtual Reality (VR) has seen a steady, exponential uptake in the last decade and is continuously being incorporated into areas of popular interest like healthcare, training, recreation, and gaming. This upward trend and prolonged popularity have resulted in numerous extravagant virtual environments, some that aim to mimic real-life experiences like combat training, while others intend to provide unique experiences that may otherwise be difficult to recreate, like flying over ancient Egypt as a bird. These experiences often showcase highly realistic graphics, intuitive interactions, and unique avatar embodiment scenarios with the help of various tracking sensors, high-definition graphic displays, sound systems, etc. The literature suggests that estimates and affordance judgments in VR scenarios such as these are affected by the properties and nature of the avatar embodied by the user. Therefore, to provide users with the finest experiences, it is crucial to understand the interaction between the embodied self and the action capabilities it affords in the surrounding virtual environment. In a series of studies exploring the effect of gender-matched, body-scaled self-avatars on the user's perception, we investigate the effect of self-avatars on the perception of the size of objects in an immersive virtual environment (IVE) and how this perception affects the actions one can perform compared to the real world. In the process, we make use of newer tracking technology and graphic displays to investigate the perceived differences between real-world environments and their virtual counterparts, to understand how the spatial properties of the environment and the embodied self-avatar affect affordances by means of passability judgments. We describe techniques for creating VR environments and mapping them onto their real-world counterparts, and for creating gender-matched, body-scaled self-avatars that provide real-time full-body tracking. The first two studies investigate how newer graphical displays and off-the-shelf tracking devices can be utilized to create salient gender-matched, body-scaled self-avatars, and the effect of these avatars on judgments of passability as a result of the embodied body schema. The work involved creating scripts that automate the process of mapping virtual worlds onto their real-world counterparts within a 1 cm margin of error, and creating self-avatars that match the height, limb proportions, and shoulder width of the participant using tracking sensors. The experiment involved making judgments about the passability of an adjustable doorway in the real world and in a virtual to-scale replica of the real-world environment. The results demonstrated that the perception of affordances in IVEs is comparable to the real world, but that the behavior leading to it differs in VR. The body-scaled self-avatars also provided salient information, yielding performance similar to the real world. Several insights and guidelines related to creating veridical virtual environments and realistic self-avatars emerged from this effort. The third study investigates how the presence of body-scaled self-avatars affects the perceived size of virtual handheld objects, and how the person-plus-virtual-object system created by lifting such an object influences passability. This is crucial to understand, as VR simulations now often utilize self-avatars that carry objects while maneuvering through the environment.
    How users interact with these handheld objects can influence what they do in critical scenarios where split-second decisions can change the outcome, such as combat training, role-playing games, first-person shooters, thrill rides, physiotherapy, etc. It has also been reported that the avatar itself can influence the perceived size of virtual objects, in turn influencing action capabilities. There is ample research on interaction techniques for manipulating objects in a virtual world, but the question of how those objects affect our action capabilities upon interaction remains unanswered, especially when the haptic feedback associated with holding a real object is mismatched or missing. The study investigates this phenomenon by having participants interact with virtual objects of different sizes and make frontal and lateral passability judgments at an adjustable aperture, similar to the first experiment. The results suggest that the presence of self-avatars significantly affects affordance judgments. Interestingly, frontal and lateral judgments in IVEs appear to be similar, unlike in the real world. Investigating the concept of the embodied body schema and its influence on action capabilities further, the fourth study looks at how embodying self-avatars that vary slightly from one's real-world body affects performance and behavior in dynamic affordance scenarios. In this study, we change the eye height of the participants in the presence or absence of self-avatars that are either bigger than, smaller than, or the same size as the participant. We then investigate how this change in eye height and in the anthropometric properties of the self-avatar affects participants' judgments when crossing streets with oncoming traffic in virtual reality. We also evaluate any changes in perceived walking speed as a result of embodying altered self-avatars. The findings suggest that the presence of self-avatars results in safer crossing behavior; however, scaling the eye height or the avatar does not appear to affect perceived walking speed. A detailed discussion of all the findings can be found in the manuscript.
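    Passability judgments of this kind are commonly analyzed as the ratio of aperture width to shoulder width, with a critical ratio of roughly 1.3 reported for frontal passage in real-world work (Warren & Whang, 1987). The sketch below is an assumed analysis for illustration, not code from these studies.

        def aperture_ratio(aperture_m: float, shoulder_width_m: float) -> float:
            # Aperture width scaled by the actor's (or avatar's) shoulder width.
            return aperture_m / shoulder_width_m

        def judged_passable(aperture_m: float, shoulder_width_m: float,
                            critical_ratio: float = 1.3) -> bool:
            # Apertures at or above the critical ratio are judged passable
            # without rotating the shoulders.
            return aperture_ratio(aperture_m, shoulder_width_m) >= critical_ratio

        print(judged_passable(0.55, 0.45))  # ratio ~1.22 -> False
        print(judged_passable(0.65, 0.45))  # ratio ~1.44 -> True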