    Phenomenal regression to the real object in physical and virtual worlds

    © 2014, Springer-Verlag London. In this paper, we investigate a new approach to comparing physical and virtual size and depth percepts, one that captures participants' involuntary responses to different stimuli in their field of view rather than relying on their skill at judging size, reaching, or directed walking. We show, via an effect first observed in the 1930s, that participants asked to equate the perspective projections of disc objects at different distances make a systematic error that is individual in its extent and comparable between the particular physical and virtual settings we tested. Prior work has shown that this systematic error is difficult to correct, even when participants know it is likely to occur. In fact, in the real world the error diminishes only as the available depth cues are artificially reduced. This makes the effect we describe a potentially powerful, intrinsic measure of VE quality that may ultimately contribute to our understanding of VE depth compression phenomena.
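    The systematic error referred to here is usually quantified with the Thouless ratio, the standard index from the 1930s work on phenomenal regression to the real object. Below is a minimal Python sketch of that computation; the function name and example sizes are illustrative, not taken from the paper.

```python
import math

def thouless_ratio(matched_size: float, projective_size: float, real_size: float) -> float:
    """Thouless ratio for phenomenal regression to the real object.

    0.0 = pure projective (retinal-size) matching, 1.0 = full size constancy.
    All sizes must be positive and share the same units.
    """
    return (math.log(matched_size) - math.log(projective_size)) / \
           (math.log(real_size) - math.log(projective_size))

# Hypothetical trial: a far disc whose correct projective match is 5 cm,
# but the participant equates it at 7 cm, part-way toward the real 10 cm size.
print(thouless_ratio(matched_size=7.0, projective_size=5.0, real_size=10.0))  # ~0.49
```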

    The impact of background and context on car distance estimation

    It is well established that people underestimate the distance to objects depicted in virtual environments and two-dimensional (2D) displays, but the reasons for this underestimation are still not fully understood. Virtual environment displays are increasingly used for driver training and testing, so understanding the distortion of perceived space that these displays produce is vital: we need to know which aspects of the display cause observers to misperceive the distance to objects in the simulated environment. The research reported in this thesis investigated how people estimate the distance between themselves and a car in front of them across a number of differing environmental contexts. Four experiments were run using virtual environment displays of various kinds, and a fifth was run in a real-world setting. Distance underestimation when viewing 2D displays proved very common, even when familiar objects such as cars were used as targets. The experiments also verified that people underestimate distance more in a virtual environment than in a real-world setting. A surprising and somewhat counterintuitive result was that people underestimate distance more when the scene depicts forward motion of the observer than when the view is static. The research also identified visual features (e.g., texture information) and display properties (e.g., field of view) that affected the perception of distance, as well as others that had no effect. The findings should help designers of driver-training simulators and testing equipment to better understand the types of errors that can occur when humans view two-dimensional virtual environment displays.
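    Findings like these are typically reported as a compression ratio of judged to actual distance, where values below 1.0 indicate underestimation. The sketch below shows the arithmetic with made-up trial numbers; it is not code or data from the thesis.

```python
def compression_ratio(estimates, actual):
    """Mean ratio of judged to actual distance; < 1.0 indicates underestimation."""
    return sum(e / a for e, a in zip(estimates, actual)) / len(estimates)

# Hypothetical trials: judged distances to a car actually 20 m and 40 m away.
print(compression_ratio([15.0, 31.0], [20.0, 40.0]))  # 0.7625 -> ~24% underestimation
```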

    Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation

    Pre-captured immersive environments built with omnidirectional cameras support a wide range of virtual reality applications. Previous research has shown that manipulating eye height in egocentric virtual environments can significantly affect distance perception and immersion. However, the influence of eye height in pre-captured real environments has received less attention because the perspective is difficult to alter once capture is complete. To explore this influence, we first present a pilot study that captures real environments at multiple eye heights and asks participants to judge egocentric distances and immersion. If a significant influence is confirmed, an effective image-based approach to adapt pre-captured real-world environments to the user's eye height would be desirable. Motivated by the study, we propose a learning-based approach for synthesizing novel views of omnidirectional images with altered eye heights. This approach employs a multitask architecture that learns depth and semantic segmentation in two formats, generating high-quality depth and semantic segmentation maps that facilitate the inpainting stage. With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation. Quantitative and qualitative evaluation shows favorable results against state-of-the-art methods, and an extensive user study verifies improved perception and immersion for pre-captured real-world environments.
    Comment: 10 pages, 13 figures, 3 tables, submitted to ISMAR 202
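    The multitask design, shared features feeding separate depth and semantic-segmentation heads, can be sketched schematically in PyTorch as below. The class name, layer sizes, and class count are assumptions; the paper's actual omnidirectional (equirectangular) handling, losses, and two-format outputs are not reproduced here.

```python
import torch
import torch.nn as nn

class MultiTaskPanoNet(nn.Module):
    """Schematic multitask net: a shared encoder with depth and segmentation heads."""

    def __init__(self, n_classes: int = 13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 1)        # per-pixel depth
        self.seg_head = nn.Conv2d(64, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        feats = self.encoder(x)                      # shared features for both tasks
        return self.depth_head(feats), self.seg_head(feats)

# Toy equirectangular input; outputs are at 1/4 resolution in this sketch.
net = MultiTaskPanoNet()
depth, seg = net(torch.randn(1, 3, 256, 512))
```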

    Spatial cognition in virtual environments

    Since the last decades of the twentieth century, Virtual Reality (VR) has developed both as a research methodology and as a set of practical applications in the medical field (training for surgeons, rehabilitation tools). There is still no scientific consensus on whether results obtained with this technology in research on cognitive processes in a Virtual Environment (VE) generalize to human behavior and cognition in the real world, because differences found in basic perceptual processes (for example, depth perception) suggest that virtual scenarios support a substantially different visual representation of the environment. On the other hand, the literature contains many studies demonstrating the reliability of VEs in more than one field (training and rehabilitation, but also several research paradigms). The main aim of this thesis is to investigate whether, and in which cases, these two views can be integrated, shedding new light on the use of VR in research. Through experiments conducted at the "Virtual Development and Training Center" of the Fraunhofer Institute in Magdeburg, we addressed both low-level spatial processes (within a distance-estimation paradigm) and high-level spatial cognition (using a navigation and visuospatial planning task called "3D Maps"), while also tackling practical problems such as the use of stereoscopy in VEs and simulator sickness during navigation in immersive VEs. The results fill some gaps in the literature on spatial cognition in VR and suggest that VEs are quite reliable research tools, especially when the processes under investigation are of higher complexity: given a familiarization period and the possibility to interact with the environment, the human brain adapts well to the "new" reality offered by VR, and behavior proceeds as if the environment were real. What is still strongly lacking is a fully multisensory experience, an important requirement for getting the best out of this kind of visualization of an artificial world. At the low level, we confirm previous findings that the visual system perceives important spatial cues, such as depth and the relationships between objects, differently in VR, so virtual and real environments cannot be treated as equivalent. The conclusions discuss the idea that VR is a "different" reality, offering potentially unlimited possibilities of use, even overcoming some physical limits of the real world, and that our cognitive system can acquire this "new" reality simply by interacting with it.

    From surround to true 3-D

    To progress from surround sound to true 3-D requires updating the psychoacoustical theories that underlie current technologies. This paper shows how J. J. Gibson's ecological approach to perception can be applied to audio perception and used to derive 3-D audio technologies based on intelligent pattern recognition and active hypothesis testing. These technologies are suggested as methods for generating audio environments that are believable and can be explored.

    Audio, visual, and audio-visual egocentric distance perception in virtual environments.

    Previous studies have shown that in real environments distances are estimated accurately by vision, whereas in visual (V) virtual environments (VEs) distances are systematically underestimated. In audio (A) real and virtual environments, near distances (2 m) are underestimated. However, little is known about how combined A and V cues interact in egocentric distance perception in VEs. In this paper we present a study of A, V, and AV egocentric distance perception in VEs. AV rendering is provided by the SMART-I2 platform using tracked passive visual stereoscopy and acoustical wave field synthesis (WFS). Distances are estimated using triangulated blind walking under A, V, and AV conditions. Distance compressions similar to those found in previous studies are observed under each rendering condition. The audio and visual modalities appear to be of similar precision for distance estimation in virtual environments, which casts doubt on the commonly accepted visual capture theory of distance perception.
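    Triangulated blind walking recovers the indicated distance geometrically: the participant views the target, walks blindfolded along an oblique path, then turns to face the target, and the response is the intersection of the final facing ray with the original line of sight. A minimal Python sketch of that triangulation follows, with a made-up trial; it is not the study's analysis code.

```python
import math

def triangulated_distance(walk_end, turn_heading_deg):
    """Indicated egocentric distance from a triangulated blind walking trial.

    Assumes the target was initially viewed straight ahead along +x from the origin.
    walk_end: (x, y) position after the blind walk, in metres.
    turn_heading_deg: final facing direction, degrees CCW from +x (must not be
    parallel to the line of sight, or the rays never intersect).
    """
    px, py = walk_end
    hx = math.cos(math.radians(turn_heading_deg))
    hy = math.sin(math.radians(turn_heading_deg))
    t = -py / hy            # travel along the facing ray until y = 0
    return px + t * hx      # crossing point on the original line of sight

# Hypothetical trial: walk 2 m forward and 1 m to the side, then turn to face
# the target at about -26.57 deg; the rays intersect ~4 m from the origin.
print(triangulated_distance((2.0, 1.0), -26.565))  # ~4.0
```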

    Viewing medium affects arm motor performance in 3D virtual environments

    Background: 2D and 3D virtual reality platforms are used to design individualized training environments for post-stroke rehabilitation. Virtual environments (VEs) are viewed through media such as head-mounted displays (HMDs) and large-screen projection systems (SPS), which can influence the quality with which the environment is perceived. We examined whether arm pointing kinematics differed when subjects with and without stroke viewed a 3D VE through these two media.
    Methods: Two groups of subjects participated (healthy control, n = 10, aged 53.6 ± 17.2 yrs; stroke, n = 20, 66.2 ± 11.3 yrs). Arm motor impairment and spasticity were assessed in the stroke group, which was divided into mild (n = 10) and moderate-to-severe (n = 10) sub-groups based on Fugl-Meyer scores. Subjects pointed (8 times each) to 6 randomly presented targets located at two heights in the ipsilateral, middle, and contralateral arm workspaces. Movements were repeated in the same VE viewed using an HMD (Kaiser XL50) and the SPS. Movement kinematics were recorded using an Optotrak system (Certus, 6 markers, 100 Hz). Upper-limb motor performance (precision, velocity, trajectory straightness) and movement pattern (elbow and shoulder ranges, trunk displacement) outcomes were analyzed using repeated-measures ANOVAs.
    Results: For all groups, there were no differences between the two media in endpoint trajectory straightness, shoulder flexion and shoulder horizontal adduction ranges, or sagittal trunk displacement. All subjects, however, made larger errors in the vertical direction using the HMD compared to the SPS. Healthy subjects also made larger errors in the sagittal direction, moved more slowly overall, and used less elbow extension range for the lower central target with the HMD. The mild and moderate-to-severe sub-groups made larger RMS errors with the HMD. The only advantage of the HMD was that movements in the moderate-to-severe stroke sub-group were 22% faster than with the SPS.
    Conclusions: Despite the similarity in the majority of the movement kinematics, movements made using the HMD were slower and showed larger errors. The SPS may be a more comfortable and effective option for viewing VEs for upper-limb rehabilitation post-stroke. This has implications for the use of VR applications to enhance upper-limb recovery.
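    Two of the reported outcome measures, trajectory straightness and endpoint precision, are commonly computed as a path-length-to-chord ratio and an RMS endpoint error. The Python sketch below shows one plausible formulation with toy data; it is not the authors' analysis code.

```python
import numpy as np

def straightness_index(traj):
    """Ratio of traveled path length to straight-line (chord) distance.

    traj: (n_samples, 3) array of endpoint positions; 1.0 = perfectly straight.
    """
    traj = np.asarray(traj, dtype=float)
    path = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    chord = np.linalg.norm(traj[-1] - traj[0])
    return path / chord

def rms_error(endpoints, target):
    """Root-mean-square distance of movement endpoints from the target."""
    d = np.linalg.norm(np.asarray(endpoints, dtype=float) - np.asarray(target), axis=1)
    return np.sqrt(np.mean(d ** 2))

# Toy pointing trajectory (metres), e.g. sampled at 100 Hz as with an Optotrak.
traj = [[0.0, 0.00, 0], [0.1, 0.02, 0], [0.2, 0.03, 0], [0.3, 0.00, 0]]
print(straightness_index(traj))                               # ~1.02
print(rms_error([[0.3, 0.0, 0.0]], [0.31, 0.0, 0.0]))         # 0.01
```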

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering, and it quantitatively simulates human psychophysical data on visually guided steering, obstacle avoidance, and route selection.
    Funding: Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624).
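    The goal-as-attractor, obstacles-as-repellers dynamic can be illustrated with a behavioral-level sketch in the style of Fajen and Warren's steering model, the kind of psychophysical behavior such neural models reproduce. The Python below is illustrative only; the parameter values are typical published fits for that behavioral model, not taken from the neural model itself.

```python
import math

def heading_accel(phi, dphi, goal, obstacles,
                  b=3.25, kg=7.5, c1=0.40, c2=0.40,
                  ko=198.0, c3=6.5, c4=0.8):
    """Angular acceleration of heading: goal attracts, obstacles repel.

    phi, dphi: current heading (rad) and its rate of change (rad/s).
    goal: (psi_g, d_g) direction (rad) and distance (m) of the goal.
    obstacles: list of (psi_o, d_o) tuples for each obstacle.
    """
    psi_g, d_g = goal
    # Damped attraction toward the goal; attraction strengthens as the goal nears.
    acc = -b * dphi - kg * (phi - psi_g) * (math.exp(-c1 * d_g) + c2)
    # Repulsion from each obstacle, decaying with angular offset and distance.
    for psi_o, d_o in obstacles:
        acc += ko * (phi - psi_o) * math.exp(-c3 * abs(phi - psi_o)) * math.exp(-c4 * d_o)
    return acc

# One Euler step: goal dead ahead at 8 m, obstacle offset -0.2 rad at 3 m;
# the positive acceleration steers the heading away from the obstacle.
phi, dphi, dt = 0.0, 0.0, 0.01
acc = heading_accel(phi, dphi, goal=(0.0, 8.0), obstacles=[(-0.2, 3.0)])
dphi += acc * dt
phi += dphi * dt
```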