
    Controlled Interaction: Strategies For Using Virtual Reality To Study Perception

    Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but there are well-documented limitations to most virtual reality systems. In the present article, we suggest strategies for studying perception/action interactions that rely both on scale-invariant metrics (such as power function exponents) and on careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements with uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.
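The scale-invariance of power function exponents mentioned above can be illustrated with a small sketch (not from the paper; the data here are synthetic): if perceived magnitude follows Stevens' power law P = k·Sⁿ, uniformly rescaling the stimulus S changes only the constant k, so the fitted exponent n is comparable across displays whose absolute scale calibration differs.

```python
import numpy as np

def fit_exponent(stimulus, response):
    """Estimate the power-law exponent as the slope of a log-log regression."""
    slope, _intercept = np.polyfit(np.log(stimulus), np.log(response), 1)
    return slope

stimulus = np.linspace(1.0, 10.0, 50)
true_n = 0.67                         # a typical compressive exponent (illustrative)
response = 2.0 * stimulus ** true_n   # noise-free Stevens' power law P = k * S**n

n_original = fit_exponent(stimulus, response)
n_rescaled = fit_exponent(3.0 * stimulus, response)  # uniformly rescaled stimulus

# Both fits recover the same exponent; only the intercept (k) changes.
print(round(n_original, 3), round(n_rescaled, 3))    # → 0.67 0.67
```

This is why the exponent survives an uncertain simulation scale while absolute judgments do not.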

    Efficient Distance Accuracy Estimation Of Real-World Environments In Virtual Reality Head-Mounted Displays

    Virtual reality (VR) is a very promising technology with many compelling industrial applications. Although many recent advancements have made it possible to deploy and use VR technology in virtual environments, VR systems are still less mature when used to render real environments. Current VR system settings, developed for rendering virtual environments, fail to adequately address the challenges of capturing and displaying real-world content. Before these systems can be used in real-life settings, their performance needs to be investigated, more specifically depth perception and how distances to objects in the rendered scenes are estimated. Perceived depth is influenced by head-mounted displays (HMDs), which inevitably reduce the virtual content's depth perception. Distances are consistently underestimated in virtual environments (VEs) compared to the real world, and the reason behind this underestimation is still not understood. This thesis investigates another version of this kind of system that, to the best of the authors' knowledge, has not been explored by any previous research: whereas previous research used computer-generated scenes, this work examines distance estimation in real environments rendered to head-mounted displays, where distance estimation is among the most challenging issues still under investigation and not fully understood. This thesis introduces a dual-camera video feed system viewed through a virtual reality head-mounted display, with two models: a video-based model and a static photo-based model. The purpose is to explore, using a real-world scene rendering system, whether the misjudgment of distances in HMDs could be due to a lack of realism. Distance judgment performance in the real world and in these two evaluated VE models was compared using protocols already proven to accurately measure real-world distance estimation.
An improved model was then developed that enhances the field of view (FOV) of the displayed scenes to improve distance judgments when displaying real-world VR content to HMDs; it mitigates the limited FOV, which is among the first potential causes of distance underestimation, especially the mismatch between the camera's and the HMD's fields of view. The proposed model uses a set of two cameras to generate the video, instead of the hundreds of input cameras, or tens of cameras mounted on a circular rig, used in previous works from the literature. Results from the first implementation of this system found that underestimation was smaller when the model was rendered as a static photo than with the live video feed rendering. The video-based (real + HMD) model and the static photo-based (real + photo + HMD) model averaged 80.2% and 81.4% of the actual distance, respectively, compared to real-world estimations, which averaged 92.4%. The improved approach (real + HMD + FOV) was compared to these two models and showed an improvement of 11%, increasing estimation accuracy from 80% to 91% and reducing estimation error from 1.29% to 0.56%. These results present strong evidence of the need for novel distance estimation improvement methods for real-world VR content systems, and provide effective initial work towards this goal.
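The accuracy figures above are mean judged-to-actual distance ratios expressed as percentages. A minimal sketch of that metric (the function name and the sample judgments are hypothetical; the per-condition percentages in the abstract are the reported results, not raw data):

```python
def accuracy_pct(judged, actual):
    """Mean judged/actual ratio as a percentage (100% = perfect estimation)."""
    ratios = [j / a for j, a in zip(judged, actual)]
    return 100.0 * sum(ratios) / len(ratios)

# e.g. a participant judging targets placed at 4, 5 and 6 metres:
actual = [4.0, 5.0, 6.0]
judged = [3.2, 4.0, 4.9]   # the typical ~80% HMD underestimation pattern
print(round(accuracy_pct(judged, actual), 1))   # → 80.6
```

On this metric, moving from 80% to 91% of the actual distance closes most of the gap to the 92.4% real-world baseline.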

    Phenomenal regression to the real object in physical and virtual worlds

    © 2014, Springer-Verlag London. In this paper, we investigate a new approach to comparing physical and virtual size and depth percepts that captures the involuntary responses of participants to different stimuli in their field of view, rather than relying on their skill at judging size, reaching or directed walking. We show, via an effect first observed in the 1930s, that participants asked to equate the perspective projections of disc objects at different distances make a systematic error that is individual in its extent and comparable between the particular physical and virtual settings we have tested. Prior work has shown that this systematic error is difficult to correct, even when participants know it is likely to occur. In fact, in the real world, the error is only reduced as the available cues to depth are artificially reduced. This makes the effect we describe a potentially powerful, intrinsic measure of VE quality that may ultimately contribute to our understanding of VE depth compression phenomena.

    Owning an overweight or underweight body: distinguishing the physical, experienced and virtual body

    Our bodies are the most intimately familiar objects we encounter in our perceptual environment. Virtual reality provides a unique method to allow us to experience having a very different body from our own, thereby providing a valuable method to explore the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body. In order to gain a better understanding of the mechanisms underlying body ownership, we use an embodiment questionnaire and introduce two new behavioral response measures: an affordance estimation task (an indirect measure of body size) and a body size estimation task (a direct measure of body size). Interestingly, after viewing the virtual body from a first-person perspective, both the affordance and the body size estimation tasks indicate a change in the perception of the size of the participant's experienced body. The change is biased by the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced and virtual bodies by asking participants to provide affordance and body size estimations for each of the three bodies separately. This methodological point is important for virtual reality experiments investigating body ownership of a virtual body, because it offers a better understanding of which cues (e.g. visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues can vary between different setups.

    Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation

    Pre-captured immersive environments using omnidirectional cameras support a wide range of virtual reality applications. Previous research has shown that manipulating the eye height in egocentric virtual environments can significantly affect distance perception and immersion. However, the influence of eye height in pre-captured real environments has received less attention due to the difficulty of altering the perspective after the capture process is finished. To explore this influence, we first conduct a pilot study that captures real environments at multiple eye heights and asks participants to judge egocentric distances and immersion. If a significant influence is confirmed, an effective image-based approach to adapt pre-captured real-world environments to the user's eye height would be desirable. Motivated by the study, we propose a learning-based approach for synthesizing novel views for omnidirectional images with altered eye heights. This approach employs a multitask architecture that learns depth and semantic segmentation in two formats, and generates high-quality depth and semantic segmentation to facilitate the inpainting stage. With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation. Quantitative and qualitative evaluation shows favorable results against state-of-the-art methods, and an extensive user study verifies improved perception and immersion for pre-captured real-world environments.
    Comment: 10 pages, 13 figures, 3 tables, submitted to ISMAR 202
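The geometry underlying eye-height adaptation can be sketched as follows. This is a hypothetical illustration (the function and parameters are not from the paper, whose contribution is the learning-based synthesis and inpainting pipeline): given per-pixel depth for an omnidirectional image, each pixel's 3D point can be re-projected from a camera shifted vertically, which is what exposes the disocclusions the inpainting stage must fill.

```python
import numpy as np

def reproject_elevation(depth, elevation, delta_h):
    """New elevation angle of a scene point after raising the eye by delta_h metres.

    depth     : distance from the original camera to the point (m)
    elevation : angle above the horizon in the original view (radians)
    delta_h   : vertical eye-height change (positive = camera raised)
    """
    y = depth * np.sin(elevation) - delta_h   # vertical offset from the new eye
    r = depth * np.cos(elevation)             # horizontal distance (unchanged)
    return np.arctan2(y, r)

# A point on the horizon drops below it once the camera is raised by 0.5 m:
theta_new = reproject_elevation(5.0, 0.0, 0.5)
```

Points close to the camera shift far more than distant ones, so a naive per-pixel warp leaves holes behind near geometry; this is the motivation for the layered depth image and inpainting described above.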

    Peripheral visual cues and their effect on the perception of egocentric depth in virtual and augmented environments

    The underestimation of depth in virtual environments at medium-field distances is a well-studied phenomenon. However, the degree by which underestimation occurs varies widely from one study to the next, with some studies reporting as much as 68% underestimation in distance and others as little as 6% (Thompson et al. [38] and Jones et al. [14]). In particular, the study detailed in Jones et al. [14] found a surprisingly small underestimation effect in a virtual environment (VE) and no effect in an augmented environment (AE). These are highly unusual results when compared to the large body of existing work on virtual and augmented distance judgments [16, 31, 36–38, 40–43]. The series of experiments described in this document attempted to determine the cause of these unusual results. Specifically, Experiment I aimed to determine whether the experimental design was a factor and whether participants were improving their performance over the course of the experiment. Experiment II analyzed two possible sources of implicit feedback in the experimental procedures and identified visual information available in the lower periphery as a key source of feedback. Experiment III analyzed distance estimation when all peripheral visual information was eliminated. Experiment IV then illustrated that optical flow in a participant's periphery is a key factor in facilitating improved depth judgments in both virtual and augmented environments. Experiment V attempted to further reduce cues in the periphery by removing a strongly contrasting white surveyor's tape from the center of the hallway, and found that participants continued to adapt significantly even when given very sparse peripheral cues. The final experiment, Experiment VI, found that when participants' views were restricted to the field of view of the screen area on the return walk, adaptation still occurred in both virtual and augmented environments.

    Reconstructing passively travelled manoeuvres: Visuo-vestibular interactions.

    We recently published a study of the reconstruction of passively travelled trajectories from optic flow. Perception was prone to illusions in a number of conditions, and not always veridical in the other conditions. Part of the illusory reconstructed trajectories could be explained if we assume that the subjects based their reconstruction on the ego-motion percept obtained during the stimulus' initial moments. In the current paper, we test this hypothesis using a novel paradigm. If indeed the final reconstruction is governed by the initial percept, then additional, extra-retinal information that modifies the initial percept should predictably alter the final reconstruction. We supplied extra-retinal stimuli tuned to supplement the information that was underrepresented or ambiguous in the optic flow: the subjects were physically displaced or rotated at the onset of the visual stimulus. A highly asymmetric velocity profile (high acceleration, very low deceleration) was used. Subjects were required to guide an input device (in the form of a model vehicle; we measured position and orientation) along the perceived trajectory. We show for the first time that a vestibular stimulus of short duration can influence the perception of a much longer lasting visual stimulus. Perception of the ego-motion translation component in the visual stimulus was improved by a linear physical displacement; perception of the ego-motion rotation component, by a physical rotation. This led to a more veridical reconstruction in some conditions, but it could also lead to less veridical reconstructions in other conditions.

    Visuo-vestibular interaction in the reconstruction of travelled trajectories

    We recently published a study of the reconstruction of passively travelled trajectories from optic flow. Perception was prone to illusions in a number of conditions, and not always veridical in the others. Part of the illusory reconstructed trajectories could be explained by assuming that subjects base their reconstruction on the ego-motion percept built during the stimulus' initial moments. In the current paper, we test this hypothesis using a novel paradigm: if the final reconstruction is governed by the initial percept, providing additional, extra-retinal information that modifies the initial percept should predictably alter the final reconstruction. The extra-retinal stimulus was tuned to supplement the information that was under-represented or ambiguous in the optic flow: the subjects were physically displaced or rotated at the onset of the visual stimulus. A highly asymmetric velocity profile (high acceleration, very low deceleration) was used. Subjects were required to guide an input device (in the form of a model vehicle; we measured position and orientation) along the perceived trajectory. We show for the first time that a vestibular stimulus of short duration can influence the perception of a much longer lasting visual stimulus. Perception of the ego-motion translation component in the visual stimulus was improved by a linear physical displacement; perception of the ego-motion rotation component, by a physical rotation. This led to a more veridical reconstruction in some conditions, but to a less veridical reconstruction in other conditions.
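The highly asymmetric velocity profile described above can be sketched as a simple piecewise-linear speed curve (all parameters here are hypothetical, chosen only to illustrate the shape, not taken from the stimuli used in the study):

```python
import numpy as np

def asymmetric_profile(peak_v, t_accel, t_decel, dt=0.01):
    """Piecewise-linear speed profile: brief, strong acceleration followed by
    a long, very gentle deceleration back to rest (sampled every dt seconds)."""
    up = np.linspace(0.0, peak_v, int(t_accel / dt), endpoint=False)
    down = np.linspace(peak_v, 0.0, int(t_decel / dt))
    return np.concatenate([up, down])

# 0.5 s ramp up to 1 m/s, then 10 s ramp down: mean acceleration 2 m/s^2
# versus mean deceleration 0.1 m/s^2, a 20:1 asymmetry.
v = asymmetric_profile(peak_v=1.0, t_accel=0.5, t_decel=10.0)
```

With such a profile, most of the informative ego-motion signal is concentrated in the stimulus' first moments, which is precisely when the brief vestibular stimulus was delivered.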