
    Efficient Distance Accuracy Estimation Of Real-World Environments In Virtual Reality Head-Mounted Displays

    Virtual reality (VR) is a promising technology with many compelling industrial applications. Although many recent advancements make it practical to deploy VR technology for computer-generated virtual environments, these systems remain far less mature for rendering real environments. Current VR system configurations, developed for rendering virtual environments, do not adequately address the challenges of capturing and displaying real-world virtual reality. Before these systems can be used in real-life settings, their performance needs to be investigated; more specifically, depth perception and how distances to objects in the rendered scenes are estimated. Perceived depth is influenced by head-mounted displays (HMDs), which inevitably reduce the virtual content’s perceived depth. Distances are consistently underestimated in virtual environments (VEs) compared to the real world, and the reason behind this underestimation is still not understood. This thesis investigates a version of this kind of system that, to the best of the author’s knowledge, has not been explored by previous research: whereas previous research used computer-generated scenes, this work examines distance estimation in real environments rendered to head-mounted displays, where distance estimation remains among the most challenging and least understood issues. The thesis introduces a dual-camera video feed rendered through a virtual reality head-mounted display, with two models, a video-based model and a static photo-based model, whose purpose is to explore whether the misjudgment of distances in HMDs could be due to a lack of realism, using a real-world scene rendering system. Distance judgment performance in the real world and in these two VE models was compared using protocols already proven to measure real-world distance estimation accurately. An improved model was then developed that enhances the field of view (FOV) of the displayed scenes to improve distance judgments when displaying real-world VR content on HMDs; it mitigates the limited FOV, one of the leading candidate causes of distance underestimation, and in particular the mismatch between the camera FOV and the HMD FOV. The proposed model uses a set of two cameras to generate the video, instead of the hundreds of input cameras, or the tens of cameras mounted on a circular rig, used in previous work in the literature. Results from the first implementation of this system found that underestimation was smaller when the model was rendered as static photos than with the live video feed: distance judgments with the video-based (real + HMD) model and the static photo-based (real + photo + HMD) model averaged 80.2% and 81.4% of the actual distance, respectively, compared to real-world estimations that averaged 92.4%. The improved approach (real + HMD + FOV) was compared to these two models and showed an improvement of 11 percentage points, increasing estimation accuracy from 80% to 91% and reducing the estimation error from 1.29% to 0.56%. These results present strong evidence of the need for novel methods to improve distance estimation in real-world VR content systems, and the thesis provides effective initial work towards this goal.
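
    As a worked illustration of the accuracy figures reported above (80.2%, 81.4%, and 92.4% of actual distance), the following sketch shows how per-condition estimation accuracy of this kind is computed from pairs of judged and actual distances. The function and the sample data are illustrative assumptions, not taken from the thesis.

        # Minimal sketch: distance-estimation accuracy as the mean ratio of
        # judged to actual distance (illustrative data, not from the thesis).

        def mean_accuracy(judged, actual):
            """Mean judged/actual ratio, as a percentage of the true distance."""
            ratios = [j / a for j, a in zip(judged, actual)]
            return 100.0 * sum(ratios) / len(ratios)

        actual = [2.0, 3.0, 4.0, 5.0]         # target distances in metres
        judged_hmd = [1.6, 2.4, 3.2, 4.1]     # hypothetical video-feed judgments
        judged_real = [1.9, 2.8, 3.7, 4.6]    # hypothetical real-world judgments

        print(f"HMD accuracy:  {mean_accuracy(judged_hmd, actual):.1f}%")
        print(f"Real accuracy: {mean_accuracy(judged_real, actual):.1f}%")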

    CAVE Size Matters: Effects of Screen Distance and Parallax on Distance Estimation in Large Immersive Display Setups

    When walking within a CAVE-like system, accommodation distance, parallax, and angular resolution vary according to the distance between the user and the projection walls, which can alter spatial perception. As these systems get bigger, there is a need to assess the main factors influencing spatial perception in order to better design immersive projection systems and virtual reality applications. Such analysis is key for application domains that require the user to explore virtual environments by moving through the physical interaction space. In this article we present two experiments that analyze distance perception with the distance to the projection screens and parallax as the main factors. Both experiments were conducted in a large immersive projection system with an interaction space of up to ten meters. The first experiment showed that both screen distance and parallax have a strong asymmetric effect on distance judgments: we observed increased underestimation under positive-parallax conditions and slight overestimation under negative- and zero-parallax conditions. The second experiment further analyzed the factors contributing to these effects and confirmed the findings of the first experiment with a high-resolution projection setup providing twice the angular resolution and improved accommodative stimuli. In conclusion, our results suggest that available space is the most important characteristic for distance perception, optimally requiring about six to seven meters of distance around the user, and that virtual objects with high demands on accurate spatial perception should be displayed at zero or negative parallax.
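
    The parallax conditions above follow from standard stereoscopic projection geometry rather than anything specific to the paper: a point behind the projection wall produces positive on-screen parallax, a point on the wall produces zero, and a point in front produces negative parallax. A minimal sketch of that relation, with an assumed interocular distance:

        # Standard stereo geometry (not code from the paper): on-screen
        # parallax of a point at distance z for a screen at distance d_screen.

        def screen_parallax(z, d_screen, e=0.063):
            """Parallax in metres for interocular distance e; positive behind
            the screen, zero on the screen plane, negative in front of it."""
            return e * (z - d_screen) / z

        d_screen = 3.0                        # projection wall three metres away
        for z in (1.5, 3.0, 6.0):
            p = screen_parallax(z, d_screen)
            print(f"point at {z:.1f} m -> parallax {p * 1000:+.1f} mm")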

    A Hybrid Projection to Widen the Vertical Field of View with Large Screens to Improve the Perception of Personal Space in Architectural Project Review

    In this paper, we propose using a hybrid projection to increase the vertical geometric field of view without incurring large deformations, in order to preserve distance perception and allow the user to see the surrounding ground. We conducted an experiment in furnished and unfurnished houses to evaluate distance perception and spatial comprehension. Results show that the hybrid projection improves the perception of the surrounding ground, which leads to an improvement in spatial comprehension. Moreover, it preserves the perception of distances and sizes, providing performance similar to that of a standard perspective projection in the distance estimation task.
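
    The abstract does not give the construction of the hybrid projection, so the sketch below is only one illustrative way such a vertical mapping could be built: standard perspective (y = f·tan θ) near the line of sight, transitioning to a slower, tangent-continuous linear mapping beyond a threshold angle so that a wider vertical angular range fits on the same screen height. This is an assumption for illustration, not the paper's method.

        import math

        # Illustrative hybrid vertical mapping (an assumption, not the paper's
        # construction): perspective up to theta0, then a tangent-continuous
        # linear extension that compresses the periphery.

        def hybrid_y(theta, f=1.0, theta0=math.radians(40)):
            if abs(theta) <= theta0:
                return f * math.tan(theta)
            sign = 1.0 if theta > 0 else -1.0
            slope = f / math.cos(theta0) ** 2   # matches d/dtheta of f*tan at theta0
            return sign * (f * math.tan(theta0) + slope * (abs(theta) - theta0))

        for deg in (20, 40, 60, 75):
            t = math.radians(deg)
            print(f"{deg:2d} deg: perspective {math.tan(t):6.2f}, hybrid {hybrid_y(t):6.2f}")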

    Near-Field Depth Perception in Optical See-Through Augmented Reality

    Augmented reality (AR) is a promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, only four previous studies have specifically examined distance perception in AR within reaching distances, so it remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. It found that real objects are perceived more accurately than virtual objects and that matching is a relatively more accurate distance measure than reaching. Experiment II compared the two protocols in real-world and augmented reality environments with improved proprioceptive and visual feedback, and found that reaching responses in the AR environment became more accurate with the improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments, finding nearly accurate distance responses in the consistent and midpoint conditions and a linear increase in error in the collimated condition. Experiment IV studied the effect of the target object's brightness on depth judgments, finding that distance responses were shifted towards the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments help us understand how depth perception operates in augmented reality.
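
    The three accommodative-demand conditions in Experiment III are naturally expressed in diopters, the reciprocal of focal distance in metres: a collimated display demands 0 D regardless of target distance, a consistent display matches the target's own demand, and a midpoint display sits between the demands of the nearest and farthest targets. The sketch below illustrates this standard conversion; the distances used are hypothetical, not the study's stimuli.

        # Accommodative demand in diopters: D = 1 / distance_in_metres.
        # The distances below are hypothetical, not the study's stimuli.

        def diopters(distance_m):
            return 1.0 / distance_m

        near, far = 0.34, 0.50                 # hypothetical reaching-range extremes (m)
        target = 0.40                          # hypothetical target distance (m)

        collimated = 0.0                                   # optical infinity
        consistent = diopters(target)                      # matches the target
        midpoint = (diopters(near) + diopters(far)) / 2.0  # midpoint in dioptric terms

        print(f"collimated: {collimated:.2f} D")
        print(f"consistent: {consistent:.2f} D")
        print(f"midpoint:   {midpoint:.2f} D")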

    Phenomenal regression as a potential metric of veridical perception in virtual environments

    It is known that limitations of the visual presentation and sense of presence in a virtual environment (VE) can result in deficits of spatial perception such as the documented depth compression phenomenon. Investigating size and distance percepts in a VE is an active area of research, where different groups have measured the deficit by employing skill-based tasks such as walking, throwing, or simply judging sizes and distances. A psychological trait called phenomenal regression (PR), first identified in the 1930s by Thouless, offers a measure that does not rely on either judgement or skill. PR describes a systematic error made by subjects when asked to match the perspective projections of two stimuli displayed at different distances. Thouless’ work found that this error is not mediated by a subject’s prior knowledge of its existence, nor can it be consciously manipulated, since it measures an individual’s innate reaction to visual stimuli. Furthermore, he demonstrated that, in the real world, PR is affected by the depth cues available for viewing a scene. When applied in a VE, PR therefore potentially offers a direct measure of perceptual veracity that is independent of participants’ skill in judging size or distance. Experimental work found a statistically significant correlation between individuals’ measured PR values (their ‘Thouless ratio’, or TR) for virtual and physical stimuli. A further experiment manipulated focal depth to mitigate the mismatch that occurs between accommodation and vergence cues in a VE; the resulting statistically significant effect on TR demonstrates that TR is sensitive to changes in viewing conditions in a VE. Both experiments demonstrate key properties of PR that contribute to establishing it as a robust indicator of VE quality: first, TR exhibits temporal stability during the period of testing; second, it differs between individuals. The latter is advantageous as it yields empirical values that can be investigated using regression analysis. This work contributes to VE domains in which it is desirable to replicate an accurate perception of space, such as training and telepresence, where PR would be a useful tool for comparing subjective experience between a VE and the real world, or between different VEs.
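
    Thouless's regression index is commonly written as TR = (log P − log S) / (log R − log S), where S is the projectively correct (perspective) match, R the objectively real value, and P the participant's actual match, so that TR = 0 indicates pure perspective matching and TR = 1 complete constancy. The sketch below encodes this commonly cited form; the thesis may parameterize it differently.

        import math

        # Thouless ratio in its commonly cited log form (the thesis may use a
        # different parameterization): 0 = pure perspective match, 1 = full
        # regression to the real object (complete constancy).

        def thouless_ratio(matched, perspective, real):
            """All three arguments in the same units: matched = participant's
            setting, perspective = projectively correct value, real = objective value."""
            return (math.log(matched) - math.log(perspective)) / (
                math.log(real) - math.log(perspective))

        # Example: the far stimulus projects at 60% of the near one, the
        # participant matches at 80% -> partial regression to the real object.
        print(f"TR = {thouless_ratio(0.80, 0.60, 1.00):.2f}")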

    The development of a hybrid virtual reality/video view-morphing display system for teleoperation and teleconferencing

    Thesis (S.M.), by William E. Hutchison; Massachusetts Institute of Technology, System Design & Management Program, 2000. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 84-89). The goal of this study is to extend the desktop panoramic static-image viewer concept (e.g., Apple QuickTime VR; IPIX) to support immersive real-time viewing, so that an observer wearing a head-mounted display can make free head movements while viewing dynamic scenes rendered in real-time stereo using video data obtained from a set of fixed cameras. Computational experiments by Seitz and others have demonstrated the feasibility of morphing image pairs to render stereo scenes from novel, virtual viewpoints. The user can interact both with morphed real-world video images and with supplementary artificial virtual objects (“Augmented Reality”). The inherent congruence of the real and artificial coordinate frames of this system reduces the registration errors commonly found in Augmented Reality applications. In addition, the user’s eyepoint is computed locally, so any scene lag resulting from head movement will be less than that of alternative technologies using remotely controlled ground cameras; for space applications, this can significantly reduce the apparent lag due to satellite communication delay. This hybrid VR/view-morphing display (“Virtual Video”) has many important NASA applications, including remote teleoperation, crew onboard training, private family and medical teleconferencing, and telemedicine. The technical objective of this study was to develop a proof-of-concept system, on a 3D graphics PC workstation, of one of the component technologies of Virtual Video: Immersive Omnidirectional Video. The management goal was to identify a system process for planning, managing, and tracking the integration, test, and validation of this phased, three-year, multi-university research and development program.
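
    View morphing, as demonstrated by Seitz and colleagues, renders an in-between view by linearly interpolating the positions of corresponding points in two rectified (parallel-camera) views; a prewarp rectifies the input images and a postwarp restores the desired orientation. The fragment below sketches only the central interpolation step on a few hand-picked correspondences; the prewarp, postwarp, and dense correspondence that the full technique needs are omitted.

        import numpy as np

        # Core step of view morphing: linear interpolation of corresponding
        # points between two rectified views (prewarp/postwarp omitted).

        p0 = np.array([[120.0, 80.0], [200.0, 95.0], [160.0, 150.0]])  # view 0 (px)
        p1 = np.array([[100.0, 80.0], [185.0, 95.0], [150.0, 150.0]])  # view 1 (px)

        def morph_points(p0, p1, s):
            """Point positions seen from a virtual camera a fraction s of the
            way from view 0 to view 1 (geometrically valid for rectified views)."""
            return (1.0 - s) * p0 + s * p1

        print(morph_points(p0, p1, 0.5))   # virtual viewpoint midway between cameras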

    Spatial and human factors affecting image quality and viewer experience of stereoscopic 3D in television and cinema

    PhD Thesis. The horizontal offset between the two eyes’ locations in the skull means that they receive slightly different images of the world. The visual cortex uses these disparities to calculate where in depth different objects are, both absolutely (physical distance from the viewer, perceived very imprecisely) and relatively (whether one object is in front of another, perceived with great precision). For well over a century, stereoscopic 3D (S3D) technology has existed that can generate an artificial sense of depth by displaying images with slight disparities to the two retinas. S3D technology is now considerably cheaper to access in the home, but it remains a niche market, partly reflecting problems with viewer experience and enjoyment of S3D. This thesis considers some of the factors that could affect viewer experience of S3D content. While S3D technology can give a vivid depth percept, it can also lead to distortions in perceived size and shape, particularly if content is viewed at the wrong distance or angle. Almost all S3D content is designed for a viewing angle perpendicular to the screen and a recommended viewing distance, but little is known about the viewing distance typically used for S3D or about the effect of viewing angle. Accordingly, Chapter 2 of this thesis reports a survey of members of the British public. Chapters 3 and 4 report two experiments, one designed to assess the effect of oblique viewing and another to consider the interaction between S3D and perceived size. S3D content is expensive to generate, so producers sometimes “fake” 3D by shifting 2D content behind the screen plane. Chapter 5 investigates viewer experience with this fake 3D and finds that it is not a viable substitute for genuine S3D; it also examines whether viewers fixate on different image features when video content is viewed in S3D rather than in 2D. This work was part-funded by BSkyB and EPSRC as a CASE PhD studentship supporting PH.
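
    The geometry behind both the genuine depth percept and the “fake 3D” shift described above is standard: for eyes a distance e apart viewing a screen at distance d, a point drawn with screen parallax p (positive meaning uncrossed) is perceived at depth Z = e·d / (e − p). Shifting an entire 2D image by a constant p therefore places all of it on a single plane behind the screen, which is why fake 3D provides no relative depth within the scene. A minimal sketch of this textbook relation (not code from the thesis):

        # Perceived depth from screen parallax (textbook stereo geometry).
        # e: interocular distance, d: viewing distance, p: on-screen parallax,
        # all in metres; positive (uncrossed) parallax lies behind the screen.

        def perceived_depth(p, d=2.0, e=0.063):
            if p >= e:
                return float("inf")        # rays diverge: parallax >= interocular
            return e * d / (e - p)

        for p_mm in (-30, 0, 30):          # crossed, zero, and uncrossed parallax
            z = perceived_depth(p_mm / 1000.0)
            print(f"parallax {p_mm:+d} mm -> perceived at {z:.2f} m")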

    Augmented reality and scene examination

    The research presented in this thesis explores the impact of Augmented Reality on human performance, and compares this technology with Virtual Reality and with a head-mounted video feed across a variety of tasks related to scene examination. The motivation for the work was the question of whether Augmented Reality could provide a vehicle for training in crime scene investigation. The Augmented Reality application was developed using fiducial markers in the Windows Presentation Foundation, running on a wearable computer platform; the Virtual Reality condition was developed using the Crytek game engine to present a photo-realistic 3D environment; and the video feed was provided through a head-mounted webcam. All media were presented through head-mounted displays of similar resolution, providing the sole source of visual information to participants in the experiments. The experiments were designed to progressively increase the amount of mobility required to conduct the search task, from rotation in the horizontal or vertical plane through to movement around a room. In each experiment, participants were required to find objects and subsequently recall their locations. It is concluded that human performance is affected not merely by the medium through which the world is perceived but also by the constraints governing how movement in the world is controlled.
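
    The thesis built its AR condition on fiducial markers in Windows Presentation Foundation; purely as an illustrative modern equivalent (not the system described), the sketch below detects ArUco fiducial markers in a single webcam frame using the legacy cv2.aruco API from opencv-contrib-python.

        import cv2

        # Illustrative fiducial-marker detection with OpenCV's legacy ArUco
        # API (opencv-contrib-python); NOT the WPF-based system in the thesis.

        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        cap = cv2.VideoCapture(0)              # e.g., a head-mounted webcam
        ok, frame = cap.read()
        if ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
            if ids is not None:
                print(f"found markers: {ids.ravel().tolist()}")
        cap.release()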