
    The Impact of Object-Based Grouping on Perceived Depth Magnitude

    The amount of depth perceived between two vertical lines is markedly reduced when those lines are connected. Previous work has shown this effect to be related to the perceptual grouping of elements to form an object. The experiments reported here evaluate the generalizability of this phenomenon, to better understand its role in the perception of depth from disparity in natural stimuli. I found that depth estimates were not affected by configuration over a range of suprathreshold disparities in the presence of additional, reliable cues to depth. Taken together, these results show that the previously reported reduction in perceived depth from perceptual grouping is restricted to specific viewing conditions and stimuli. Moreover, the effect is modulated by several factors, including the presence or absence of orientation disparity and the availability and consistency of other depth cues.
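The depth interval signalled by a given binocular disparity depends on viewing geometry the abstract does not spell out. As a hedged illustration (the small-angle formula and the 6.3 cm interocular distance are textbook assumptions, not values from this study), a relative disparity δ at viewing distance D corresponds to a depth interval of roughly δ·D²/I:

```python
import math

def depth_from_disparity(disparity_rad, viewing_distance_m, iod_m=0.063):
    """Small-angle approximation: depth ~ disparity * D**2 / I,
    with D the viewing distance and I the interocular distance.
    The 6.3 cm default IOD is an illustrative assumption."""
    return disparity_rad * viewing_distance_m ** 2 / iod_m

# 10 arcmin of relative disparity viewed at 1 m
depth = depth_from_disparity(math.radians(10 / 60), 1.0)
print(round(depth, 4))  # about 0.046 m
```

The quadratic dependence on viewing distance is why the same disparity specifies very different suprathreshold depths at different distances, which is one reason viewing conditions matter for effects like the one reported above.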

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology development, especially since the release of powerful high-resolution, wide field-of-view VR headsets. While the great potential of such VR systems is widely accepted, open issues remain concerning how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that should benefit better VR headset designs for improved remote observation systems. To achieve this goal, the thesis presents a thorough, systematic review of the existing literature and previous research, carried out to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the role of familiarity with the observed place, the characteristics of the environment shown to the viewer, and the display used for remote observation of the virtual environment are investigated further. To gain more insight, two usability studies are proposed with the aim of defining guidelines and best practices.
The main outcomes of the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are ideal for reducing visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception in VR improves when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation presents a focused evaluation of these outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes offers a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed, comparing static HDR and eye-adapted HDR observation in VR, to assess whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, indicating that eye-adapted HDR and eye tracking should be used together to achieve the best visual performance for remote observation in modern VR systems.
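The abstract does not specify how the eye-adapted HDR rendering works internally. A minimal sketch of the general idea, under the assumption of a Reinhard-style global operator whose adaptation level is taken from a window around the tracked gaze point (the window size and operator are illustrative choices, not the thesis's actual method):

```python
import numpy as np

def eye_adapted_tonemap(hdr_lum, gaze_xy, window=64, eps=1e-6):
    """Gaze-adapted global tone mapping (illustrative sketch only).

    The adaptation luminance L_a is the log-average luminance inside a
    window centred on the gaze point; a Reinhard-style operator
    L / (L + L_a) then compresses the HDR luminance to [0, 1).
    """
    x, y = gaze_xy
    h, w = hdr_lum.shape
    x0, x1 = max(0, x - window // 2), min(w, x + window // 2)
    y0, y1 = max(0, y - window // 2), min(h, y + window // 2)
    patch = hdr_lum[y0:y1, x0:x1]
    l_a = np.exp(np.mean(np.log(patch + eps)))  # log-average adaptation level
    return hdr_lum / (hdr_lum + l_a)

# Synthetic scene: dim left half (0.1 cd/m^2-ish), bright right half (10.0)
img = np.concatenate([np.full((64, 64), 0.1), np.full((64, 64), 10.0)], axis=1)
dark_adapted = eye_adapted_tonemap(img, gaze_xy=(16, 32))     # gaze on dim half
bright_adapted = eye_adapted_tonemap(img, gaze_xy=(112, 32))  # gaze on bright half
```

Gazing at the dim half lowers the adaptation level, brightening the rendering overall; gazing at the bright half does the opposite, mimicking how the eye re-adapts as it moves around a real scene.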

    The influence of restricted viewing conditions on egocentric distance perception: implications for real and virtual environments

    Technical report. Three experiments examined the influence of field-of-view and binocular viewing restrictions on absolute distance perception in the real world. Previous work has found that visually directed walking tasks reveal accurate distance estimation in full-cue, real-world environments at distances of up to about 20 meters. In contrast, the same tasks in virtual environments using head-mounted displays (HMDs) show large compression of distance. Restrictions of field of view and of binocular viewing are common in research with HMDs, yet they have rarely been studied under full pictorial-cue conditions in the context of distance perception in the real world. Experiment 1 determined that a view of one's body and feet on the floor was not necessary for accurate distance perception. Experiment 2 manipulated horizontal field of view and head rotation, finding that a restricted field of view did not affect the accuracy of distance estimation when head movement was allowed. Experiment 3 found that performance with monocular viewing was equal to that with binocular viewing. These results have implications for the information needed to scale egocentric distance in the real world and suggest that field-of-view and binocular viewing restrictions do not contribute substantially to the underestimation seen with HMDs.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation

    Pre-captured immersive environments using omnidirectional cameras support a wide range of virtual reality applications. Previous research has shown that manipulating the eye height in egocentric virtual environments can significantly affect distance perception and immersion. However, the influence of eye height in pre-captured real environments has received less attention because of the difficulty of altering the perspective after the capture process is finished. To explore this influence, we first propose a pilot study that captures real environments at multiple eye heights and asks participants to judge egocentric distances and immersion. If a significant influence is confirmed, an effective image-based approach for adapting pre-captured real-world environments to the user's eye height would be desirable. Motivated by the study, we propose a learning-based approach for synthesizing novel views of omnidirectional images with altered eye heights. This approach employs a multitask architecture that learns depth and semantic segmentation in two formats, and it generates high-quality depth and semantic segmentation to facilitate the inpainting stage. With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation. Quantitative and qualitative evaluation shows favorable results against state-of-the-art methods, and an extensive user study verifies improved perception and immersion for pre-captured real-world environments.
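Why eye height should matter for egocentric distance at all can be illustrated with the classic angular-declination account of ground-plane distance (an assumption for illustration; the paper does not commit to this model): a ground point seen at declination α below the horizon from eye height h lies at distance h/tan α, so viewing a capture made at one eye height while scaling by another rescales perceived distance by the eye-height ratio.

```python
import math

def perceived_distance(true_distance, capture_eye_height, observer_eye_height):
    """Angular-declination sketch (illustrative assumption, not the
    paper's model): the declination to a ground point is preserved by
    the capture, but the observer scales it with their own eye height,
    so perceived distance = true_distance * observer_h / capture_h."""
    alpha = math.atan2(capture_eye_height, true_distance)  # declination below horizon
    return observer_eye_height / math.tan(alpha)

# Camera captured lower than the observer's eye height:
# under this geometry, ground distances appear farther.
print(perceived_distance(5.0, 1.2, 1.6))
```

This simple geometric rescaling is one reason a pilot study varying capture eye height, as proposed above, is a sensible first step before building a view-synthesis pipeline.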

    Depth and Distance Perceptions within Virtual Reality Environments. A Comparison between HMDs and CAVEs in Architectural Design

    The perception of depth and distance is considered one of the most important factors in Virtual Reality Environments, as these environments inevitably affect the perception of virtual content compared with that of the real world. Many studies on depth and distance perception in virtual environments exist. Most of them were conducted using Head-Mounted Displays (HMDs) and fewer with large-screen displays such as those of Cave Automatic Virtual Environments (CAVEs). In this paper, we compare the different aspects of perception in architectural environments between CAVE systems and HMDs. The paper clarifies the Virtual Object as an entity in a VE, and the pros and cons of using CAVEs and HMDs are explained. Finally, only a first survey of the planned case study, the artificial port of the emperor Trajan near Fiumicino, has been carried out, as on-field experimentation could not be performed because of COVID-19.

    Follow the leader: Visual control of speed in pedestrian following

    When people walk together in groups or crowds, they must coordinate their walking speed and direction with their neighbors. This paper investigates how a pedestrian visually controls speed when following a leader on a straight path (one-dimensional following). To model the behavioral dynamics of following, participants in Experiment 1 walked behind a confederate who randomly increased or decreased his walking speed. The data were used to test six models of speed control that used the leader's speed, distance, or combinations of both to regulate the follower's acceleration. To test the optical information used to control speed, participants in Experiment 2 walked behind a virtual moving pole whose visual angle and binocular disparity were independently manipulated. The results indicate that followers match the speed of the leader, and that they do so using a visual control law that primarily nulls the leader's optical expansion (change in visual angle), with little influence of change in disparity. This finding has direct applications to understanding coordination among neighbors in human crowds.
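The winning control strategy, nulling the leader's optical expansion, can be sketched as a one-dimensional simulation. The gain, leader width, and speed profile below are illustrative assumptions, not the paper's fitted model or parameters:

```python
import math

def simulate_following(gain=80.0, width=0.5, dt=0.05, steps=1200):
    """Follower regulates speed by nulling the leader's optical expansion.

    theta = 2*atan(width / (2*r)) is the leader's visual angle at
    distance r; follower acceleration is -gain * d(theta)/dt, so the
    follower decelerates when the leader's image looms and accelerates
    when it shrinks. All parameters are illustrative assumptions."""
    leader_pos, follower_pos, follower_v = 4.0, 0.0, 1.2
    theta_prev = 2 * math.atan(width / (2 * (leader_pos - follower_pos)))
    for step in range(steps):
        leader_v = 1.2 if step < steps // 2 else 1.6  # leader speeds up midway
        leader_pos += leader_v * dt
        follower_pos += follower_v * dt
        theta = 2 * math.atan(width / (2 * (leader_pos - follower_pos)))
        follower_v += -gain * (theta - theta_prev)  # = -gain * dtheta/dt * dt
        theta_prev = theta
    return follower_v

print(round(simulate_following(), 3))  # follower settles near the leader's 1.6 m/s
```

Note that the controller never reads the leader's speed or distance directly; matching emerges from cancelling a purely optical variable, which is the point of the expansion-nulling account.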

    Stereoscopic 3D Technologies for Accurate Depth Tasks: A Theoretical and Empirical Study

    In the last decade an increasing number of application fields, including medicine, geoscience and bio-chemistry, have expressed a need to visualise and interact with data that are inherently three-dimensional. Stereoscopic 3D technologies can offer valid support for these operations thanks to the enhanced depth representation they provide. However, there is still little understanding of how such technologies can be used effectively to support the performance of visual tasks based on accurate depth judgements. Existing studies do not provide a sound and complete explanation of the impact of different visual and technical factors on depth perception in stereoscopic 3D environments. This thesis presents a new interpretative and contextualised analysis of the vision science literature to clarify the role of different visual cues in human depth perception in such environments. The analysis identifies luminance contrast, spatial frequency, colour, blur, transparency and depth constancies as influential visual factors for depth perception and provides the theoretical foundation for guidelines to support the performance of accurate stereoscopic depth tasks. A novel assessment framework is proposed and used to conduct an empirical study evaluating the performance of four distinct classes of 3D display technologies. The results suggest that 3D displays are not interchangeable and that the depth representation provided can vary even between displays belonging to the same class. The study also shows that interleaved displays may suffer from a number of aliasing artifacts, which in turn may affect the amount of perceived depth. The outcomes of the analysis of the influential visual factors for depth perception and of the empirical comparative study are used to propose a novel universal 3D cursor prototype suitable for supporting depth-based tasks in stereoscopic 3D environments.
The contribution includes a number of qualitative and quantitative guidelines that aim to guarantee a correct perception of depth in stereoscopic 3D environments and that should be observed when designing a stereoscopic 3D cursor.