
    Assessing Depth Perception in Virtual Environments: A Comprehensive Framework

    Understanding humans’ perception of depth and how they interact with virtual environments is a challenging task. This context involves investigating how features of these environments affect depth perception, which is crucial for tasks like object manipulation and navigation that require interpreting spatial information. This article presents a comprehensive (general, extensible, and flexible) framework to assess depth perception in different virtual environments to support the development of more effective and immersive virtual experiences. This approach can assist developers in decision-making regarding different approaches for assessing depth perception in virtual environments, considering stereoscopic and monoscopic visualization techniques. The framework considers parameters such as the distance between the user and virtual objects and the sizes of virtual objects. Metrics such as hit rate, response time, and presence questionnaire responses were used to assess depth perception. Previous experiments (with anaglyph and shutter glasses) are presented, as well as new experiments considering CAVE environments with and without anaglyph glasses.
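
    The parameters and metrics named above suggest a simple data model. A minimal sketch of how such trials and session metrics might be recorded, assuming hypothetical names not taken from the article:

        # Illustrative only: field names are assumptions, not the article's API.
        from dataclasses import dataclass, field
        from statistics import mean

        @dataclass
        class DepthTrial:
            distance_m: float        # user-to-object distance
            object_size_m: float     # virtual object size
            display: str             # e.g. "anaglyph", "shutter", "cave"
            hit: bool                # correct depth judgment?
            response_time_s: float

        @dataclass
        class Session:
            trials: list = field(default_factory=list)

            def hit_rate(self) -> float:
                return mean(1.0 if t.hit else 0.0 for t in self.trials)

            def mean_response_time(self) -> float:
                return mean(t.response_time_s for t in self.trials)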

    Perceived Depth in Virtual and Physical Environments

    Theoretically, stereopsis provides accurate depth information if information regarding absolute distance is accurate and reliable. However, assessments of stereopsis often report depth distortions, particularly for virtual stimuli. These distortions are often attributed to misestimates of viewing distance caused by limited distance cues and/or the presence of conflicts between ocular distance cues in virtual displays. To understand how these factors contribute to depth distortions, I conducted a series of experiments in which depth was estimated under a range of viewing conditions and cue combinations. In the first series (Chapter 2), I evaluated whether conflicts between oculomotor distance cues drive the depth underconstancy observed in virtual environments by comparing judgments of virtual and physical objects. The results showed that depth judgments of physical stimuli were accurate and exhibited depth constancy, but judgments of virtual stimuli failed to achieve depth constancy. This failure was due in part to the presence of the vergence-accommodation conflict. Further, prior experience with each environment had a profound effect on depth judgments; e.g., performance in virtual environments was enhanced by limited exposure to a similar task using physical objects. In Chapter 3, I assessed whether limitations of virtual environments contributed to previous failures of linear combination models to account for the integration of stereopsis and motion cues. I measured the perceived depth of virtual and physical objects defined by motion parallax, binocular disparity, or their combination. Accuracy was remarkably similar for both environments, but estimates were more precise when depth was defined by binocular disparity than by motion parallax. A linear combination model did not adequately describe performance in either physical or virtual conditions. In Chapter 4, I evaluated whether reaching to virtual objects provides distance information that can be used to scale stereopsis, using an interactive ring game. Brief experience reaching to virtual objects improved the accuracy and scaling of subsequent depth judgments. Overall, experience with physical objects or reaching-in-depth enhanced performance on tasks dependent on distance perception. To fully understand how binocular depth perception is used to interact with objects in the real world, it is important to assess these cues in rich, full-cue natural scenes.
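
    The linear combination model tested here is, in its standard form, a reliability-weighted average of the single-cue depth estimates. A minimal sketch under that assumption (variable names are illustrative, not the thesis's notation):

        # Standard inverse-variance (reliability-weighted) cue combination:
        # each cue is weighted by its reliability, 1/variance.
        def combine_cues(d_disparity, var_disparity, d_motion, var_motion):
            w = (1.0 / var_disparity) / (1.0 / var_disparity + 1.0 / var_motion)
            return w * d_disparity + (1.0 - w) * d_motion

    This model predicts a combined estimate at least as precise as either cue alone; the abstract reports that such an account fit neither the physical nor the virtual condition.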

    Do learners experience spatial and social presence in interactive environments based on 360-degree panoramas?: A pilot study and future research agenda

    The unforeseeable outbreak and progression of the COVID-19 pandemic, accompanied by crucial measures of both social and spatial distancing, has placed digital technologies in the spotlight of educational research and practice. A major challenge of technology-enhanced education is the preservation of the spatio-social character of learning despite distance. Virtual learning spaces hold the potential to spatially situate learning and make learners feel as if they were actually in a real physical learning environment (e.g., Hartmann & Bannert, 2022; Eiris et al., 2020; Makransky & Mayer, 2022). Compared to highly immersive virtual realities that are, for instance, accessed through a head-mounted display, interactive virtual learning environments based on 360-degree panoramas are less expensive to produce while seeming to enable comparable experiences of presence (Eiris et al., 2020; Ritter & Chambers, 2021). However, this rather novel learning format has not yet been empirically investigated in depth, neither in terms of its foundation nor regarding learners’ experience of presence during its use. The study described below aims to provide a first in-depth insight into learners’ spatial and social presence experience in interactive 360-degree panoramas. Therefore, we first summarize the current theoretical and empirical state of research. We then present the methodological approach and results of the study conducted, discuss them critically, and derive a comprehensive agenda for follow-up research. [From the introduction]

    Mediating presence in virtual design thinking workshops

    Design teams have been collaborating virtually due to the increasing demands of the globalized industry. The COVID-19 pandemic made virtual collaboration a necessity due to social restrictions imposed globally in 2020 and 2021. Design teams use virtual Design Thinking to collaborate remotely in real time. The outcomes of virtual Design Thinking rely on team composition, planning, the structure of activities, time management, and the choice of space and tools. While these factors have been researched in the context of traditional Design Thinking workshops, research on the selection of tools in virtual workshops is scarce due to the sudden increase in popularity and demand. This thesis investigates the experience of participants in virtual Design Thinking workshops with a focus on the collaborative environment and the tools used within it. Existing literature and participatory observations revealed that remote teams collaborate primarily in two-dimensional (2D) virtual environments using a combination of virtual whiteboards and video conferencing software. Participants face challenges due to the lack of 'presence.' Presence is an emerging topic in recent literature, especially in the context of immersive virtual environments such as three-dimensional (3D) and Virtual Reality (VR) spaces. However, these virtual environments are still in their infancy and require further development before they can support virtual Design Thinking. Qualitative research, in the form of participatory observations of four virtual Design Thinking workshops and in-depth interviews with seven participants, revealed the challenges participants face due to the lack of presence. Based on these findings, approaches to mediating presence were explored through the design of a 2D experimental virtual collaborative environment built to support virtual Design Thinking methods. The environment was tested with seven participants. Results indicated an improvement in participants' experience compared with existing virtual collaboration environments, and participants reported the overall experience to be on par with traditional Design Thinking workshops. The outcome of this thesis has vital implications for the choice and future development of virtual collaboration tools in the post-pandemic world.

    Phenomenal regression to the real object in physical and virtual worlds

    In this paper, we investigate a new approach to comparing physical and virtual size and depth percepts that captures the involuntary responses of participants to different stimuli in their field of view, rather than relying on their skill at judging size, reaching, or directed walking. We show, via an effect first observed in the 1930s, that participants asked to equate the perspective projections of disc objects at different distances make a systematic error that is both individual in its extent and comparable across the particular physical and virtual settings we have tested. Prior work has shown that this systematic error is difficult to correct, even when participants are aware of its likelihood of occurring. In fact, in the real world, the error only reduces as the available cues to depth are artificially reduced. This makes the effect we describe a potentially powerful, intrinsic measure of VE quality that may ultimately contribute to our understanding of VE depth compression phenomena.
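
    The 1930s effect in question is Thouless's "phenomenal regression to the real object", classically quantified by the Thouless ratio. A minimal sketch of that index (the paper may use a different measure):

        import math

        # Thouless ratio: 0 = purely projective (perspective) match,
        # 1 = full size constancy (the match equals the real object size).
        def thouless_ratio(matched: float, projective: float, real: float) -> float:
            return (math.log(matched) - math.log(projective)) / (
                math.log(real) - math.log(projective)
            )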

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR, floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
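
    Passive haptics of this kind hinges on registering the scanned virtual world to the tracker's coordinate frame, so that each virtual object is rendered where its physical counterpart stands. A hedged sketch of the core mapping, with names that are illustrative rather than from the paper:

        import numpy as np

        def physical_to_virtual(p_physical: np.ndarray, T: np.ndarray) -> np.ndarray:
            """Map a tracked 3D point into virtual-world coordinates.

            p_physical: (3,) point from the tracking system
            T: (4, 4) rigid transform obtained by registering the 3D scan
            """
            return (T @ np.append(p_physical, 1.0))[:3]  # homogeneous transform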

    A Content-Analysis Approach for Exploring Usability Problems in a Collaborative Virtual Environment

    As Virtual Reality (VR) products become more widely available in the consumer market, improving the usability of these devices and environments is crucial. In this paper, we introduce a framework for the usability evaluation of collaborative 3D virtual environments based on a large-scale usability study of a mixed-modality collaborative VR system. We first review previous literature on important usability issues related to collaborative 3D virtual environments, supplemented with our research, in which we conducted 122 interviews after participants solved a collaborative virtual reality task. Then, building on the literature review and our results, we extend previous usability frameworks. We identified twelve different usability problems and, based on their causes, grouped them into three main categories: VR environment-, device interaction-, and task-specific problems. The framework can be used to guide the usability evaluation of collaborative VR environments.
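
    The three top-level categories lend themselves to a content-analysis coding scheme. A minimal sketch with hypothetical problem labels (the abstract does not enumerate the twelve problems):

        from enum import Enum

        class Category(Enum):
            VR_ENVIRONMENT = "VR environment-specific"
            DEVICE_INTERACTION = "device interaction-specific"
            TASK = "task-specific"

        # Illustrative codes only; the paper's twelve problems are not listed here.
        coding_scheme = {
            "disorientation in the scene": Category.VR_ENVIRONMENT,
            "controller mapping confusion": Category.DEVICE_INTERACTION,
            "ambiguous task instructions": Category.TASK,
        }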