
    Monte-Carlo Redirected Walking: Gain Selection Through Simulated Walks

    We present Monte-Carlo Redirected Walking (MCRDW), a gain-selection algorithm for redirected walking. MCRDW applies the Monte-Carlo method to redirected walking by simulating a large number of simple virtual walks and then inversely applying redirection to the virtual paths. Different gain levels and directions are applied, producing differing physical paths. Each physical path is scored, and the results are used to select the best gain level and direction. We provide a simple example implementation and a simulation-based study for validation. In our study, when compared with the next best technique, MCRDW reduced the incidence of boundary collisions by over 50% while also reducing total rotation and position gain.
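
    Below is a minimal sketch of the Monte-Carlo gain-selection loop the abstract describes. The candidate gain values, the random-walk model, the room size, and the collision-based scoring are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Assumed candidate curvature gains (radians of injected rotation per metre walked).
CANDIDATE_GAINS = [-0.15, -0.075, 0.0, 0.075, 0.15]
NUM_WALKS = 500                 # simulated virtual walks per candidate gain
STEPS, STEP_LEN = 40, 0.25      # each walk: 40 steps of 0.25 m
ROOM_HALF = 2.0                 # assumed square tracked space, 4 m x 4 m

def simulate_virtual_walk(start_heading):
    """A simple virtual random walk: per-step heading jitter around the current heading."""
    heading, steps = start_heading, []
    for _ in range(STEPS):
        heading += random.gauss(0.0, 0.2)
        steps.append((STEP_LEN, heading))
    return steps

def to_physical(steps, gain, start):
    """Inversely apply redirection: inject `gain` rad of extra rotation per metre walked."""
    x, y = start
    injected, pts = 0.0, [(x, y)]
    for step_len, virt_heading in steps:
        injected += gain * step_len
        phys_heading = virt_heading + injected
        x += step_len * math.cos(phys_heading)
        y += step_len * math.sin(phys_heading)
        pts.append((x, y))
    return pts

def score(physical_pts):
    """Penalise positions outside the tracked boundary; higher scores are better."""
    return -sum(1 for x, y in physical_pts
                if abs(x) > ROOM_HALF or abs(y) > ROOM_HALF)

def select_gain(user_pos, user_heading):
    """Score every candidate gain over many simulated walks and keep the best one."""
    def total(gain):
        return sum(score(to_physical(simulate_virtual_walk(user_heading), gain, user_pos))
                   for _ in range(NUM_WALKS))
    return max(CANDIDATE_GAINS, key=total)
```

    A live controller would presumably re-run this selection periodically as the user moves; the walk model and scoring above are deliberately simplified.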

    Adaptive Optimization Algorithm for Resetting Techniques in Obstacle-ridden Environments


    Redirected Walking in Infinite Virtual Indoor Environment Using Change-blindness

    We present a change-blindness-based redirected walking algorithm that allows a user to explore, on foot, a virtual indoor environment consisting of an infinite number of rooms while ensuring collision-free walking in the real space. The method uses change blindness to scale and translate a room without the user's awareness by moving a wall while the user is not looking. Consequently, the virtual room containing the user always remains within the valid real space. We measured the detection threshold for noticing the movement of a wall outside the field of view, and then used this threshold to determine how much the room's dimensions can be changed by moving that wall. We conducted a live-user experiment in which participants navigated the same virtual environment using the proposed method and other existing methods. Users reported higher usability, presence, and immersion with the proposed method, and showed reduced motion sickness compared with the other methods. Hence, our approach can be used to implement applications that let users explore an infinitely large virtual indoor environment, such as a virtual museum or a virtual model house, while walking in a small real space, giving users a more realistic experience.
    Comment: https://www.youtube.com/watch?v=s-ZKavhXxd
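
    A minimal sketch of the out-of-view wall update the abstract describes follows. The field-of-view half-angle, the per-update detection threshold, and the simple 2D wall model are illustrative assumptions, not values from the paper.

```python
import math

FOV_HALF_ANGLE = math.radians(55)   # assumed horizontal half field of view of the HMD
DETECTION_THRESHOLD = 0.10          # assumed largest unnoticed wall shift per update (metres)

def wall_outside_fov(user_pos, user_forward, wall_point):
    """True when the wall point lies outside the user's horizontal field of view."""
    to_wall = math.atan2(wall_point[1] - user_pos[1], wall_point[0] - user_pos[0])
    facing = math.atan2(user_forward[1], user_forward[0])
    diff = math.atan2(math.sin(to_wall - facing), math.cos(to_wall - facing))
    return abs(diff) > FOV_HALF_ANGLE

def update_wall(user_pos, user_forward, wall_pos, target_pos):
    """Move the wall toward its target only while it is unseen, and never by more
    than the detection threshold in one update, so the change goes unnoticed."""
    if not wall_outside_fov(user_pos, user_forward, wall_pos):
        return wall_pos                              # wall is visible: leave it alone
    dx, dy = target_pos[0] - wall_pos[0], target_pos[1] - wall_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= DETECTION_THRESHOLD:
        return target_pos
    scale = DETECTION_THRESHOLD / dist
    return (wall_pos[0] + dx * scale, wall_pos[1] + dy * scale)
```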

    ARC: Alignment-based Redirection Controller for Redirected Walking in Complex Environments

    We present a novel redirected walking controller based on alignment that allows the user to explore large and complex virtual environments, while minimizing the number of collisions with obstacles in the physical environment. Our alignment-based redirection controller, ARC, steers the user such that their proximity to obstacles in the physical environment matches the proximity to obstacles in the virtual environment as closely as possible. To quantify a controller's performance in complex environments, we introduce a new metric, Complexity Ratio (CR), to measure the relative environment complexity and characterize the difference in navigational complexity between the physical and virtual environments. Through extensive simulation-based experiments, we show that ARC significantly outperforms current state-of-the-art controllers in its ability to steer the user on a collision-free path. We also show through quantitative and qualitative measures of performance that our controller is robust in complex environments with many obstacles. Our method is applicable to arbitrary environments and operates without any user input or parameter tweaking, aside from the layout of the environments. We have implemented our algorithm on the Oculus Quest head-mounted display and evaluated its performance in environments with varying complexity. Our project website is available at https://gamma.umd.edu/arc/
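
    The following sketch illustrates the alignment idea in simplified form: steer so that physical proximity to the nearest obstacle tracks virtual proximity. The point-obstacle model, the dead-band, and the rotation-gain bounds are assumptions for illustration, not the paper's controller.

```python
import math

def nearest_obstacle_distance(pos, obstacles):
    """Distance from a 2D position to the closest obstacle (obstacles modelled as points)."""
    return min(math.dist(pos, o) for o in obstacles)

def alignment_error(phys_pos, virt_pos, phys_obstacles, virt_obstacles):
    """Positive when the user has more physical clearance than virtual clearance."""
    return (nearest_obstacle_distance(phys_pos, phys_obstacles)
            - nearest_obstacle_distance(virt_pos, virt_obstacles))

def choose_rotation_gain(error, dead_band=0.05, min_gain=0.67, max_gain=1.24):
    """Pick a rotation gain that nudges the physical path toward better alignment.
    The 0.67/1.24 bounds are commonly cited perceptual limits, used here as assumptions;
    the actual ARC controller also weighs the user's motion and applies other gains."""
    if abs(error) < dead_band:
        return 1.0          # already well aligned: no redirection
    return max_gain if error < 0.0 else min_gain
```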

    LoCoMoTe – a framework for classification of natural locomotion in VR by task, technique and modality

    Virtual reality (VR) research has provided overviews of locomotion techniques, how they work, their strengths, and the overall user experience. Considerable research has investigated new methodologies, particularly machine learning, to develop redirection algorithms. To best support the development of redirection algorithms through machine learning, we must understand how best to replicate human navigation and behaviour in VR, which can be supported by the accumulation of results produced through live-user experiments. However, it can be difficult to identify, select, and compare relevant research without a pre-existing framework in an ever-growing research field. Therefore, this work aimed to facilitate the ongoing structuring and comparison of the VR-based natural walking literature by providing a standardised framework for researchers to utilise. We applied thematic analysis to the methodology descriptions of 140 VR-based papers that contained live-user experiments. From this analysis, we developed the LoCoMoTe framework with three themes: navigational decisions, technique implementation, and modalities. The LoCoMoTe framework provides a standardised approach to structuring and comparing experimental conditions, and should be continually updated to categorise and systematise knowledge and to aid in identifying research gaps and discussions.
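
    As a concrete illustration of how a single study might be coded under the three themes, here is a small record type; the field names and example values are assumptions, not the published coding scheme.

```python
from dataclasses import dataclass, field

@dataclass
class LoCoMoTeEntry:
    """Illustrative record for classifying one VR locomotion study under the
    framework's three themes (names are assumptions, not the published scheme)."""
    study: str
    navigational_decisions: list = field(default_factory=list)    # e.g. task type, path choice
    technique_implementation: list = field(default_factory=list)  # e.g. gains, resetting strategy
    modalities: list = field(default_factory=list)                 # e.g. visual, auditory, haptic

example = LoCoMoTeEntry(
    study="Hypothetical live-user walking experiment",
    navigational_decisions=["free exploration"],
    technique_implementation=["rotation gain"],
    modalities=["visual"],
)
```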

    Edge-Centric Space Rescaling with Redirected Walking for Dissimilar Physical-Virtual Space Registration

    We propose a novel space-rescaling technique for registering dissimilar physical-virtual spaces by utilizing the effects of adjusting physical space with redirected walking. Achieving a seamless immersive Virtual Reality (VR) experience requires overcoming the spatial heterogeneities between the physical and virtual spaces and accurately aligning the VR environment with the user's tracked physical space. However, existing space-matching algorithms that rely on one-to-one scale mapping are inadequate for highly dissimilar physical and virtual spaces, and redirected walking controllers cannot utilize basic geometric information from the physical space in the virtual space because of coordinate distortion. To address these issues, we apply relative translation gains to partitioned space grids based on the main interactable object's edge, which enables space-adaptive modification of the physical space without coordinate distortion. Our evaluation demonstrates the effectiveness of our algorithm in aligning the main object's edge, surface, and wall, as well as in securing the largest registered area compared to alternative methods under all conditions. These findings can be used to create an immersive play area for VR content in which users receive passive feedback from the plane and edge in their physical environment.
    Comment: Accepted to the 2023 ISMAR conference (2023/10/16-2023/10/20); 10 pages, 5 figures.
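
    A minimal sketch of the partition-and-gain idea in one dimension follows: the space is split at the main object's edge and each partition gets its own relative translation gain, keeping the physical-to-virtual mapping continuous. The two-cell grid and the gain values are illustrative assumptions, not the paper's method.

```python
def make_partitions(edge_x, room_min_x, room_max_x, gain_before, gain_after):
    """Partition the walking space along the main object's edge (at x = edge_x)
    and assign an illustrative relative translation gain to each side."""
    return [
        {"min_x": room_min_x, "max_x": edge_x, "gain": gain_before},
        {"min_x": edge_x, "max_x": room_max_x, "gain": gain_after},
    ]

def physical_to_virtual_x(phys_x, partitions):
    """Map a physical x coordinate to a virtual one by stretching or shrinking each
    partition with its own gain; the mapping stays continuous and monotonic."""
    origin = partitions[0]["min_x"]
    virt_x = 0.0
    for cell in partitions:
        lo, hi, g = cell["min_x"], cell["max_x"], cell["gain"]
        if phys_x >= hi:
            virt_x += (hi - lo) * g      # whole cell traversed
        elif phys_x > lo:
            virt_x += (phys_x - lo) * g  # partially inside this cell
    return origin + virt_x

# Example: stretch space before a table edge at x = 1.5 m, compress behind it.
cells = make_partitions(edge_x=1.5, room_min_x=0.0, room_max_x=4.0,
                        gain_before=1.2, gain_after=0.8)
print(physical_to_virtual_x(2.0, cells))   # 1.5*1.2 + 0.5*0.8 = 2.2
```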

    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday lives. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching the consumer level. VR is inherently different from traditional media: it offers a more immersive experience and has the ability to elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues that are available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality and its role and benefits in the final user experience. The works reviewed here thus encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of applications that leverage multimodal input in areas such as medicine, training and education, and entertainment; we include works in which the integration of multiple sources of sensory information yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed and VR experiences are created and consumed.