21 research outputs found

    Edge-Centric Space Rescaling with Redirected Walking for Dissimilar Physical-Virtual Space Registration

    We propose a novel space-rescaling technique for registering dissimilar physical-virtual spaces by utilizing the effects of adjusting physical space with redirected walking. Achieving a seamless immersive Virtual Reality (VR) experience requires overcoming the spatial heterogeneity between the physical and virtual spaces and accurately aligning the VR environment with the user's tracked physical space. However, existing space-matching algorithms that rely on one-to-one scale mapping are inadequate when dealing with highly dissimilar physical and virtual spaces, and redirected walking controllers cannot utilize basic geometric information from the physical space in the virtual space due to coordinate distortion. To address these issues, we apply relative translation gains to partitioned space grids based on the main interactable object's edge, which enables space-adaptive modification of the physical space without coordinate distortion. Our evaluation results demonstrate the effectiveness of our algorithm in aligning the main object's edge, surface, and wall, as well as securing the largest registered area compared to alternative methods under all conditions. These findings can be used to create an immersive play area for VR content where users can receive passive feedback from the planes and edges in their physical environment.
    Comment: Accepted to the 2023 ISMAR conference (2023/10/16-2023/10/20); 10 pages, 5 figures
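The per-cell gain idea behind relative translation gains can be sketched as follows; this is a minimal illustration, not the paper's implementation: the two-cell partition along a single edge, the gain values, and the function name are all assumptions made for clarity.

```python
import numpy as np

def apply_relative_translation_gain(prev_pos, curr_pos, edge_x,
                                    gain_near, gain_far):
    """Scale the user's physical displacement by a per-cell gain.

    Toy version of relative translation gains: the tracked floor is
    split into two grid cells along the main object's edge (here a
    vertical line x = edge_x), and movement inside each cell is scaled
    by that cell's own gain, so different parts of the physical space
    are rescaled independently without distorting coordinates globally.
    """
    prev_pos = np.asarray(prev_pos, dtype=float)
    delta = np.asarray(curr_pos, dtype=float) - prev_pos
    # Pick the gain of the cell the user is currently in.
    gain = gain_near if curr_pos[0] < edge_x else gain_far
    return prev_pos + gain * delta
```

A step of 1 m taken on the near side of the edge with `gain_near=1.5` maps to 1.5 m of virtual movement, while the same step on the far side with `gain_far=1.0` is left unscaled.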

    Projection-based Registration Using a Multi-view Camera for


    SWAG demo : smart watch assisted gesture interaction for mixed reality head-mounted displays

    In this demonstration, we will show a prototype system with a sensor-fusion approach to robustly track the 6 degrees of freedom of hand movement and support intuitive hand-gesture interaction and 3D object manipulation for Mixed Reality head-mounted displays. Robust tracking of the hand and fingers with an egocentric camera remains a challenging problem, especially with self-occlusion – for example, when the user tries to grab a virtual object in midair by closing the palm. Our approach leverages a common smart watch worn on the wrist to provide more reliable palm and wrist orientation data, fusing it with the camera data to achieve robust hand motion and orientation for interaction.
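The fusion strategy described above can be sketched as follows; the weighting scheme, quaternion blend, and all names are illustrative assumptions rather than the demo's actual pipeline:

```python
import numpy as np

def fuse_hand_state(camera_position, camera_orientation,
                    watch_orientation, orientation_weight=0.8):
    """Toy sensor fusion in the spirit of the demo: take the hand
    position from the egocentric camera, but blend the wrist/palm
    orientation toward the smartwatch IMU reading, which stays
    reliable under self-occlusion. Orientations are unit quaternions
    [w, x, y, z]; the blend weight is a made-up tuning parameter.
    """
    q = ((1.0 - orientation_weight) * np.asarray(camera_orientation, float)
         + orientation_weight * np.asarray(watch_orientation, float))
    q /= np.linalg.norm(q)  # renormalize the blended quaternion
    return np.asarray(camera_position, dtype=float), q
```

When the camera loses the palm (e.g. a closed fist), a real system would push `orientation_weight` toward 1 so the watch dominates; linear quaternion blending is only a reasonable approximation for nearby orientations.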

    Systematic literature review on user logging in virtual reality

    In this systematic literature review, we study the role of user logging in virtual reality research. By categorizing the literature according to data collection methods and identifying reasons for data collection, we aim to find out how popular user logging is in virtual reality research. In addition, we identify publications with detailed descriptions of logging solutions. Our results suggest that virtual reality logging solutions are relatively seldom described in detail, even though many studies gather data through body tracking. Most of the papers gather data to demonstrate something about a novel functionality or to compare different technologies, without discussing logging details. The results can be used for scoping future virtual reality research.

    Real-time interactive modeling and scalable multiple object tracking for AR

    We propose a real-time solution for modeling and tracking multiple 3D objects in unknown environments for Augmented Reality. The proposed solution consists of both scalable tracking and interactive modeling. Our contribution is twofold: First, we show how to scale with the number of objects using keyframes. This is done by combining recent techniques for image retrieval and online Structure from Motion, which can be run in parallel. As a result, tracking 50 objects in 3D can be done within 6-35 ms per frame, even under difficult conditions for tracking. Second, we propose a method to let the user add new objects very quickly. The user simply has to select, in an image, a 2D region lying on the object. A 3D primitive is then fitted to the features within this region and adjusted to create the object's 3D model. We demonstrate the modeling of polygonal and circular-based objects. In practice, this procedure takes less than a minute.
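The keyframe-based scaling step can be sketched as a retrieval problem: rather than matching the current frame against every object, the system first asks a keyframe index which objects are plausibly visible, then tracks only those. The index below is a hypothetical stand-in for the paper's image-retrieval component (names and the nearest-descriptor scoring are assumptions):

```python
import numpy as np

class KeyframeIndex:
    """Toy keyframe index: each tracked object contributes descriptor
    vectors extracted from its keyframes; retrieval returns the
    objects whose keyframes best match the query frame's descriptor,
    so per-frame tracking cost stays bounded as objects are added."""

    def __init__(self):
        self.descriptors = []  # list of (object_id, descriptor) pairs

    def add_keyframe(self, object_id, descriptor):
        self.descriptors.append((object_id, np.asarray(descriptor, float)))

    def retrieve(self, query, k=2):
        """Return up to k distinct object ids, nearest keyframe first."""
        query = np.asarray(query, dtype=float)
        scored = sorted(self.descriptors,
                        key=lambda pair: np.linalg.norm(pair[1] - query))
        seen, result = set(), []
        for obj, _ in scored:
            if obj not in seen:
                seen.add(obj)
                result.append(obj)
            if len(result) == k:
                break
        return result
```

In a real system the descriptors would be bag-of-words or similar image-retrieval signatures, and the expensive retrieval would run in a background thread while the foreground thread tracks the already-retrieved objects.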

    Scalable Real-time Planar Targets Tracking for Digilog Books

    We propose a novel 3D tracking method that supports several hundred pre-trained potential planar targets without losing real-time performance. This goes well beyond the state of the art, and to reach this level of performance, two threads run in parallel: the foreground thread tracks feature points from frame to frame to ensure real-time performance, while a background thread aims at recognizing the visible targets and estimating their poses. The latter relies on a coarse-to-fine approach: assuming that one target is visible at a time, which is reasonable for Digilog Book applications, it first recognizes the visible target with an image-retrieval algorithm, then matches feature points between the target and the input image to estimate the target pose. The background thread is more demanding than the foreground one, and is therefore several times slower. We thus propose a simple but effective mechanism for the background thread to communicate its results to the foreground thread without lag. Our implementation runs at more than 125 frames per second with 314 potential planar targets. Its applicability is demonstrated with an Augmented Reality book application.
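The lag-free handoff between the two threads can be sketched as pose propagation: by the time the slow background thread finishes a pose estimate, that pose refers to an older frame, so the foreground thread composes it with the frame-to-frame motions it tracked in the meantime. This is an illustrative reconstruction of the idea, not the paper's exact mechanism; poses and motions are modeled as 3x3 homogeneous transforms:

```python
import numpy as np

def propagate_pose(old_pose, frame_to_frame_motions):
    """Bring a background-thread pose estimate (valid for an older
    frame) up to the current frame by composing it with the per-frame
    motions the foreground thread recorded since that frame, oldest
    first. All transforms are 3x3 homogeneous matrices."""
    pose = np.asarray(old_pose, dtype=float)
    for motion in frame_to_frame_motions:
        pose = motion @ pose  # apply each newer motion on top
    return pose
```

The foreground thread only needs to keep a short buffer of inter-frame transforms covering the background thread's latency, which keeps the handoff cheap enough to preserve the real-time frame rate.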