    NaviFields: relevance fields for adaptive VR navigation

    Virtual Reality allows users to explore virtual environments naturally, by moving their head and body. However, the size of the environments they can explore is limited by real-world constraints, such as the tracking technology or the physical space available. Existing techniques that remove these limitations often break the metaphor of natural navigation in VR (e.g., steering techniques), involve explicit control commands (e.g., teleporting) or hinder precise navigation (e.g., scaling users' displacements). This paper proposes NaviFields, which quantify the requirements for precise navigation at each point of the environment, allowing natural navigation within relevant areas while scaling users' displacements when travelling across non-relevant spaces. This expands the size of the navigable space, retains the natural navigation metaphor and still allows for areas with precise control of the virtual head. We present a formal description of our NaviFields technique, which we compared against two alternative solutions (homogeneous scaling and natural navigation). Our results demonstrate the ability to cover larger spaces, introduce minimal disruption when travelling across larger distances and significantly improve precise control of the viewpoint inside relevant areas.
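    A minimal sketch of the displacement-scaling idea described above, assuming a toy Gaussian relevance field and an illustrative maximum gain g_max (both invented here; the paper gives its own formal field definition):

```python
import numpy as np

def relevance(p, centers, radii):
    """Toy relevance field in [0, 1]: ~1.0 near any point of interest,
    falling off with distance (Gaussian bumps). A hypothetical choice,
    not the paper's formal definition."""
    f = 0.0
    for c, r in zip(centers, radii):
        f = max(f, np.exp(-np.sum((p - c) ** 2) / (2 * r ** 2)))
    return f

def scaled_step(p, real_step, centers, radii, g_max=8.0):
    """Scale a tracked head displacement: gain ~1 inside relevant areas
    (natural, precise control), rising to g_max in empty space."""
    gain = 1.0 + (g_max - 1.0) * (1.0 - relevance(p, centers, radii))
    return real_step * gain

# One point of interest at the origin, radius 1 m.
centers, radii = [np.array([0.0, 0.0])], [1.0]
p = np.array([5.0, 0.0])  # far from any relevant area
print(scaled_step(p, np.array([0.1, 0.0]), centers, radii))  # amplified step
```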

    GiAnt: stereoscopic-compliant multi-scale navigation in VEs

    Navigation in multi-scale virtual environments (MSVEs) requires adjusting the navigation parameters to ensure an optimal navigation experience at each level of scale. In particular, in immersive stereoscopic systems, e.g. when performing zoom-in and zoom-out operations, the navigation speed and the stereoscopic rendering parameters have to be adjusted accordingly. Although this adjustment can be done manually by the user, it can be complex and tedious, and it strongly depends on the virtual environment. In this work we propose a new multi-scale navigation technique named GiAnt (GIant/ANT) which automatically and seamlessly adjusts the navigation speed and the scale factor of the virtual environment based on the user's perceived navigation speed. The adjustment ensures an almost-constant perceived navigation speed while avoiding diplopia effects or diminished depth perception due to improper stereoscopic rendering configurations. The results from our user evaluation show that GiAnt is an efficient multi-scale navigation technique that minimizes changes of the scale factor of the virtual environment compared to state-of-the-art multi-scale navigation techniques.
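    The core adjustment can be illustrated in miniature: nudge the world scale factor so that perceived speed stays near a target. This sketch assumes perceived speed can be modeled as physical speed divided by the scale factor, with invented names, smoothing and limits; GiAnt's actual controller also accounts for the stereoscopic rendering constraints:

```python
def adjust_scale(current_speed, scale, target_perceived_speed,
                 smoothing=0.1, s_min=1e-3, s_max=1e3):
    """One update step toward an almost-constant perceived speed.
    All parameter values are placeholders, not GiAnt's."""
    perceived = current_speed / scale
    # Scale that would make perceived speed exactly match the target.
    desired = scale * (perceived / target_perceived_speed)
    # Move gradually to keep the change seamless for the user.
    new_scale = (1 - smoothing) * scale + smoothing * desired
    return min(max(new_scale, s_min), s_max)

# Moving fast relative to the target: the world scale grows slightly.
print(adjust_scale(current_speed=4.0, scale=1.0, target_perceived_speed=2.0))
```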

    Designing, testing and adapting navigation techniques for the immersive web

    One of the most essential interactions in Virtual Reality (VR) is the user’s ability to move around and explore the virtual environment. The design of the navigation technique plays a crucial role in the user experience since it determines key usability aspects. VR devices allow for an immersive exploration of 3D worlds, but navigation in VR is challenging for many users, due to potential usability issues related to specific VR controllers, user skills, and motion sickness. Although hundreds of interaction techniques have been proposed for this task, VR navigation still poses a high entry barrier for many users. In this paper we argue that adapting the navigation technique to its context of use can lead to substantial improvements in navigation usability and accessibility. The context of use includes the type of scene, the available physical space, and the profile of the user. We present a test platform to facilitate the design and fine-tuning of interaction techniques for 3D navigation. We focus on mainstream VR devices (headsets and controllers) and support the most common navigation metaphors (walking, flying, teleportation). The key idea is to let developers specify, at runtime, the exact mapping between user actions and locomotion changes, for any of the supported metaphors. Such mappings are described by a collection of parameters (e.g. maximum speed) whose values can be adjusted interactively through a GUI, or be provided by user-defined code which can be edited at runtime. Feedback obtained from developers suggests that this approach can be used to quickly adapt the navigation techniques to various people, including persons with no previous 3D navigation skills, elderly people, and people with disabilities, as well as to the type, size and semantics of the virtual environment. This work has been funded by MCIN/AEI/10.13039/501100011033/FEDER ‘‘A way to make Europe’’. The Pedret model was partially funded by EU Horizon 2020, the JPICH Conservation, Protection and Use initiative (JPICH-0127) and the Spanish Agencia Estatal de Investigación, grant PCI2020-111979, Enhancement of Heritage Experiences: the Middle Ages; Digital Layered Models of Architecture and Mural Paintings over Time (EHEM).
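    As an illustration of the kind of runtime-tunable mapping such a platform exposes, here is a sketch of a flying-metaphor parameter set and its action-to-speed mapping; the names and default values are invented for this example, not the platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class FlyParams:
    """Runtime-tunable parameters for a 'flying' metaphor, in the
    spirit of the paper's editable mappings (illustrative names)."""
    max_speed: float = 3.0   # m/s
    dead_zone: float = 0.1   # ignore small stick deflections
    exponent: float = 2.0    # non-linear response curve

def fly_speed(stick: float, p: FlyParams) -> float:
    """Map a controller axis value in [-1, 1] to a signed speed."""
    mag = abs(stick)
    if mag < p.dead_zone:
        return 0.0
    # Re-normalize past the dead zone, then apply the response curve.
    t = (mag - p.dead_zone) / (1.0 - p.dead_zone)
    return (1.0 if stick > 0 else -1.0) * p.max_speed * t ** p.exponent

# A GUI (or user code edited at runtime) would mutate FlyParams live.
print(fly_speed(0.5, FlyParams()))
```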

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, generating an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, and thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for drift effects in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.

    A motion control method for a differential drive robot based on human walking for immersive telepresence

    This thesis introduces an interface for controlling Differential Drive Robots (DDRs) for telepresence applications. Our goal is to enhance the immersive experience while reducing user discomfort when using Head-Mounted Displays (HMDs) and body trackers. The robot is equipped with a 360° camera that captures the Robot Environment (RE). Users wear an HMD and use body trackers to navigate within a Local Environment (LE). Through a live video stream from the robot-mounted camera, users perceive the RE within a virtual sphere known as the Virtual Environment (VE). A proportional controller was employed to facilitate control of the robot, enabling it to replicate the movements of the user. The proposed method uses a chest tracker to control the telepresence robot, and focuses on minimizing vection and rotations induced by the robot’s motion by modifying the VE, such as rotating and translating it. Experimental results demonstrate the accuracy of the robot in reaching target positions when controlled through the body-tracker interface. They also reveal an optimal VE size that effectively reduces VR sickness and enhances the sense of presence.
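    A textbook proportional controller for a differential drive robot, of the general kind the thesis describes, might look like the following sketch; the gains, limits and function names are illustrative placeholders, not the thesis's values:

```python
import math

def p_control(x, y, theta, x_t, y_t,
              k_lin=0.8, k_ang=2.0, v_max=0.5, w_max=1.5):
    """Drive a differential drive robot at pose (x, y, theta) toward a
    target (x_t, y_t), e.g. one derived from the user's chest tracker.
    Returns (forward velocity m/s, angular velocity rad/s)."""
    dx, dy = x_t - x, y_t - y
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - theta
    # Wrap the heading error to [-pi, pi].
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    v = max(-v_max, min(v_max, k_lin * dist))       # proportional in distance
    w = max(-w_max, min(w_max, k_ang * heading_err))  # proportional in heading
    return v, w

print(p_control(0.0, 0.0, 0.0, 1.0, 1.0))  # e.g. (0.5, 1.5) at the limits
```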

    Modeling online adaptive navigation in virtual environments based on PID control

    It is well known that locomotion-dominated navigation tasks can strongly provoke cybersickness. Past research has proposed numerous approaches to tackle this issue based on offline considerations. In this work, a novel approach to mitigate cybersickness is presented based on online adaptive navigation. Building on the Proportional-Integral-Derivative (PID) control method, we propose a mathematical model for online adaptive navigation, parameterized with several parameters, that takes as input the user's electrodermal activity (EDA), an efficient indicator of cybersickness level, and provides as output adapted navigation accelerations. Minimizing the cybersickness level is therefore regarded as an optimization problem: find the PID model parameters that reduce the severity of cybersickness. User studies were organized to collect non-adapted navigation accelerations and the corresponding EDA signals. A deep neural network was then trained to learn the correlation between EDA and navigation accelerations. The hyperparameters of the network were obtained through the Optuna open-source framework. To validate the performance of the optimized online adaptive navigation developed through PID control, we performed an analysis in a simulated user study based on the pre-trained deep neural network. Results indicate a significant reduction of cybersickness in terms of EDA signal analysis and motion sickness dose value. This is pioneering work presenting a systematic strategy for adaptive navigation settings from a theoretical point of view.
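    A minimal sketch of the underlying loop, assuming a textbook discrete PID whose error is the deviation of the EDA signal from a comfort baseline and whose output damps navigation acceleration; the gains, baseline and sample values are placeholders, not the paper's fitted parameters:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_err is None else (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.5, ki=0.05, kd=0.1)       # placeholder gains
baseline = 2.0                            # comfort EDA baseline (placeholder)
for eda in [2.1, 2.4, 2.8, 2.6]:          # fake EDA samples
    adjust = -pid.step(eda - baseline, dt=0.5)
    print(round(adjust, 3))               # cut acceleration as EDA rises
```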

    Navigating Immersive and Interactive VR Environments With Connected 360° Panoramas

    Emerging research is expanding the idea of using 360-degree spherical panoramas of real-world environments in 360 VR experiences beyond video and image viewing. However, most of these experiences are strictly guided, with few opportunities for interaction or exploration. There is a desire to develop experiences with cohesive virtual environments created with 360 VR that allow for choice in navigation, as opposed to scripted experiences with limited interaction. Unlike standard VR, with the freedom of synthetic graphics, 360 VR poses challenges in designing appropriate user interfaces (UIs) for navigation within the limitations of fixed assets. To tackle this gap, we designed RealNodes, a software system that presents an interactive and explorable 360 VR environment. We also developed four visual guidance UIs for 360 VR navigation. The results of a pilot study showed that the choice of UI had a significant effect on task completion times, with one of our methods, Arrow, performing best. Arrow also exhibited positive but non-significant trends in average measures of preference, user engagement, and simulator sickness. RealNodes, the UI designs, and the pilot study results contribute preliminary information to inspire future investigation of how to design effective explorable scenarios in 360 VR and visual guidance metaphors for navigation in applications using 360 VR environments.
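    One plausible core data structure for such a system is a graph of panorama nodes with labelled walkable connections; the following sketch is a guess at that kind of structure (all names invented), not RealNodes' actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PanoNode:
    """A 360-degree panorama with walkable connections to neighbors."""
    image: str
    neighbors: dict = field(default_factory=dict)  # direction label -> node id

nodes = {
    "lobby":  PanoNode("lobby.jpg",  {"north": "hall"}),
    "hall":   PanoNode("hall.jpg",   {"south": "lobby", "east": "office"}),
    "office": PanoNode("office.jpg", {"west": "hall"}),
}

def walk(current: str, direction: str) -> str:
    """Follow a connection if one exists, else stay in place."""
    return nodes[current].neighbors.get(direction, current)

print(walk("lobby", "north"))  # -> "hall"
```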

    Immersive 360° video for forensic education

    Training in the investigation of forensic crime scenes is a vital part of the overall training process within police academies and forensic programs throughout the world. However, trainee forensic officers' exposure to real-life scenes is minimal, due to the delicate nature of the information presented within them and the overall difficulty of forensic investigations. Virtual Reality (VR) is computer technology utilising headsets to produce lifelike imagery, sounds and perceptions that simulate physical presence inside a virtual setting. The user is able to look around the virtual world and often interact with virtual landscapes or objects. VR headsets are head-mounted goggles with a screen in front of the eyes (Burdea & Coiffet 2003). The use of VR varies widely, from personal gaming to classroom learning; uses also include computerised tools that are used solely online. Within forensic science, VR is currently used in several capacities, including the training and examination of new forensic officers. However, there is minimal review and validation of the efficacy of VR for teaching forensic investigation. This is surprising, as VR has expanded rapidly in the education of many varying fields over the past few years. Even though VR could enhance forensic training by offering another, perhaps more versatile and engaging way of learning, no dedicated VR application has yet been commercially implemented for forensic examination education. Research into VR is a fairly young field; however, the technology and its use are still growing rapidly, and the improvement of interactive tools is inevitably having an impact on all facets of learning and teaching.

    Augmented reality and its aspects: a case study for heating systems

    Double-degree master's thesis with UTFPR - Universidade Tecnológica Federal do Paraná. Advances of technology in various domains, and the mixing between real and virtual worlds, allowed this master's thesis to explore concepts related to virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR). The development and comparison of Android and Microsoft HoloLens applications aimed to solve a deadlock in the recognition of instructions by users. We used an interactive manual for the assembly and disassembly of taps of residential heaters. This work therefore deals with three main parts: first, the exploration of the concepts of VR, AR, MR, and XR; second, 3D modeling and animation techniques; and finally, the development of applications using Vuforia, Wikitude, and MRTK. Users tried our application “HeaterGuideAR” to verify the effectiveness of the instructions conveyed by the interactive manual. Only a few users had some difficulties at the beginning of the trials, so it was necessary to provide aid tools; the other users were able to disassemble the faucet without any external help. We suggest continuing this work with more explorations, models, and situations.