
    NaviFields: relevance fields for adaptive VR navigation

    Virtual Reality allows users to explore virtual environments naturally, by moving their head and body. However, the size of the environments they can explore is limited by real-world constraints, such as the tracking technology or the physical space available. Existing techniques that remove these limitations often break the metaphor of natural navigation in VR (e.g., steering techniques), involve control commands (e.g., teleporting) or hinder precise navigation (e.g., scaling users' displacements). This paper proposes NaviFields, which quantify the requirements for precise navigation at each point of the environment, allowing natural navigation within relevant areas while scaling users' displacements when travelling across non-relevant spaces. This expands the size of the navigable space, retains the natural navigation metaphor and still allows for areas with precise control of the virtual head. We present a formal description of our NaviFields technique, which we compared against two alternative solutions (i.e., homogeneous scaling and natural navigation). Our results demonstrate our ability to cover larger spaces, introduce minimal disruption when travelling across bigger distances and significantly improve precise control of the viewpoint inside relevant areas.
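    The central mechanism described above, a per-point relevance field that modulates how much a user's physical displacement is amplified, can be sketched as follows. This is a minimal illustration under invented assumptions (Gaussian relevance bumps around points of interest, a maximum gain G_MAX), not the paper's actual formulation:

```python
import math

# Hypothetical sketch: each relevant area contributes a Gaussian
# "relevance bump". Where relevance is high, the gain approaches 1
# (natural 1:1 navigation); where it is low, the gain approaches
# G_MAX (amplified travel across non-relevant space).

RELEVANT_AREAS = [((2.0, 3.0), 1.0), ((10.0, 8.0), 1.5)]  # (center, spread)
G_MAX = 8.0  # maximum displacement gain in non-relevant space (invented)

def relevance(x, y):
    """Relevance in [0, 1]: 1 at the center of a relevant area, falling off outside."""
    r = 0.0
    for (cx, cy), sigma in RELEVANT_AREAS:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        r = max(r, math.exp(-d2 / (2 * sigma ** 2)))
    return r

def gain(x, y):
    """Interpolate between a 1:1 mapping and amplified travel."""
    return 1.0 + (G_MAX - 1.0) * (1.0 - relevance(x, y))

def virtual_step(pos, physical_delta):
    """Map one physical displacement to a (possibly amplified) virtual one."""
    g = gain(*pos)
    return (pos[0] + g * physical_delta[0], pos[1] + g * physical_delta[1])
```

    A 10 cm physical step near a relevant area moves the viewpoint about 10 cm; the same step far from any relevant area moves it up to G_MAX times further.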

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focusses on the exploration of the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focused on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices; (2) fatigue in mid-air object manipulation; (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focusses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for the drift effects involved in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.

    Navigating Immersive and Interactive VR Environments With Connected 360° Panoramas

    Emerging research is expanding the idea of using 360-degree spherical panoramas of real-world environments in 360 VR experiences beyond video and image viewing. However, most of these experiences are strictly guided, with few opportunities for interaction or exploration. There is a desire to develop cohesive virtual environments created with 360 VR that allow for choice in navigation, versus scripted experiences with limited interaction. Unlike standard VR, with the freedom of synthetic graphics, there are challenges in designing appropriate user interfaces (UIs) for 360 VR navigation within the limitations of fixed assets. To tackle this gap, we designed RealNodes, a software system that presents an interactive and explorable 360 VR environment. We also developed four visual guidance UIs for 360 VR navigation. The results of a pilot study showed that the choice of UI had a significant effect on task completion times, with one of our methods, Arrow, performing best. Arrow also exhibited positive but non-significant trends in average measures of preference, user engagement and simulator sickness. RealNodes, the UI designs and the pilot study results contribute preliminary information that inspires future investigation of how to design effective explorable scenarios in 360 VR and visual guidance metaphors for navigation in applications using 360 VR environments.

    Designing better spaces for people: Virtual reality and biometric sensing as tools to evaluate space use

    We present a pilot study exploring the use of biometric sensing technology within a semi-immersive VR environment, where users face architectural spaces that induce sensations akin to fear of heights, claustrophobia, frustration and relief. Electrodermal activity was used to detect users’ emotional arousal while navigating in VR. Navigation conditions and participants’ expertise with games were controlled. The main results show that physiological measurement of users’ perceptions can discriminate well between "positive" and "negative" spaces, providing designers with basic information on people’s emotional state when using the buildings they design.

    New VR Navigation Techniques to Reduce Cybersickness

    In state-of-the-art VR environments, displayed in CAVEs or HMDs, navigation techniques may frequently induce cybersickness or VR-Induced Symptoms and Effects (VRISE), drastically limiting the comfortable use of VR environments that impose no navigation limitations. In two distinct experiments, we investigated acceleration VRISE thresholds for longitudinal and rotational motions and compared three different VR systems: two CAVEs and an HMD (Oculus Rift DK2). We found that VRISE occur more often and more strongly for rotational motions, and found no major difference between the CAVEs and the HMD. Based on the obtained thresholds, we developed a new "Head Lock" navigation method for rotational motions in a virtual environment in order to generate a “Pseudo AR” mode, keeping the visual references of the outside world fixed. A third experiment showed that this new metaphor significantly reduces VRISE occurrences and may be a useful basis for future natural navigation techniques.
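    One straightforward way a navigation controller could apply such measured thresholds is to clamp the commanded rotational acceleration so it never exceeds the comfort limit. A minimal sketch, with an invented threshold value rather than one of the thresholds reported in the experiments:

```python
# Minimal sketch: limit commanded angular acceleration to stay below a
# VRISE comfort threshold. The threshold value is invented for
# illustration, not a result from the study.

ROT_ACCEL_THRESHOLD = 20.0  # deg/s^2, hypothetical comfort limit

def clamp_rotational_accel(current_vel, target_vel, dt):
    """Return the new angular velocity (deg/s) after capping acceleration."""
    accel = (target_vel - current_vel) / dt
    if abs(accel) > ROT_ACCEL_THRESHOLD:
        accel = ROT_ACCEL_THRESHOLD if accel > 0 else -ROT_ACCEL_THRESHOLD
    return current_vel + accel * dt
```

    With this cap in place, a sudden request to spin at full speed is smoothed into a gradual ramp, which is the kind of modulation a threshold-based comfort system would perform each frame.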

    Predicting real world spatial disorientation in Alzheimer's disease patients using virtual reality navigation tests

    Spatial navigation impairments in Alzheimer's disease (AD) have been suggested to underlie patients experiencing spatial disorientation. Though many studies have highlighted navigation impairments for AD patients in virtual reality (VR) environments, the extent to which these impairments predict a patient's risk of spatial disorientation in the real world is still poorly understood. The aims of this study were to (a) investigate the spatial navigation abilities of AD patients in VR environments as well as in a real-world community setting and (b) explore whether we could predict patients at high risk of spatial disorientation in the community based on their VR navigation. Sixteen community-dwelling AD patients and 21 age/gender-matched controls were assessed on their egocentric and allocentric navigation abilities in VR environments using the Virtual Supermarket Test (VST) and Sea Hero Quest (SHQ), as well as in the community using the Detour Navigation Test (DNT). When compared to controls, AD patients exhibited impairments on the VST, SHQ and DNT. For patients, only SHQ wayfinding distance and wayfinding duration significantly predicted the composite disorientation score on the DNT (β = 0.422, p = 0.034, R² = 0.299 and β = 0.357, p = 0.046, R² = 0.27, respectively). However, these same VR measures could not reliably predict which patients were at highest risk of spatial disorientation in the community (p > 0.1). Future studies should focus on developing VR-based tests which can predict AD patients at high risk of getting spatially disoriented in the real world.
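    The kind of analysis reported above, regressing a real-world disorientation score on a single VR navigation measure and reporting R², can be illustrated with a minimal ordinary least squares fit. The data here are synthetic, not the study's:

```python
# Minimal one-predictor OLS sketch: fit y = alpha + beta * x and
# report R^2, as when relating an SHQ wayfinding measure (x) to a
# composite DNT disorientation score (y). Data below are synthetic.

def fit_ols(x, y):
    """Return (alpha, beta) minimizing squared error for y = alpha + beta*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return alpha, beta

def r_squared(x, y, alpha, beta):
    """Proportion of variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

    An R² of 0.299, as reported for wayfinding distance, would mean the VR measure accounts for roughly 30% of the variance in the real-world disorientation score.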

    Modulating the performance of VR navigation tasks using different methods of presenting visual information

    Spatial navigation is an essential ability in our daily lives that we use to move through different locations. In Virtual Reality (VR), the environments that users navigate may be large and similar to real-world places. It is usually desirable to guide users in order to prevent them from getting lost and to make it easier for them to reach the goal or discover important spots in the environment. However, doing so in a way that the guidance is neither intrusive, breaking the immersion and sense of presence, nor too hard to notice, and therefore not useful, can be a challenge. In this work we conducted an experiment in which we adapted a probabilistic learning paradigm, the Weather Prediction task, to spatial navigation in VR. Subjects navigated one of two versions of procedurally generated T-junction mazes in Virtual Reality. In one version, the environment contained visual cues in the form of street signs whose presence predicted the correct turning direction. In the other version the cues were present but not predictive. Results showed that when subjects navigated the mazes with the predictive cues they made fewer mistakes, and therefore the cues helped them navigate the environments. A comparison with previous neuroscience literature revealed that the strategies used by subjects to solve the task were different from those in the original 2D experiment. This work is intended to be used as a basis to further improve spatial navigation in VR with more immersive and implicit methods, and as another example of how the cognitive neuroscience and virtual reality research fields can greatly benefit each other.
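    The core experimental manipulation, street signs that either do or do not predict the correct turn at each procedurally generated T-junction, can be sketched as follows. The function and parameter names (and the 0.8 cue validity) are hypothetical, not taken from the study:

```python
import random

# Hypothetical sketch of the cue-validity manipulation: in the
# predictive condition the sign points to the correct turn with
# probability cue_validity; in the non-predictive condition the sign
# is uninformative (50/50). Values here are invented for illustration.

def make_maze(n_junctions, predictive, cue_validity=0.8, rng=None):
    """Generate a list of T-junctions, each with a correct turn and a sign."""
    rng = rng or random.Random(0)  # seeded for reproducible mazes
    junctions = []
    for _ in range(n_junctions):
        correct = rng.choice(["left", "right"])
        if predictive and rng.random() < cue_validity:
            sign = correct  # cue points the right way
        else:
            sign = rng.choice(["left", "right"])  # cue carries no information
        junctions.append({"correct": correct, "sign": sign})
    return junctions
```

    A subject who learns to follow the signs will outperform chance only in the predictive condition, which is the behavioral difference the experiment measures.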

    Virtual reality in neurologic rehabilitation of spatial disorientation

    BACKGROUND: Topographical disorientation (TD) is a severe and persistent impairment of spatial orientation and navigation in familiar as well as new environments, and a common consequence of brain damage. Virtual reality (VR) provides a new tool for the assessment and rehabilitation of TD. In VR training programs, different degrees of active motor control over navigation may be implemented (i.e., more passive vs. more active spatial navigation). Increasing demands of active motor control may overload the visuo-spatial resources necessary for learning spatial orientation and navigation. In the present study we used a VR-based, verbally guided passive navigation training program to improve general spatial abilities in neurologic patients with spatial disorientation. METHODS: Eleven neurologic patients with focal brain lesions who showed deficits in spatial orientation, as well as 11 neurologically healthy controls, performed route-finding training in a virtual environment. Participants learned and recalled different routes for navigation in a virtual city over five training sessions. Before and after VR training, general spatial abilities were assessed with standardized neuropsychological tests. RESULTS: Route-finding ability in the VR task increased over the five training sessions. Moreover, both groups improved different aspects of spatial abilities after VR training in comparison to their spatial performance before VR training. CONCLUSIONS: Verbally guided passive navigation training in VR enhances general spatial cognition in neurologic patients with spatial disorientation as well as in healthy controls, and can therefore be useful in the rehabilitation of spatial deficits associated with TD.