
    The Effects of Finger-Walking in Place (FWIP) on Spatial Knowledge Acquisition in Virtual Environments

    Spatial knowledge, necessary for efficient navigation, comprises route knowledge (memory of landmarks along a route) and survey knowledge (an overall, map-like representation). Virtual environments (VEs) have been suggested as a powerful tool for studying issues associated with human navigation, such as spatial knowledge acquisition. The Finger-Walking-in-Place (FWIP) interaction technique is a locomotion technique for navigation tasks in immersive virtual environments (IVEs). FWIP was designed to map a human's embodied ability, overlearned through natural walking, to a finger-based interaction technique. Its implementation on Lemur and iPhone/iPod Touch devices was evaluated in our previous studies. In this paper, we present a comparative study of the joystick's flying technique versus FWIP. Our experimental results show that FWIP yields better performance than joystick flying for route knowledge acquisition in our maze navigation tasks.

    Curve and surface framing for scientific visualization and domain dependent navigation

    Thesis (Ph.D.) - Indiana University, Computer Science, 1996.
    Curves and surfaces are two of the most fundamental types of objects in computer graphics. Most existing systems use only the 3D positions of the curves and surfaces, and the 3D normal directions of the surfaces, in the visualization process. In this dissertation, we attach moving coordinate frames to curves and surfaces, and explore several applications of these frames in computer graphics and scientific visualization. Curves in space are difficult to perceive and analyze, especially when they are densely clustered, as is typical in computational fluid dynamics and volume deformation applications. Coordinate frames are useful for exposing the similarities and differences between curves. They are also useful for constructing ribbons, tubes, and smooth camera orientations along curves. In many 3D systems, users interactively move the camera around the objects with a mouse or other device. But all the camera control is done independently of the properties of the objects being viewed, as if the user is flying freely in space. This type of domain-independent navigation is frequently inappropriate in visualization applications and is sometimes quite difficult for the user to control. Another productive approach is to look at domain-specific constraints and thus to create a new class of navigation strategies. Based on attached frames on surfaces, we can constrain the camera gaze direction to be always parallel (or at a fixed angle) to the surface normal. Then users will get a feeling of driving on the object instead of flying through the space. The user's mental model of the environment being visualized can be greatly enhanced by the use of these constraints in the interactive interface. Many of our research ideas have been implemented in Mesh View, an interactive system for viewing and manipulating geometric objects. It contains a general purpose C++ library for nD geometry and supports a winged-edge based data structure. Dozens of examples of scientifically interesting surfaces have been constructed and included with the system.
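The moving frames the abstract describes can be computed in several ways; a common choice for ribbons, tubes, and smooth camera paths is the parallel-transport frame, which rotates the previous normal by the minimal rotation between successive tangents and so avoids the twisting of the Frenet frame near low-curvature points. A minimal sketch (an illustration of the general technique, not the dissertation's C++ implementation):

```python
import numpy as np

def parallel_transport_frames(points):
    """Return unit (tangent, normal, binormal) triples for each sample of a 3D curve."""
    pts = np.asarray(points, dtype=float)
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    # Seed the first normal with any vector not parallel to the first tangent.
    seed = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(seed, tangents[0])) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    normal = seed - np.dot(seed, tangents[0]) * tangents[0]
    normal /= np.linalg.norm(normal)

    frames = []
    for i, t in enumerate(tangents):
        if i > 0:
            # Minimal rotation taking the previous tangent to the current one.
            axis = np.cross(tangents[i - 1], t)
            s = np.linalg.norm(axis)
            if s > 1e-12:
                axis /= s
                angle = np.arctan2(s, np.dot(tangents[i - 1], t))
                # Rodrigues rotation of the previous normal about `axis`.
                normal = (normal * np.cos(angle)
                          + np.cross(axis, normal) * np.sin(angle)
                          + axis * np.dot(axis, normal) * (1 - np.cos(angle)))
        binormal = np.cross(t, normal)
        frames.append((t, normal / np.linalg.norm(normal), binormal))
    return frames
```

Sweeping a small cross-section along the (normal, binormal) plane of each frame yields a tube; aligning the camera's up vector with the transported normal yields the smooth orientations mentioned above.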

    Shake-Your-Head: Revisiting Walking-In-Place for Desktop Virtual Reality

    The Walking-In-Place interaction technique was introduced to enable infinite navigation in 3D virtual worlds by walking in place in the real world. The technique was initially developed for users standing in immersive setups and was built upon sophisticated visual displays and tracking equipment. In this paper, we revisit the whole pipeline of the Walking-In-Place technique to match a larger set of configurations, and notably apply it to the context of desktop Virtual Reality. With our novel "Shake-Your-Head" technique, the user can sit down and use small screens and standard input devices, such as a basic webcam, for tracking. The locomotion simulation can compute various motions such as turning, jumping, and crawling, using the head movements of the user as its sole input. We also introduce additional visual feedback based on camera motions to enhance the walking sensations. An experiment was conducted to compare our technique with classical input devices used for navigating in desktop VR. Interestingly, the results showed that our technique could even allow faster navigation when sitting, after a short learning period. Our technique was also perceived as more fun and as increasing presence, and was generally more appreciated for VR navigation.
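The core of such head-driven locomotion is a mapping from tracked head positions to motion commands. A minimal sketch under assumed conventions (the class, gains, and mapping below are illustrative, not the paper's actual pipeline): vertical head oscillation, as produced by walking in place or head bobbing, drives forward speed, while leaning left or right of a rest position steers.

```python
from collections import deque

class HeadLocomotion:
    """Hypothetical mapping from tracked head positions to locomotion commands."""

    def __init__(self, window=10, speed_gain=5.0, turn_gain=2.0):
        self.ys = deque(maxlen=window)  # recent vertical head positions
        self.speed_gain = speed_gain
        self.turn_gain = turn_gain

    def update(self, head_x, head_y, rest_x=0.0):
        """head_x, head_y: head position from any tracker (e.g. a webcam face tracker)."""
        self.ys.append(head_y)
        # Vertical oscillation amplitude over the window approximates step activity.
        amplitude = max(self.ys) - min(self.ys)
        forward_speed = self.speed_gain * amplitude
        # Leaning left/right of the rest position steers the virtual camera.
        turn_rate = self.turn_gain * (head_x - rest_x)
        return forward_speed, turn_rate
```

Each frame, the application would feed the tracker's head position into `update` and integrate the returned speed and turn rate into the camera pose; jumping or crawling could be detected analogously from sustained upward or downward head offsets.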

    An investigation of entorhinal spatial representations in self-localisation behaviours

    Spatially modulated cells of the medial entorhinal cortex (MEC) and neighbouring cortices are thought to provide the neural substrate for self-localisation behaviours. These cells include the grid cells of the MEC, which are thought to perform path integration operations to update self-location estimates. To read this grid code, downstream cells are thought to reconstruct a positional estimate as a simple rate-coded representation of space. Here, I examine the coding schemes of grid cells and putative readout cells recorded from mice performing a virtual reality (VR) linear location task that engaged the mice in both beaconing and path integration behaviours. I found that grid cells can encode two distinct coding schemes on the linear track: a position code, which reflects periodic grid fields anchored to salient features of the track, and a distance code, which reflects periodic grid fields without this anchoring. Grid cells were found to switch between these coding schemes within sessions. When grid cells encoded position, mice performed better on trials that required path integration but not on trials that required beaconing. This result provides the first mechanistic evidence linking grid cell activity to path integration-dependent behaviour. Putative readout cells were found in the form of ramp cells, which fire proportionally as a function of location in defined regions of the linear track. This ramping activity was primarily explained by track position rather than by other kinematic variables such as speed and acceleration. These representations were maintained across both trial types and outcomes, indicating that they likely result from recall of the track structure. Together, these results support the functional importance of grid and ramp cells for self-localisation behaviours. Future investigations will examine the coherence between these two neural populations, which may together form a complete neural system for coding and decoding self-location in the brain.

    Automatic detection of disorientation among people with dementia

    Ageing is characterized by a decline in cognition, including visuospatial function, which is necessary for independently executing instrumental activities of daily living. The onset of Alzheimer's disease dementia exacerbates this decline, creating major challenges for patients and an increased burden for caregivers. An important function affected by this decline is spatial orientation. This work provides insight into the substrates of real-world wayfinding challenges among older adults, with emphasis on viable features aiding the detection of spatial disorientation and the design of possible interventions.
