Wayfinding and Glaucoma: A Virtual Reality Experiment.
Purpose: Wayfinding, the process of determining and following a route between an origin and a destination, is an integral part of everyday tasks. The purpose of this study was to investigate the impact of glaucomatous visual field loss on wayfinding behavior using an immersive virtual reality (VR) environment. Methods: This cross-sectional study included 31 glaucomatous patients and 20 healthy subjects without evidence of overall cognitive impairment. Wayfinding experiments were modeled after the Morris water maze navigation task and conducted in an immersive VR environment. Two rooms were built, varying only in the complexity of the visual scene, in order to promote allocentric-based (room A, with multiple visual cues) versus egocentric-based (room B, with a single visual cue) spatial representations of the environment. Wayfinding tasks in each room consisted of revisiting previously visible targets that subsequently became invisible. Results: For room A, glaucoma patients spent on average 35.0 seconds to perform the wayfinding task, whereas healthy subjects spent an average of 24.4 seconds (P = 0.001). For room B, no statistically significant difference was seen in average time to complete the task (26.2 seconds versus 23.4 seconds, respectively; P = 0.514). For room A, each 1-dB-worse binocular mean sensitivity was associated with a 3.4% (P = 0.001) increase in time to complete the task. Conclusions: Glaucoma patients performed significantly worse on allocentric-based wayfinding tasks conducted in a VR environment, suggesting visual field loss may affect the construction of spatial cognitive maps relevant to successful wayfinding. VR environments may represent a useful approach for assessing functional vision endpoints in clinical trials of emerging therapies in ophthalmology.
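The reported room A association (a 3.4% increase in completion time per 1 dB worse binocular mean sensitivity) implies a multiplicative, constant-percentage-per-dB relationship. A minimal sketch of that arithmetic, taking the healthy-subject mean of 24.4 seconds as the baseline and using a hypothetical 10 dB deficit purely for illustration (not a value from the study):

```python
# Illustrative sketch of the multiplicative association reported for room A:
# each 1-dB-worse binocular mean sensitivity ~ 3.4% longer completion time.
# The 24.4 s baseline is the healthy-subject mean from the abstract; the
# 10 dB deficit below is hypothetical, chosen only for illustration.

def predicted_time(baseline_s: float, db_worse: float, pct_per_db: float = 0.034) -> float:
    """Predicted wayfinding time under a constant-percentage-per-dB model."""
    return baseline_s * (1.0 + pct_per_db) ** db_worse

print(round(predicted_time(24.4, 0.0), 1))   # baseline: 24.4
print(round(predicted_time(24.4, 10.0), 1))  # hypothetical 10 dB deficit: 34.1
```

Under these assumed numbers, a 10 dB deficit compounds to roughly a 40% slowdown, on the order of the between-group difference the abstract reports for room A.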
Challenges for identifying the neural mechanisms that support spatial navigation: the impact of spatial scale.
Spatial navigation is a fascinating behavior that is essential for our everyday lives. It involves nearly all sensory systems, it requires numerous parallel computations, and it engages multiple memory systems. One of the key problems in this field pertains to the question of reference frames: spatial information such as direction or distance can be coded egocentrically (relative to an observer) or allocentrically (in a reference frame independent of the observer). While many studies have associated striatal and parietal circuits with egocentric coding and entorhinal/hippocampal circuits with allocentric coding, this strict dissociation is not in line with a growing body of experimental data. In this review, we discuss some of the problems that can arise when studying the neural mechanisms that are presumed to support different spatial reference frames. We argue that the scale of space in which a navigation task takes place plays a crucial role in determining the processes that are being recruited. This has important implications, particularly for the inferences that can be made from animal studies in small-scale space about the neural mechanisms supporting human spatial navigation in large (environmental) spaces. Furthermore, we argue that many of the tasks commonly used to study spatial navigation and the underlying neuronal mechanisms involve different types of reference frames, which can complicate the interpretation of neurophysiological data.
Which way do I go? Neural activation in response to feedback and spatial processing in a virtual T-maze
In two human event-related brain potential (ERP) experiments, we examined the feedback error-related negativity (fERN), an ERP component associated with reward processing by the midbrain dopamine system, and the N170, an ERP component thought to be generated by the medial temporal lobe (MTL), to investigate the contributions of these neural systems toward learning to find rewards in a "virtual T-maze" environment. We found that feedback indicating the absence versus presence of a reward differentially modulated fERN amplitude, but only when the outcome was not predicted by an earlier stimulus. By contrast, when a cue predicted the reward outcome, then the predictive cue (and not the feedback) differentially modulated fERN amplitude. We further found that the spatial location of the feedback stimuli elicited a large N170 at electrode sites sensitive to right MTL activation and that the latency of this component was sensitive to the spatial location of the reward, occurring slightly earlier for rewards following a right versus left turn in the maze. Taken together, these results confirm a fundamental prediction of a dopamine theory of the fERN and suggest that the dopamine and MTL systems may interact in navigational learning tasks.
Master of Science thesis
The Morris water maze is a task adapted from the animal spatial cognition literature and has been studied in the context of sex differences in humans, particularly because of the standard design, which manipulates proximal (close) and distal (far) cues. However, there are mixed findings with respect to the interaction of cues and sex differences in virtual Morris water maze tasks, which may be attributed to variations in the scale of the space and previously unmeasured individual differences. We explore the question of scale and context by presenting participants with an outdoor virtual Morris water maze that is four times the size of the mazes previously tested. We also measured lifetime mobility and mental rotation skills. Results of this study suggest that for the small-scale environment, males and females performed similarly when asked to navigate with only proximal cues. However, males outperformed females when only distal cues were visible. In the large-scale environment, males outperformed females in both cue conditions. Additionally, greater mental rotation skills predicted better navigation performance with proximal cues only. Finally, we found that highly mobile females and males performed equally well when navigating with proximal cues.
The Effects of Finger-Walking in Place (FWIP) on Spatial Knowledge Acquisition in Virtual Environments
Spatial knowledge, necessary for efficient navigation, comprises route knowledge (memory of landmarks along a route) and survey knowledge (an overall representation, like a map). Virtual environments (VEs) have been suggested as a powerful tool for understanding some issues associated with human navigation, such as spatial knowledge acquisition. The Finger-Walking-in-Place (FWIP) interaction technique is a locomotion technique for navigation tasks in immersive virtual environments (IVEs). FWIP was designed to map a human's embodied ability, overlearned through natural walking, to a finger-based interaction technique for navigation. Its implementation on Lemur and iPhone/iPod Touch devices was evaluated in our previous studies. In this paper, we present a comparative study of the joystick's flying technique versus FWIP. Our experiment results show that FWIP yields better performance than the joystick's flying technique for route knowledge acquisition in our maze navigation tasks.
Testing the acquisition and use of navigation strategies in humans using a virtual environment
Navigation is the area of spatial cognition related to how people move through space. Agents represent this space using reference frames fixed relative to the agent (egocentric) or the environment (allocentric). Research into how reference frames are used and interact has revealed many variables that can affect navigation. The thesis aim was to assess some of these variables and observe the important, modulatory roles of environment structure and complexity. For this, a virtual Morris water maze analogue was designed to flexibly assess allocentric, intrinsic information-based, and sequential response-based navigation. This research focussed on four facets of the interaction between environment and navigation: 1) how knowledge of different reference systems develops over time in an environment; 2) what information drives improvements in navigation; 3) how reference systems interact when they suggest competing responses; 4) the relationship between the preceding points and environmental complexity. The results showed successful allocentric navigation after little training. Successful self-referential knowledge took longer to develop. Allocentric knowledge was centred on landmarks, overshadowing other cues, while egocentric knowledge was idiothetic. Conflict tests showed a strong preference for allocentric navigation that related to training maze complexity. A simpler training maze produced more egocentric navigators with relatively accurate route knowledge. These results provide further evidence for the multiple types of spatial navigation information that can be acquired and utilised, and demonstrate the importance of considering environment design in navigation research. The strong correspondence between these results and the real-world navigation of human and non-human animals also suggests this virtual reality setup as a promising way to assess navigation in future.
Gaze Behaviour during Space Perception and Spatial Decision Making
A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them in order to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take, as if following a guided route. Subsequently they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments, such as corners, openings, and occlusions. Together, the results suggest that gaze during wayfinding tasks is directed toward, and can be predicted by, a subset of environmental features, and that gaze bias effects are a general phenomenon of visual decision making.
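The "robust gaze bias towards the eventually chosen path option" in Experiment 1 is the kind of effect typically quantified as the fraction of fixation time falling on the chosen option. A hypothetical sketch of such an index (the fixation durations below are invented for illustration and do not come from the study):

```python
# Hypothetical gaze-bias index of the kind such eye-tracking studies compute:
# the fraction of total fixation time spent on the eventually chosen path
# option. A value above 0.5 indicates a bias toward the chosen option.
# Fixation durations below are invented for illustration only.

def gaze_bias(chosen_ms: float, other_ms: float) -> float:
    """Proportion of fixation time on the chosen option (0.5 = no bias)."""
    total = chosen_ms + other_ms
    return chosen_ms / total if total else 0.5

print(round(gaze_bias(1200, 800), 2))  # 0.6 -> bias toward the chosen option
```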