Evaluating distributed cognitive resources for wayfinding in a desktop virtual environment.
As 3D interfaces, and in particular virtual environments, become increasingly realistic, there is a need to investigate the location and configuration of information resources, as distributed across the human-computer system, to support any required activities. It is important for the designer of 3D interfaces to be aware of information resource availability and distribution when considering issues such as cognitive load on the user. This paper explores how a model of distributed resources can support the design of alternative aids to virtual environment wayfinding with varying levels of cognitive load. The wayfinding aids have been implemented and evaluated in a desktop virtual environment.
Navigation and wayfinding in learning spaces in 3D virtual worlds
There is a lack of published research on the design guidelines of learning spaces in virtual worlds. Therefore, when institutions aspire to create learning spaces in Second Life, there are few studies or guidelines to inform them except for individual case studies. The Design of Learning Spaces in 3D Virtual Environments (DELVE) project, funded by the Joint Information Systems Committee in the UK, was one of the first initiatives that identified through empirical investigations the usability problems associated with learning spaces in virtual worlds and the potential impact on student experience. The findings of the DELVE project revealed that applying architectural principles of real-world designs to virtual worlds may not be sufficient. In fact, design principles from urban planning, Human–Computer Interaction (HCI), web usability, geography, and psychology influence the design of learning spaces in virtual worlds.
In DELVE, the researchers derived several usability guidelines: form should follow function, that is, the shape of a building or object should be based primarily on its intended function or purpose; use real-world metaphors, such as mailboxes for students to leave messages or search pods similar to real-world information kiosks; consider realism for familiarity and comfort; design for storytelling; and design to orient the user at the landing point. However, the investigations in DELVE identified that the key usability problems experienced by users in 3D learning spaces are related to navigation and wayfinding.
In this chapter, we report on the Navigation and Wayfinding (NAVY) project which builds on the findings of the DELVE project. As the most commonly used virtual world for education, Second Life was the logical choice for conducting the NAVY project research. Based upon empirical investigations of a number of islands in Second Life (an island is a space which is analogous to a website in a 2D environment) involving user-based studies, heuristic evaluations, and iterative reviews of the heuristics by usability experts, we have derived over 200 guidelines for the design of learning spaces in virtual worlds.
Navigating Immersive and Interactive VR Environments With Connected 360° Panoramas
Emerging research is expanding the use of 360-degree spherical panoramas of real-world environments in 360 VR experiences beyond video and image viewing. However, most of these experiences are strictly guided, with few opportunities for interaction or exploration. There is a desire to develop experiences with cohesive virtual environments created with 360 VR that allow for choice in navigation, versus scripted experiences with limited interaction. Unlike standard VR, with its freedom of synthetic graphics, 360 VR poses challenges in designing appropriate user interfaces (UIs) for navigation within the limitations of fixed assets. To tackle this gap, we designed RealNodes, a software system that presents an interactive and explorable 360 VR environment. We also developed four visual guidance UIs for 360 VR navigation. The results of a pilot study showed that choice of UI had a significant effect on task completion times, with one of our methods, Arrow, performing best. Arrow also exhibited positive but non-significant trends in average measures of preference, user engagement, and simulator sickness. RealNodes, the UI designs, and the pilot study results contribute preliminary information to inspire future investigation of how to design effective explorable scenarios in 360 VR and visual guidance metaphors for navigation in applications using 360 VR environments.
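The abstract above describes explorable scenes built from connected 360° panoramas. As a rough illustration of the underlying idea (this is not RealNodes' actual API; all names and parameters here are hypothetical), such a scene can be modeled as a graph of panorama nodes linked at view directions, with navigation triggered when the user's gaze direction falls near a link's hotspot:

```python
# Hypothetical sketch of a connected-panorama scene graph for 360 VR
# navigation; names and thresholds are illustrative assumptions.

class PanoNode:
    def __init__(self, name, image_path):
        self.name = name
        self.image_path = image_path   # 360-degree photo shown at this spot
        self.links = {}                # hotspot yaw angle (deg) -> neighbour

    def link(self, yaw_deg, other, reciprocal_yaw=None):
        """Connect this panorama to another at a given view direction."""
        self.links[yaw_deg] = other
        if reciprocal_yaw is not None:
            other.links[reciprocal_yaw] = self

    def navigate(self, gaze_yaw_deg, tolerance=15.0):
        """Return the linked node whose hotspot lies within `tolerance`
        degrees of the user's current gaze direction, else None."""
        for hotspot_yaw, node in self.links.items():
            # shortest angular difference, wrapped to [-180, 180)
            diff = (gaze_yaw_deg - hotspot_yaw + 180) % 360 - 180
            if abs(diff) <= tolerance:
                return node
        return None

lobby = PanoNode("lobby", "lobby.jpg")
hall = PanoNode("hall", "hall.jpg")
lobby.link(90.0, hall, reciprocal_yaw=270.0)

assert lobby.navigate(95.0) is hall    # gaze near the hotspot: move
assert hall.navigate(100.0) is None    # no hotspot in that direction
```

A full system would add transition effects and the visual guidance UIs evaluated in the study; this sketch captures only the node-graph structure and gaze-based selection.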
Technological framework for ubiquitous interactions using context–aware mobile devices
This report presents the research and development of a dedicated system architecture designed to enable its users to interact with each other and to access information on Points of Interest in their immediate environment. This is accomplished by managing personal preferences and contextual information in a distributed manner and in real time. The advantage of this system architecture is that it uses mobile devices, heterogeneous sensors, and a selection of user interface paradigms to produce a sociotechnical framework that enhances perception of the environment and promotes intuitive interactions. The thrust of the work has been on software development and component integration. Iterative prototyping was adopted as the development method in order to incorporate users' feedback effectively and establish a platform for collaboration that closely meets their requirements and aids their decision-making process. Requirement acquisition was followed by a system-modelling phase to produce a robust software prototype. The implementation includes component-based development and extensive use of design patterns over native programming. Ultimately, the software product became the means to evaluate differences in the use of mixed reality technologies in a ubiquitous scenario.
The prototype can query a number of context sources, such as sensors or details of the personal profile, to acquire relevant data. The data (and metadata) are stored in open-source structures, so that they are accessible at every layer of the system architecture and at any time. By proactively processing the acquired context, the system can assist users in their tasks (e.g. navigation) without explicit input, for example by simply making a gesture with the device. However, more advanced interaction with the application via the user interface is available for more complex requests.
Representations of real-world objects, their spatial relations, and other captured features of interest are visualised on scalable interfaces, ranging from 2D to 3D models and from photorealism to stylised cues and symbols. Two principal modes of operation have been implemented: one using geo-referenced virtual reality models of the environment, updated in real time, and the other using the overlay of descriptive annotations and graphics on video images of the surroundings captured by a camera. The latter is referred to as augmented reality.
The continuous feed of the device's position and orientation data from the GPS receiver and the digital compass into the application makes the framework fit for use in unknown environments and therefore suitable for ubiquitous operation. This is one of the novelties of the proposed framework, because it enables a whole range of social, peer-to-peer interactions to take place. Scenarios of how the system could be employed to pursue these remote interactions and collaborative efforts on mobile devices are addressed in the context of urban navigation. The conceptual design and implementation of the novel location- and orientation-based algorithm for mobile AR are presented in detail. The system is, however, multifaceted and capable of supporting peer-to-peer exchange of information in a pervasive fashion, usable in various contexts. The modalities of these interactions are explored and laid out in several scenarios, particularly in the context of user adoption. Two evaluation tasks took place: a preliminary evaluation examined certain aspects that influence user interaction while immersed in a virtual environment, whereas a second, summative evaluation compared the utility and certain usability aspects of the AR and VR interfaces.
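The location- and orientation-based AR overlay described above rests on a standard geometric step: compute the bearing from the device's GPS fix to a Point of Interest and compare it with the compass heading to place the annotation on screen. The following is a minimal sketch of that step; the function names and the simple linear screen mapping are illustrative assumptions, not the framework's actual implementation:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(heading_deg, poi_bearing_deg, fov_deg=60.0, width_px=640):
    """Horizontal pixel position for a POI annotation, or None if the POI
    lies outside the camera's horizontal field of view."""
    # shortest angular offset between gaze heading and POI bearing
    offset = (poi_bearing_deg - heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None
    return int(width_px / 2 + offset / (fov_deg / 2) * (width_px / 2))

# POI due east of the device; device facing east: annotation is centered.
b = bearing_deg(0.0, 0.0, 0.0, 0.001)   # approx. 90 degrees
assert screen_x(90.0, b) == 320
assert screen_x(0.0, b) is None          # facing north: POI out of view
```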
The effect of landmark visualization in mobile maps on brain activity during navigation: A virtual reality study
The frequent use of GPS-based navigation assistance has been found to negatively affect spatial learning. Displaying landmarks effectively while providing wayfinding instructions on such services could facilitate spatial learning, because landmarks help navigators to structure and learn an environment by serving as cognitive anchors. However, simply adding landmarks to mobile maps may tax additional cognitive resources and thus adversely affect cognitive load in mobile map users during navigation. To address this potential issue, we set up the present study to experimentally investigate how the number of landmarks (i.e., 3 vs. 5 vs. 7), displayed on a mobile map one at a time at intersections during turn-by-turn instructions, affects spatial learning, cognitive load, and visuospatial encoding during map consultation in a virtual urban environment. Spatial learning of the environment was measured using a landmark recognition test, a route direction test, and Judgements of Relative Directions (JRDs). Cognitive load and visuospatial encoding were assessed using electroencephalography (EEG) by analyzing power modulations in distinct frequency bands as well as peak amplitudes of event-related brain potentials (ERPs). Behavioral results demonstrate that landmark and route learning improve when the number of landmarks shown on a mobile map increases from three to five, but that there is no further benefit in spatial learning when depicting seven landmarks. EEG analyses show that relative theta power at fronto-central leads and P3 amplitudes at parieto-occipital leads increase in the seven-landmark condition compared to the three- and five-landmark conditions, likely indicating an increase in cognitive load in the seven-landmark condition. Visuospatial encoding was indicated by greater theta ERS and alpha ERD at occipital leads with a greater number of landmarks on mobile maps.
We conclude that the number of landmarks visualized when following a route can support spatial learning during map-assisted navigation, but with a potential boundary: visualizing landmarks on maps benefits users' spatial learning only when the number of landmarks shown does not exceed users' cognitive capacity. These results shed more light on the neuronal correlates underlying cognitive load and visuospatial encoding during spatial learning in map-assisted navigation. Our findings also contribute to the design of neuro-adaptive landmark visualization for mobile navigation aids that aim to adapt to users' cognitive load to optimize their spatial learning in real time.
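The EEG measures reported above rest on band-limited spectral power (e.g., theta, 4-8 Hz; alpha, 8-13 Hz). As a generic illustration of how band power is computed from a single-channel signal (this is not the study's actual analysis pipeline, which also involves ERPs and event-related (de)synchronization), a minimal FFT-based sketch:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of a 1-D signal in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 250                                   # typical EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                # one 4-second epoch
eeg = np.sin(2 * np.pi * 6 * t)            # synthetic 6 Hz theta oscillation

theta = band_power(eeg, fs, 4, 8)          # theta band power
alpha = band_power(eeg, fs, 8, 13)         # alpha band power
assert theta > alpha                       # power concentrates in theta
```

Real pipelines would epoch the data around map-consultation events, average across trials and electrodes, and typically use a tapered estimator such as Welch's method rather than a raw periodogram.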
Human spatial navigation in the digital era: Effects of landmark depiction on mobile maps on navigators’ spatial learning and brain activity during assisted navigation
Navigation was an essential survival skill for our ancestors and is still a fundamental activity in our everyday lives. To stay oriented and assist navigation, our ancestors had a long history of developing and employing physical maps that communicated an enormous amount of spatial and visual information about their surroundings. Today, in the digital era, we are increasingly turning to mobile navigation devices to ease daily navigation tasks, surrendering our spatial and navigational skills to the hand-held device. On the flip side, the conveniences of such devices lead us to pay less attention to our surroundings, make fewer spatial decisions, and remember less about the surroundings we have traversed. As navigational skills and spatial memory are related to adult neurogenesis, healthy aging, education, and survival, scientists and researchers from multidisciplinary fields have made calls to develop a new account of mobile navigation assistance to preserve human navigational abilities and spatial memory.
Landmarks have been advocated for special attention in developing cognitively supportive navigation systems, as landmarks are widely accepted as key features to support spatial navigation and spatial learning of an environment. Turn-by-turn direction instructions without reference to surrounding landmarks, such as those provided by most existing navigation systems, can be one of the reasons for navigators’ spatial memory deterioration during assisted navigation. Despite the benefit of landmarks in navigation and spatial learning, long-standing literature on cognitive psychology has pointed out that individuals have only a limited cognitive capacity to process presented information for a task. When the learning items exceed learners’ capacity, the performance may reach a plateau or even drop. This leads to an unexamined yet important research question on how to visualize landmarks on a mobile map to optimize navigators’ cognitive resource exertion and thus optimize their spatial learning.
To investigate this question, I leveraged neuropsychological and hypothesis-driven approaches and investigated whether and how different numbers of landmarks depicted on a mobile map affected navigators' spatial learning, cognitive load, and visuospatial encoding. Specifically, I set up a navigation experiment in three virtual urban environments, in which participants were asked to follow a given route to a specific destination with the aid of a mobile map. Three different numbers of landmarks—3, 5, and 7—along the given route were selected based on the cognitive capacity literature and presented to 48 participants during map-assisted navigation. Their brain activity was recorded both during the phase of map consultation and during that of active locomotion. After navigation in each virtual city, their spatial knowledge of the traversed routes was assessed.
The statistical results revealed that spatial learning improved when a medium number of landmarks (i.e., five) was depicted on a mobile map compared to the lowest evaluated number (i.e., three), and that there was no further improvement when the highest number (i.e., seven) of landmarks was provided. The neural correlates interpreted to reflect cognitive load during map consultation increased when participants were processing seven landmarks depicted on a mobile map compared to the other two landmark conditions; by contrast, the neural correlates that indicated visuospatial encoding increased with a higher number of presented landmarks. In line with the cognitive load changes during map consultation, cognitive load during active locomotion also increased when participants were in the seven-landmark condition, compared to the other two landmark conditions.
This thesis provides an exemplary paradigm for investigating navigators' behavior and cognitive processing during map-assisted navigation and for utilizing neuropsychological approaches to solve cartographic design problems. The findings contribute to a better understanding of the effects of landmark depiction (3, 5, and 7 landmarks) on navigators' spatial learning outcomes and their cognitive processing (cognitive load and visuospatial encoding) during map-assisted navigation. From these insights, I conclude with two main takeaways for audiences including navigation researchers and navigation system designers. First, the thesis suggests a boundary effect on the proposed benefits of landmarks in spatial learning: providing landmarks on maps benefits users' spatial learning only to the extent that the number of landmarks does not increase cognitive load. A medium number (i.e., five) of landmarks seems to be the best option in the current experiment, as five landmarks facilitated spatial learning without taxing additional cognitive resources. The second takeaway is that increased cognitive load during map use might also spill over into the locomotion phase through the environment; thus, the locomotion phase should also be carefully considered when designing a mobile map to support navigation and environmental learning.
Automatic Speed Control For Navigation in 3D Virtual Environment
As technology progresses, the scale and complexity of 3D virtual environments can also increase proportionally. This leads to multiscale virtual environments: environments that contain groups of objects with extremely unequal levels of scale. Ideally, the user should be able to navigate such environments efficiently and robustly. Yet most previous methods for automatically controlling the speed of navigation do not generalize well to environments with widely varying scales. I present an improved method to automatically control the navigation speed of the user in 3D virtual environments. The main benefit of my approach is that it automatically adapts the navigation speed in multiscale environments in a manner that enables efficient navigation with maximum freedom, while still avoiding collisions. The results of a usability test show a significant reduction in the completion time for a multiscale navigation task.
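The abstract does not spell out the thesis's speed-control formula, but a common baseline for multiscale navigation is to scale flying speed with the distance to the nearest visible geometry, clamped to sensible bounds, so the user moves fast in open space and slows near objects. A minimal sketch of that baseline (the gain and clamp values are illustrative, not the thesis's parameters):

```python
def navigation_speed(dist_to_nearest, k=0.5, v_min=0.01, v_max=100.0):
    """Depth-based speed control: velocity proportional to the distance
    to the nearest scene surface, clamped so the user neither stalls in
    open space nor overshoots and collides near geometry."""
    return max(v_min, min(k * dist_to_nearest, v_max))

assert navigation_speed(10.0) == 5.0      # mid-range: proportional speed
assert navigation_speed(0.0) == 0.01      # touching geometry: floor speed
assert navigation_speed(1000.0) == 100.0  # open space: capped speed
```

The improvement claimed in the abstract presumably lies in how the controlling distance is estimated and smoothed across frames; this sketch only shows the proportional-clamped core that such methods refine.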
An Experimental Mixed Methods Pilot Study for U.S. Army Infantry Soldiers - Higher Levels of Combined Immersion and Embodiment in Simulation-Based Training Capabilities Show Positive Effects on Emotional Impact and Relationships to Learning Outcomes
This pilot study examines the impact of combined immersion and embodiment on learning and emotional outcomes. The results are intended to better enable U.S. Army senior leaders to decide whether dismounted infantry Soldiers would benefit from a more immersive simulation-based training capability. The experiment's between-subject design included a sample of 15 participants randomly assigned to one of three system configurations representing different levels of combined immersion and embodiment. The control group used a typical desktop; the two experimental groups used a typical configuration of a virtual reality (VR) headset and a novel configuration using VR supported by an omnidirectional treadmill (ODT) for full-body exploration and interaction. Unique among similar studies, this pilot study allows for an analysis of the Infinadeck ODT's impact on learning outcomes and of the value of pairing tasks by type with various levels of immersion. Each condition accessed the same realistically modeled geospatial virtual environment (VE), the UCF Virtual Arboretum, and completed the same pre- and post-VE-interaction measurement instruments. These tests included complicated and complex information: declarative information involved listing plants and plant communities native to central Florida (complicated tasks), while the situational-awareness measurement required participants to draw a sketch map (complex task). The Kruskal-Wallis non-parametric statistical test showed no difference between conditions on learning outcomes. The non-parametric Spearman correlation test showed many significant relationships between system configuration and emotional outcomes. Graphical representations of the data, combined with quantitative, qualitative, and correlational data, suggest that a larger sample size is required to increase the power needed to answer this research question.
This study found a strong trend indicating that learning outcomes are affected by task type, and significant correlations showing that emotions important for learning outcomes increased with combined immersion and embodiment.
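The two statistical tests named in the abstract are standard and available in SciPy. As an illustration with made-up numbers (not the study's data; the scores and group labels below are invented), the analyses could be run as follows:

```python
from scipy.stats import kruskal, spearmanr

# Hypothetical outcome scores for the three conditions
# (desktop control, VR headset, VR + ODT); invented for illustration.
desktop = [3, 4, 2, 5, 3]
vr = [4, 5, 4, 6, 5]
vr_odt = [5, 6, 5, 7, 6]

# Kruskal-Wallis: non-parametric omnibus test across the three conditions.
h_stat, p_kw = kruskal(desktop, vr, vr_odt)

# Spearman: rank correlation between immersion level (0/1/2) and score.
immersion = [0] * 5 + [1] * 5 + [2] * 5
scores = desktop + vr + vr_odt
rho, p_sp = spearmanr(immersion, scores)

assert 0.0 <= p_kw <= 1.0
assert rho > 0            # scores rise with immersion in this toy data
```

With n = 5 per group, as in the pilot study's conditions, such tests have little power, which is consistent with the abstract's call for a larger sample.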