3,960 research outputs found

    Evaluating distributed cognitive resources for wayfinding in a desktop virtual environment.

    As 3D interfaces, and in particular virtual environments, become increasingly realistic, there is a need to investigate the location and configuration of information resources, as distributed in the human-computer system, to support any required activities. It is important for the designer of 3D interfaces to be aware of information resource availability and distribution when considering issues such as cognitive load on the user. This paper explores how a model of distributed resources can support the design of alternative aids to virtual environment wayfinding with varying levels of cognitive load. The wayfinding aids have been implemented and evaluated in a desktop virtual environment.
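
    To make the idea of resource distribution concrete, the sketch below tabulates a few hypothetical wayfinding aids by where the relevant information resides and the cognitive load this implies; the aids and loadings are illustrative and are not taken from the paper.

        # Sketch: a toy resource-distribution table for candidate wayfinding
        # aids, in the spirit of a distributed-resources analysis. The aids
        # and load ratings are invented, not the paper's actual model.

        AIDS = {
            "none":         {"resource": "user memory", "cognitive_load": "high"},
            "local map":    {"resource": "display",     "cognitive_load": "medium"},
            "guided trail": {"resource": "environment", "cognitive_load": "low"},
        }

        def rank_by_load(aids):
            """Order aids from lowest to highest load on the user."""
            order = {"low": 0, "medium": 1, "high": 2}
            return sorted(aids, key=lambda a: order[aids[a]["cognitive_load"]])

        print(rank_by_load(AIDS))  # ['guided trail', 'local map', 'none']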

    For efficient navigational search, humans require full physical movement but not a rich visual scene

    During navigation, humans combine visual information from their surroundings with body-based information from the translational and rotational components of movement. Theories of navigation focus on the role of visual and rotational body-based information, even though experimental evidence shows they are not sufficient for complex spatial tasks. To investigate the contribution of all three sources of information, we asked participants to search a computer-generated “virtual” room for targets. Participants were provided with either visual information alone, or visual information supplemented with body-based information for all movement (walk group) or for rotational movement only (rotate group). The walk group performed the task with near-perfect efficiency, irrespective of whether a rich or impoverished visual scene was provided. The visual-only and rotate groups were significantly less efficient, and frequently searched parts of the room at least twice. This suggests that full physical movement plays a critical role in navigational search, but that only moderate visual detail is required.
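
    The notion of search efficiency used here can be illustrated with a small sketch that scores a trial by counting revisits; the logging format and scoring rule are assumptions, not the authors' procedure.

        # Sketch: score a navigational search trial by counting revisits.
        # Assumes a trial is logged as the ordered sequence of target IDs
        # the participant inspected (hypothetical format).

        def search_efficiency(visits, n_targets):
            """Return (perfect, n_revisits): a trial is perfectly efficient
            when every target is inspected exactly once."""
            revisits = len(visits) - len(set(visits))
            perfect = revisits == 0 and len(set(visits)) == n_targets
            return perfect, revisits

        # Example: the participant re-inspects target 2 -> one revisit.
        print(search_efficiency([0, 1, 2, 3, 2, 4], n_targets=5))  # (False, 1)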

    Movement around real and virtual cluttered environments

    Two experiments investigated participants’ ability to search for targets in a cluttered small-scale space. The first experiment was conducted in the real world with two field-of-view conditions (full vs. restricted), and participants found the task trivial to perform in both. The second experiment used the same search task but was conducted in a desktop virtual environment (VE), and investigated two movement interfaces and two visual scene conditions. Participants restricted to forward-only movement performed the search task more quickly and more efficiently (visiting fewer targets) than those who used an interface that allowed more flexible movement (forward, backward, left, right, and diagonal). Also, participants using a high-fidelity visual scene performed the task significantly more quickly and more efficiently than those who used a low-fidelity scene. The performance differences between the conditions decreased with practice, and the performance of the best VE group approached that of the real-world participants. These results indicate the importance of using high-fidelity scenes in VEs, and suggest that a simple control system is sufficient for maintaining one's spatial orientation while searching.

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (turning physically but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles, even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.
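
    The claim that both translational and rotational body-based information are needed to update one's position is, in effect, a claim about path integration. A minimal sketch of such an update rule, in notation of my own choosing rather than the paper's:

        import math

        # Sketch of path integration: a navigator's position estimate is
        # updated by rotating each translation step into the current heading.
        # Illustrative only; the paper reports behavior, not this algorithm.

        def integrate_path(steps):
            """steps: list of (turn_radians, forward_distance) body-based cues.
            Returns the estimated (x, y) position after dead reckoning."""
            x = y = heading = 0.0
            for turn, dist in steps:
                heading += turn                 # rotational component
                x += dist * math.cos(heading)   # translational component
                y += dist * math.sin(heading)
            return x, y

        # Walking a square (four 1 m legs, 90-degree turns) returns home.
        square = [(0.0, 1.0)] + [(math.pi / 2, 1.0)] * 3
        print(integrate_path(square))  # approximately (0.0, 0.0)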

    The Effects of Finger-Walking in Place (FWIP) on Spatial Knowledge Acquisition in Virtual Environments

    Spatial knowledge, necessary for efficient navigation, comprises route knowledge (memory of landmarks along a route) and survey knowledge (an overall, map-like representation). Virtual environments (VEs) have been suggested as a powerful tool for understanding some issues associated with human navigation, such as spatial knowledge acquisition. The Finger-Walking-in-Place (FWIP) interaction technique is a locomotion technique for navigation tasks in immersive virtual environments (IVEs). FWIP was designed to map a human's embodied navigation ability, overlearned through natural walking, onto a finger-based interaction technique. Its implementation on Lemur and iPhone/iPod Touch devices was evaluated in our previous studies. In this paper, we present a comparative study of the joystick's flying technique versus FWIP. Our experimental results show that FWIP yields better performance than joystick-based flying for route knowledge acquisition in our maze navigation tasks.
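
    One plausible way to map alternating finger drags to virtual translation, in the spirit of FWIP, is sketched below; the event format, jitter threshold, and gain are assumptions, not details of the published Lemur or iPhone/iPod Touch implementations.

        # Sketch of a finger-walking mapping: alternating touch drags are
        # treated as "steps" and converted to forward travel in the VE.
        # Hypothetical event format and gain value.

        STEP_GAIN = 0.5  # metres of virtual travel per detected finger step

        def steps_to_translation(drag_events):
            """drag_events: list of (finger_id, drag_length_px).
            A step is counted each time the active finger alternates,
            mimicking the left/right alternation of natural walking."""
            distance, last_finger = 0.0, None
            for finger, length in drag_events:
                if finger != last_finger and length > 20:  # ignore jitter
                    distance += STEP_GAIN
                    last_finger = finger
            return distance

        print(steps_to_translation([(1, 40), (2, 35), (1, 38)]))  # 1.5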

    A hybrid model for capturing implicit spatial knowledge

    This paper proposes a machine learning-based approach for capturing rules embedded in users’ movement paths while navigating in Virtual Environments (VEs). It is argued that this methodology, and the set of navigational rules it provides, should be regarded as a starting point for designing adaptive VEs able to provide navigation support. This is a major contribution of this work, given that, to date, adaptivity in navigable VEs has been delivered primarily through the manipulation of navigational cues, with little reference to a user model of navigation.
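
    As a rough illustration of extracting rules from movement paths, the sketch below fits a small decision tree to invented path features; the features, labels, and learner are placeholders, not the paper's hybrid model.

        # Sketch: learn interpretable "rules" from movement-path features.
        # Feature names, labels, and data are invented for illustration.

        from sklearn.tree import DecisionTreeClassifier, export_text

        # Each row: [mean_speed, n_pauses, n_heading_reversals]
        paths = [[1.2, 0, 1], [0.4, 5, 6], [1.1, 1, 0], [0.3, 7, 4]]
        labels = ["oriented", "lost", "oriented", "lost"]

        tree = DecisionTreeClassifier(max_depth=2).fit(paths, labels)
        print(export_text(
            tree,
            feature_names=["mean_speed", "n_pauses", "n_heading_reversals"]))
        # The extracted rules could then drive adaptive navigation support.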

    Evaluation of Multi-Level Cognitive Maps for Supporting Between-Floor Spatial Behavior in Complex Indoor Environments

    People often become disoriented when navigating in complex, multi-level buildings. To efficiently find destinations located on different floors, navigators must refer to a globally coherent mental representation of the multi-level environment, termed a multi-level cognitive map. However, there is a surprising dearth of research into underlying theories of why integrating multi-level spatial knowledge into a multi-level cognitive map is so challenging and error-prone for humans. This overarching problem is the core motivation of this dissertation. We address it with a two-pronged approach combining basic and applied research questions. Of theoretical interest, we investigate how multi-level built environments are learned and structured in memory. The concept of multi-level cognitive maps and a framework for their development are provided. We then conducted a set of empirical experiments to evaluate the effects of several environmental factors on users’ development of multi-level cognitive maps. The findings of these studies provide important design guidelines for architects and help to address the question of why people get lost in buildings. Related to application, we investigate how to design user-friendly visualization interfaces that augment users’ ability to form multi-level cognitive maps. An important finding of this dissertation is that increasing visual access with an X-ray-like visualization interface is effective for overcoming the limited visual access of built environments and assists the development of multi-level cognitive maps. These findings provide important human-computer interaction (HCI) guidelines for visualization techniques in future indoor navigation systems. In sum, this dissertation adopts an interdisciplinary approach, combining theories from spatial cognition, information visualization, and HCI to address a long-standing and ubiquitous problem faced by anyone who navigates indoors: why people get lost inside multi-level buildings. The results provide both theoretical and applied knowledge, and contribute to the growing field of real-time indoor navigation systems.
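
    One way to picture an integrated multi-level representation is as a graph whose vertical edges (stairs, elevators) stitch per-floor maps together; the sketch below uses an invented layout and is only loosely analogous to the dissertation's cognitive-map framework.

        import heapq

        # Sketch: rooms as (floor, name) nodes; a between-floor edge makes
        # the two per-floor maps one globally coherent structure.
        EDGES = {  # node: [(neighbor, cost_m)]
            ("F1", "lobby"):  [(("F1", "stairs"), 10)],
            ("F1", "stairs"): [(("F2", "stairs"), 4)],   # vertical link
            ("F2", "stairs"): [(("F2", "office"), 12)],
            ("F2", "office"): [],
        }

        def shortest(start, goal):
            """Dijkstra over the multi-level graph; returns path cost."""
            pq, seen = [(0, start)], set()
            while pq:
                cost, node = heapq.heappop(pq)
                if node == goal:
                    return cost
                if node in seen:
                    continue
                seen.add(node)
                for nxt, c in EDGES.get(node, []):
                    heapq.heappush(pq, (cost + c, nxt))
            return None

        print(shortest(("F1", "lobby"), ("F2", "office")))  # 26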

    Three levels of metric for evaluating wayfinding

    Three levels of virtual environment (VE) metric are proposed, based on: (1) users’ task performance (time taken, distance traveled, and number of errors made), (2) physical behavior (locomotion, looking around, and time and error classification), and (3) decision-making (i.e., cognitive) rationale (think-aloud, interview, and questionnaire data). Examples of the use of these metrics are drawn from a detailed review of research into VE wayfinding. A case study from research into the fidelity required for efficient VE wayfinding is presented, showing the unsuitability in some circumstances of common task-performance metrics such as time and distance, and the benefits to be gained from fine-grained analyses of users’ behavior. Taken as a whole, the article highlights the range of techniques that have been successfully used to evaluate wayfinding and explains in detail how some of them may be applied.
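
    The level-1 (task performance) metrics can be computed directly from a logged trajectory, as the sketch below illustrates; the logging format is an assumption, not the article's.

        import math

        # Sketch of the level-1 metrics: time taken, distance traveled,
        # and error count, computed from (timestamp_s, x, y) samples.

        def task_performance(samples, n_errors):
            time_taken = samples[-1][0] - samples[0][0]
            distance = sum(math.dist(a[1:], b[1:])
                           for a, b in zip(samples, samples[1:]))
            return {"time_s": time_taken, "distance_m": distance,
                    "errors": n_errors}

        log = [(0.0, 0, 0), (1.0, 1, 0), (2.5, 1, 2)]
        print(task_performance(log, n_errors=1))
        # Levels 2 and 3 (behavior and rationale) need richer data:
        # head-orientation logs, think-aloud transcripts, and so on.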