    Perspective Switching in Virtual Environments

    When exploring new environments, people regularly alternate among many sources of spatial information, including direct visual input, navigation aids such as maps and mobile devices, and verbal route descriptions. These spatial representations typically depict the environment from one of two perspectives: a first-person, embedded route perspective or a top-down, bird's-eye survey perspective. Visual spatial cognition research has explored learning within each of these perspectives independently, but little work has examined how on-line visual processing of combined perspectives affects cognition, so the cognitive costs of using different navigation tools to learn large-scale environments remain poorly understood. This dissertation addresses these questions through two experiments, each guiding participants along a simple path through a small virtual town presented on a desktop computer display. By timing participants' movement through each environment and their responses to either externally controlled or participant-controlled perspective switches, the experiments measure the cognitive load of visually processing dynamic perspectives during navigation. These on-line processing measures are complemented by tests of visual recognition and recall memory, which reveal how switching perspectives affects the accuracy of the resulting spatial mental model. The results indicate that the cognitive load associated with changing perspectives depends primarily on the quantity of visual information the change introduces; the transformation itself is not particularly disorienting after the first exposure to the environment. Furthermore, although forced perspective switches do not significantly affect spatial memory accuracy relative to viewing the environment from a consistent perspective, navigator-controlled switching yields significantly more accurate spatial memory, suggesting that navigation aids which allow perspective control may support spatial learning better than fixed-perspective interfaces. The findings also corroborate previous research showing that route-perspective navigation generally yields more accurate spatial memory than survey-perspective learning, particularly after extensive experience in the environment. Overall, the findings reveal new aspects of how perspective affects spatial cognition, with implications for spatial learning and the design of navigation aids.
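    The dissertation does not publish analysis code, but as a rough sketch of the kind of on-line processing measure it describes, the fractional slowdown in movement around each perspective switch could be computed from a navigation log as below; the log format and the 2-second window are assumptions, not details from the dissertation.

    def switch_cost(samples, switch_times, window=2.0):
        """Mean fractional slowdown in the `window` seconds after each switch.

        samples: list of (t, x, y) position samples, sorted by t.
        switch_times: times at which the display perspective changed.
        """
        def mean_speed(t0, t1):
            pts = [(t, x, y) for t, x, y in samples if t0 <= t <= t1]
            if len(pts) < 2:
                return 0.0
            dist = sum(
                ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                for (_, x0, y0), (_, x1, y1) in zip(pts, pts[1:])
            )
            dt = pts[-1][0] - pts[0][0]
            return dist / dt if dt else 0.0

        costs = []
        for ts in switch_times:
            before = mean_speed(ts - window, ts)
            after = mean_speed(ts, ts + window)
            if before > 0:
                costs.append((before - after) / before)  # fractional slowdown
        return sum(costs) / len(costs) if costs else 0.0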

    Navigating large-scale virtual environments: what differences occur between helmet-mounted and desk-top displays?

    Participants used a helmet-mounted display (HMD) and a desk-top (monitor) display to learn the layouts of two large-scale virtual environments (VEs) through repeated, direct navigational experience. Both VEs were "virtual buildings" containing more than seventy rooms. Participants using the HMD navigated the buildings significantly more quickly and developed a significantly more accurate sense of relative straight-line distance. There was no significant difference between the two types of display in terms of the distance that participants traveled or the mean accuracy of their direction estimates. Behavioral analyses showed that participants took advantage of the natural, head-tracked interface provided by the HMD in ways that included "looking around" more often while traveling through the VEs, and spending less time stationary in the VEs while choosing a direction in which to travel.
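    As an illustration only, the "looking around" behaviour could be quantified from head-tracker logs as degrees of yaw change per metre travelled; the log format below is an assumption, since the paper reports the behaviour but not its exact metric.

    import math

    def yaw_per_metre(log):
        """Head-yaw change (degrees) per metre travelled.

        log: list of (x, y, yaw_degrees) head-tracker samples in temporal order.
        """
        turn, dist = 0.0, 0.0
        for (x0, y0, h0), (x1, y1, h1) in zip(log, log[1:]):
            d = abs(h1 - h0) % 360.0
            turn += min(d, 360.0 - d)        # shortest angular change
            dist += math.hypot(x1 - x0, y1 - y0)
        return turn / dist if dist else 0.0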

    Navigating large-scale "desk-top" virtual buildings: effects of orientation aids and familiarity

    Two experiments investigated components of participants' spatial knowledge when they navigated large-scale "virtual buildings" using "desk-top" (i.e., nonimmersive) virtual environments (VEs). Experiment 1 showed that participants could estimate directions with reasonable accuracy when they traveled along paths that contained one or two turns (changes of direction), but their estimates were significantly less accurate when the paths contained three turns. In Experiment 2, participants repeatedly navigated two more complex virtual buildings, one with and the other without a compass. The accuracy of participants' route-finding and their direction and relative straight-line distance estimates improved with experience, but there were no significant differences between the two compass conditions. However, participants did develop significantly more accurate spatial knowledge as they became more familiar with navigating VEs in general.
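    A minimal sketch of how such direction estimates are typically scored: the error is the shortest angular distance between the estimated and true bearings, so it never exceeds 180 degrees (the paper's exact scoring procedure is not given here).

    def direction_error(estimated_deg, true_deg):
        """Shortest angular distance between two bearings, in degrees."""
        d = abs(estimated_deg - true_deg) % 360.0
        return min(d, 360.0 - d)

    # direction_error(350, 10) == 20.0, not 340.0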

    Control of virtual environments for young people with learning difficulties

    Purpose: The objective of this research is to identify the requirements for the selection or development of usable virtual environment (VE) interface devices for young people with learning disabilities. Method: A user-centred design methodology was employed to produce a design specification for usable VE interface devices. Details of the users' cognitive, physical and perceptual abilities were obtained through observation and normative assessment tests, and a review of computer interface technology, including virtual reality and assistive devices, was conducted. Conclusions: As no devices were identified that met all the requirements of the design specification, it was concluded that there is a need for the design and development of new concepts. Future research will involve concept and prototype development and user-based evaluation of the prototypes.

    For efficient navigational search, humans require full physical movement but not a rich visual scene

    During navigation, humans combine visual information from their surroundings with body-based information from the translational and rotational components of movement. Theories of navigation focus on the role of visual and rotational body-based information, even though experimental evidence shows these are not sufficient for complex spatial tasks. To investigate the contribution of all three sources of information, we asked participants to search a computer-generated "virtual" room for targets. Participants were provided with either visual information alone, or visual information supplemented with body-based information for all movement (walk group) or for rotational movement only (rotate group). The walk group performed the task with near-perfect efficiency, irrespective of whether a rich or impoverished visual scene was provided. The visual-only and rotate groups were significantly less efficient, and frequently searched parts of the room at least twice. This suggests that full physical movement plays a critical role in navigational search, but that only moderate visual detail is required.
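    One plausible way to express the search efficiency reported above is the ratio of distinct locations checked to total checks, so that 1.0 means no part of the room was searched twice; this is an illustrative metric, and the paper's exact scoring may differ.

    def search_efficiency(checked_locations):
        """Ratio of distinct locations checked to total checks (1.0 = no revisits).

        checked_locations: sequence of location ids in the order visited.
        """
        if not checked_locations:
            return 0.0
        return len(set(checked_locations)) / len(checked_locations)

    # Perfectly efficient: search_efficiency(["a", "b", "c"]) == 1.0
    # One revisit:         search_efficiency(["a", "b", "a", "c"]) == 0.75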

    Using Wii technology to explore real spaces via virtual environments for people who are blind

    Purpose: Virtual environments (VEs) that represent real spaces (RSs) give people who are blind the opportunity to build a cognitive map in advance, which they can then use on arriving at the RS. Design: In this research study, Nintendo Wii-based technology was used for exploring VEs via the Wiici application. The Wiimote allows the user to interact with VEs by simulating walking and scanning the space. Findings: Through haptic and auditory feedback, users learned to explore new spaces. We examined the participants' abilities to explore new simple and complex places, construct a cognitive map, and perform orientation tasks in the RS. Originality: To our knowledge, this is the first virtual environment for people who are blind that allows participants to scan the environment and thereby construct map-model spatial representations.

    Effects of spatial ability on multi-robot control tasks

    Working with large teams of robots is a complex and demanding task for any operator, and individual differences in spatial ability could significantly affect performance. In the present study, we examine data from two earlier experiments to investigate the effects of perspective-taking ability on performance in an urban search and rescue (USAR) task using a realistic simulation and alternate displays. We evaluated the participants' spatial ability using a standard measure of spatial orientation and examined performance differences in the accuracy and speed of locating victims, as well as perceived workload. Our findings show that operators with higher spatial ability experienced less workload and marked victims more precisely. An interaction was found for the experimental image-queue display, with which participants of low spatial ability marked victims significantly more accurately than they did with the traditional streaming-video display.
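    As an illustrative sketch only (not the study's actual analysis), a simple way to examine such individual differences is a median split on the spatial-orientation score, comparing mean marking accuracy between high- and low-ability groups.

    from statistics import mean, median

    def compare_by_spatial_ability(scores, accuracies):
        """Mean accuracy for operators above vs. below the median ability score.

        scores: spatial-orientation score per operator.
        accuracies: victim-marking accuracy per operator, same order.
        """
        cut = median(scores)
        high = [a for s, a in zip(scores, accuracies) if s >= cut]
        low = [a for s, a in zip(scores, accuracies) if s < cut]
        return (mean(high) if high else None,
                mean(low) if low else None)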

    Electrophysiological correlates of high-level perception during spatial navigation

    We studied the electrophysiological basis of object recognition by recording scalp electroencephalograms while participants played a virtual-reality taxi driver game. Participants searched for passengers and stores during virtual navigation in simulated towns. We compared oscillatory brain activity in response to store views that were targets or nontargets (during store search) or neutral (during passenger search). Even though store category was solely defined by task context (rather than by sensory cues), frontal ...
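    A rough sketch of the kind of oscillatory comparison described above: band-limited power for two trial types, computed with Welch's method. The single-channel setup, band edges, and epoch format are assumptions, not details from the paper.

    import numpy as np
    from scipy.signal import welch

    def band_power(epochs, fs, lo=4.0, hi=8.0):
        """Mean band-limited power per epoch (default: 4-8 Hz theta band).

        epochs: array of shape (n_trials, n_samples), single-channel EEG.
        fs: sampling rate in Hz.
        """
        epochs = np.asarray(epochs)
        freqs, psd = welch(epochs, fs=fs, nperseg=min(256, epochs.shape[-1]))
        band = (freqs >= lo) & (freqs <= hi)
        return psd[:, band].mean(axis=1)    # one power value per trial

    # e.g. compare band_power(target_epochs, 256).mean()
    #      with    band_power(nontarget_epochs, 256).mean()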

    A Content-Analysis Approach for Exploring Usability Problems in a Collaborative Virtual Environment

    As Virtual Reality (VR) products become more widely available in the consumer market, improving the usability of these devices and environments is crucial. In this paper, we introduce a framework for the usability evaluation of collaborative 3D virtual environments, based on a large-scale usability study of a mixed-modality collaborative VR system. We first review previous literature on important usability issues in collaborative 3D virtual environments, supplemented with our own research, in which we conducted 122 interviews after participants solved a collaborative virtual reality task. Then, building on the literature review and our results, we extend previous usability frameworks. We identified twelve distinct usability problems and, based on their causes, grouped them into three main categories: VR environment-specific, device interaction-specific, and task-specific problems. The framework can be used to guide the usability evaluation of collaborative VR environments.
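    A minimal sketch of how such a coding scheme might be represented when tagging interview excerpts; the three category names follow the paper, but the example problem labels are placeholders, not the paper's twelve problems.

    from enum import Enum

    class Category(Enum):
        VR_ENVIRONMENT = "VR environment-specific"
        DEVICE_INTERACTION = "device interaction-specific"
        TASK = "task-specific"

    # Placeholder problem labels, for illustration only:
    CODING_SCHEME = {
        "avatar occludes shared object": Category.VR_ENVIRONMENT,
        "controller button overload": Category.DEVICE_INTERACTION,
        "unclear task goal": Category.TASK,
    }

    def tally(tagged_problems):
        """Count tagged interview excerpts per top-level category."""
        counts = {c: 0 for c in Category}
        for problem in tagged_problems:
            counts[CODING_SCHEME[problem]] += 1
        return counts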

    Reference Resolution in Multi-modal Interaction: Position paper

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the need for more research on reference resolution in multimodal contexts. In multimodal interaction, the human conversational partner can use more than one modality to convey a message to an environment in which a computer detects and interprets signals from different modalities. We show some naturally arising problems and how they are treated in different contexts. No generally applicable solutions are given.
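    A toy illustration of the reference-resolution problem raised above, combining a spoken noun with a pointing direction to pick a referent; the scoring rule is a deliberately simple assumption, not the authors' model.

    import math

    def resolve(noun, origin, direction, candidates):
        """Pick the scene object best matching a spoken noun plus a pointing ray.

        origin: (x, y) of the pointing hand; direction: ray angle in radians.
        candidates: list of (name, object_type, (x, y)) scene objects.
        """
        best, best_score = None, float("-inf")
        for name, obj_type, (x, y) in candidates:
            angle_to = math.atan2(y - origin[1], x - origin[0])
            diff = abs((angle_to - direction + math.pi) % (2 * math.pi) - math.pi)
            score = (obj_type == noun) - diff / math.pi  # type match minus angular cost
            if score > best_score:
                best, best_score = name, score
        return best

    # resolve("lamp", (0, 0), 0.0,
    #         [("lamp1", "lamp", (5.0, 0.5)), ("chair1", "chair", (5.0, 0.0))])
    # -> "lamp1": the type match outweighs the small pointing offset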