    Movement around real and virtual cluttered environments

    Two experiments investigated participants’ ability to search for targets in a cluttered small-scale space. The first experiment was conducted in the real world with two field-of-view conditions (full vs. restricted), and participants found the task trivial to perform in both. The second experiment used the same search task but was conducted in a desktop virtual environment (VE), and investigated two movement interfaces and two visual scene conditions. Participants restricted to forward-only movement performed the search task more quickly and more efficiently (visiting fewer targets) than those who used an interface that allowed more flexible movement (forward, backward, left, right, and diagonal). Participants who used a high-fidelity visual scene also performed the task significantly more quickly and efficiently than those who used a low-fidelity scene. The performance differences between the conditions decreased with practice, and the performance of the best VE group approached that of the real-world participants. These results indicate the importance of using high-fidelity scenes in VEs, and suggest that a simple control system is sufficient for maintaining one’s spatial orientation while searching.

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed fewer than 50% of trials perfectly if they used a tethered HMD (turning physically, but translating by pressing a button) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one’s position during navigation, and participants who walked tended to avoid obstacles, even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.
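
    The position-updating mechanism at issue here is path integration: accumulating rotational and translational self-motion signals into a running estimate of where one is. A minimal sketch in Python, illustrative only; the function name and the decomposition into one rotation plus one translation per step are assumptions, not the study's implementation:

        import math

        def integrate_step(x, y, heading, rotation, translation):
            """One dead-reckoning update: rotate first, then translate
            along the new heading (rotation in radians, translation in metres)."""
            heading = (heading + rotation) % (2 * math.pi)
            return (x + translation * math.cos(heading),
                    y + translation * math.sin(heading),
                    heading)

        # A walker senses both terms through the body; a desktop user senses
        # neither, and must infer them from the visual scene alone.
        pose = (0.0, 0.0, 0.0)
        for rotation, translation in [(0.0, 1.0), (math.pi / 2, 1.0), (math.pi / 2, 1.0)]:
            pose = integrate_step(*pose, rotation, translation)
        print(pose)  # three sides of a unit square: x is back at 0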

    Movement in cluttered virtual environments

    Imagine walking around a cluttered room but then having little idea of where you have traveled. This frequently happens when people move around small virtual environments (VEs), searching for targets. In three experiments, participants searched small-scale VEs using different movement interfaces, collision response algorithms, and fields of view. Participants’ searches were most efficient, in terms of distance traveled, time taken, and path followed, when the simplest form of movement (view direction) was used in conjunction with a response algorithm that guided ("slipped") them around obstacles when collisions occurred. Unexpectedly, and in both immersive and desktop VEs, participants often had great difficulty finding the targets, despite the fact that they could see the whole VE by standing in one place and turning around. Thus, the trivial real-world task used in the present study highlights a basic problem with current VE systems.
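
    A "slip" response of this kind is typically implemented by removing the component of the attempted movement that points into the obstacle, so the viewpoint slides along the surface instead of stopping dead. A minimal 2D sketch (hypothetical names; the paper does not publish its algorithm):

        def slip(move, normal):
            """Slide an attempted movement vector along an obstacle surface
            by removing its component along the (unit) surface normal."""
            dot = move[0] * normal[0] + move[1] * normal[1]
            if dot >= 0:  # moving away from (or along) the surface: no collision
                return move
            # Subtract the into-surface component, keeping the tangential part.
            return (move[0] - dot * normal[0], move[1] - dot * normal[1])

        # A wall directly ahead has a normal pointing back at the viewer (+x):
        print(slip((-1.0, 0.0), (1.0, 0.0)))  # head-on: (0.0, 0.0), full stop
        print(slip((-1.0, 1.0), (1.0, 0.0)))  # at 45 degrees: (0.0, 1.0), slides along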

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624).
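
    The attractor/repeller interaction described above can be summarized as a damped angular dynamics over heading. A well-known behavioral formulation of this kind, due to Fajen and Warren and of the sort such models simulate, is, schematically (this is illustrative, not the model's own neural circuitry):

        \ddot{\phi} \;=\; -\,b\,\dot{\phi}
          \;-\; k_g\,(\phi - \psi_g)\,\bigl(e^{-c_1 d_g} + c_2\bigr)
          \;+\; \sum_{i} k_o\,(\phi - \psi_i)\, e^{-c_3\,|\phi - \psi_i|}\, e^{-c_4\, d_i}

    Here \phi is the current heading, \psi_g and d_g are the direction and distance of the goal, and \psi_i, d_i those of obstacle i. The goal term pulls heading toward the goal, more strongly as the goal gets nearer; each obstacle term pushes heading away, with influence decaying exponentially in both angular separation and distance.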

    Changes in navigational behaviour produced by a wide field of view and a high fidelity visual scene

    The difficulties people frequently have navigating in virtual environments (VEs) are well known. Usually these difficulties are quantified in terms of performance (e.g., time taken or number of errors made in following a path), with these data used to compare navigation in VEs to equivalent real-world settings. However, an important cause of any performance differences is changes in people’s navigational behaviour. This paper reports a study that investigated the effect of visual scene fidelity and field of view (FOV) on participants’ behaviour in a navigational search task, to help identify the thresholds of fidelity that are required for efficient VE navigation. With a wide FOV (144 degrees), participants spent a significantly larger proportion of their time travelling through the VE, whereas participants who used a normal FOV (48 degrees) spent significantly longer standing in one place, planning where to travel. Also, participants who used a wide FOV and a high-fidelity scene came significantly closer to conducting the search "perfectly" (visiting each place once). In an earlier real-world study, participants completed 93% of their searches perfectly and planned where to travel while they moved. Thus, navigating a high-fidelity VE with a wide FOV increased the similarity between VE and real-world navigational behaviour, which has important implications both for VE design and for understanding human navigation. Detailed analysis of the errors that participants made during their non-perfect searches highlighted a dramatic difference between the two FOVs. With a narrow FOV, participants often travelled right past a target without it appearing on the display, whereas with the wide FOV, targets displayed towards the sides of participants’ overall FOV were often not searched, indicating that such a wide display places considerable demands on human visual attention.

    Three levels of metric for evaluating wayfinding

    Three levels of virtual environment (VE) metric are proposed, based on: (1) users’ task performance (time taken, distance traveled, and number of errors made), (2) physical behavior (locomotion, looking around, and time and error classification), and (3) decision-making (i.e., cognitive) rationale (think-aloud, interview, and questionnaire). Examples of the use of these metrics are drawn from a detailed review of research into VE wayfinding. A case study from research into the fidelity that is required for efficient VE wayfinding is presented, showing the unsuitability in some circumstances of common task-performance metrics such as time and distance, and the benefits to be gained from fine-grained analyses of users’ behavior. Taken as a whole, the article highlights the range of techniques that have been successfully used to evaluate wayfinding and explains in detail how some of these techniques may be applied.
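
    The three levels map naturally onto a layered per-trial record. A minimal sketch of such a structure in Python (field names are hypothetical, not taken from the article):

        from dataclasses import dataclass, field

        @dataclass
        class TaskPerformance:            # level 1: task performance
            time_taken_s: float
            distance_traveled_m: float
            error_count: int

        @dataclass
        class PhysicalBehavior:           # level 2: physical behavior
            time_locomoting_s: float
            time_looking_around_s: float
            error_classes: dict = field(default_factory=dict)

        @dataclass
        class DecisionRationale:          # level 3: decision-making rationale
            think_aloud_notes: str = ""
            interview_notes: str = ""

        @dataclass
        class WayfindingTrial:
            performance: TaskPerformance
            behavior: PhysicalBehavior
            rationale: DecisionRationale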