    Combining Path Integration and Remembered Landmarks When Navigating without Vision

    This study investigated the interaction between remembered-landmark and path-integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision rely only on path integration to judge their location, or whether remembered landmarks also influence their judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered-landmark and path-integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates than those made with path integration alone. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information: they can flexibly combine remembered landmarks with path integration cues while navigating without visual information.

    Funding: National Institutes of Health (U.S.) (Grants T32 HD007151, T32 EY07133, F32EY019622, EY02857, EY017835-01, EY015616-03); United States. Department of Education (Grant H133A011903)
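    The gated averaging reported here is often modeled as reliability-weighted cue combination. Below is a minimal Python sketch of that idea, assuming Gaussian cues and a simple congruency gate; the function names, weights, and threshold are ours, purely illustrative, not quantities from the study.

```python
def combine_cues(mu_landmark, var_landmark, mu_path, var_path, gate_threshold):
    """Reliability-weighted average of two location cues, gated by congruency.

    If the cue conflict exceeds the gate threshold the landmark is ignored
    and the estimate rests on path integration alone; otherwise the cues
    are combined with inverse-variance weights. All names and values are
    illustrative, not quantities fitted in the study.
    """
    conflict = abs(mu_landmark - mu_path)
    if conflict > gate_threshold:             # judged incongruent: gate shut
        return mu_path, var_path
    w = (1.0 / var_landmark) / (1.0 / var_landmark + 1.0 / var_path)
    mu = w * mu_landmark + (1.0 - w) * mu_path
    var = 1.0 / (1.0 / var_landmark + 1.0 / var_path)  # smaller than either cue's
    return mu, var

# Congruent cues (positions in metres) give a more precise estimate than
# path integration alone:
print(combine_cues(5.0, 0.4, 5.3, 0.9, gate_threshold=1.0))
```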

    A Single-Rate Context-Dependent Learning Process Underlies Rapid Adaptation to Familiar Object Dynamics

    Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
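    A single-rate, multiple-context state-space model of the kind described can be sketched in a few lines. The sketch below assumes a Gaussian generalization function over object orientation; the retention, learning-rate, and tuning parameters are illustrative choices, not the fitted values from the study.

```python
import numpy as np

def simulate_adaptation(orientations, contexts, retention=0.99,
                        learning_rate=0.3, tuning_deg=30.0):
    """Single-rate state-space model with context-specific states.

    Each visual object orientation (a 'context') has its own adaptive
    state. On every trial the error updates all states, weighted by a
    Gaussian generalization function of angular distance to the trial's
    context. All parameter values are illustrative, not fitted.
    """
    states = np.zeros(len(contexts))      # one adaptive state per context
    perturbation = 1.0                    # normalized object dynamics
    history = []
    for theta in orientations:            # trial sequence of contexts (deg)
        i = contexts.index(theta)
        error = perturbation - states[i]  # error experienced this trial
        dist = np.array([min(abs(c - theta), 360.0 - abs(c - theta))
                         for c in contexts])
        g = np.exp(-0.5 * (dist / tuning_deg) ** 2)  # generalization weights
        states = retention * states + learning_rate * error * g
        history.append(states.copy())
    return np.array(history)

# After adapting at 0 deg, continued training at 90 deg lets the 0-deg
# state slowly decay via retention < 1, as the model predicts.
h = simulate_adaptation([0] * 30 + [90] * 30, contexts=[0, 90, 180])
```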

    Dissociating visual and interoceptive rotation during path integration

    Path integration in darkness is classically thought to be accomplished using interoceptive information. Here we examine the contribution of visual input to the accumulation of rotational information during path integration in a return-to-origin task, in a fully immersive virtual environment. Nine paths that varied in turn angle and number of turns were used; path length was approximately constant across trials. While participants walked the outward legs of a path, a mismatch was introduced between actual rotation and the perceived rotation of a rich virtual environment, such that perceived rotation was either increased or decreased. The return leg of the path was performed without vision. The mismatch trials were interleaved with two control conditions: one in which vision matched interoceptive information exactly on the outward paths, and one without any visual input. Sixteen subjects (balanced across gender) were tested, and a mixed ANOVA showed a significant effect of the visual manipulation. Return directions were consistent with the direction of the visual manipulation, suggesting a strong visual component to this 'path integration' task.
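    One way to read this result is that the perceived total rotation is a weighted combination of the visually signaled and interoceptively signaled turns. A minimal sketch under that assumption follows; the weighting scheme and all values are illustrative, not estimates from the data.

```python
def predicted_return_heading(turns_deg, visual_gain, visual_weight):
    """Predict the homing-direction bias under a rotational visual gain.

    Interoception signals the physical turns; vision signals those turns
    scaled by `visual_gain`. The path integrator is assumed to combine the
    two rotation estimates with weight `visual_weight` on vision (an
    assumption of this sketch, not a result of the study).
    """
    physical = sum(turns_deg)            # total physical rotation
    visual = visual_gain * physical      # total visually signaled rotation
    perceived = visual_weight * visual + (1.0 - visual_weight) * physical
    # Bias relative to a purely interoceptive integrator, in degrees:
    return perceived - physical

# A 10% rotational gain on a two-turn path shifts the return direction
# toward the visual manipulation:
print(predicted_return_heading([90, 90], visual_gain=1.1, visual_weight=0.7))
```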

    Effects of path length, visual and interoceptive information on path integration

    A number of experiments have shown that path integration in darkness can be accomplished using interoceptive information. We examine the contribution of vision to the accumulation of translational information during path integration in a return-to-origin task, using a fully immersive, cue-rich virtual environment. Nine paths with varying lengths and numbers of turns were tested. While participants walked the outward legs of a path, a mismatch was introduced between actual translation and the perceived translation of the virtual environment, such that perceived translation was either increased or decreased. The return leg of the path was performed without vision. The mismatch trials were interleaved with two control conditions: one in which vision matched interoceptive information exactly on the outward paths, and one without any visual input. An ANOVA on data from sixteen subjects showed a significant effect of the visual manipulation. Return path lengths were consistent with the visual manipulation, suggesting a strong visual component to 'path integration' in this task. A separate effect of path length that depended on path type suggests that distance is underestimated on longer paths with more turns.
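    The analogous prediction for translation follows the same weighted-combination logic; again a sketch under assumed weights, not fitted values.

```python
def predicted_return_length(leg_lengths_m, visual_gain, visual_weight):
    """Predict the walked return distance under a translational visual gain.

    Vision signals the outbound distance scaled by `visual_gain`;
    interoception signals the physical distance. The weight on vision is
    an assumption of this sketch, and a straight out-and-back path is
    assumed so that perceived outbound distance sets the return distance.
    """
    physical = sum(leg_lengths_m)
    perceived = (visual_weight * visual_gain * physical
                 + (1.0 - visual_weight) * physical)
    return perceived

# If vision under-reports translation (gain < 1), the perceived outbound
# distance shrinks and the return leg is walked shorter:
print(predicted_return_length([4.0, 3.0], visual_gain=0.8, visual_weight=0.6))
```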

    Systematic distortions of perceptual stability investigated using virtual reality

    As observers walk through a 3-D environment with their gaze fixed on a static object, their retinal image of that object changes as if the object itself were rotating. We have investigated how well observers can judge whether an object is rotating when that rotation is linked to the observer's own movement. Subjects wore a head-mounted display and fixated a spherical textured object at a distance of approximately 1.5 m in an immersive virtual reality environment. Subjects walked from side to side (approximately ±1 m). On each trial, the object rotated about a vertical axis with a randomly assigned rotational gain factor within a range of ±1: a gain of +1 caused it to always face the observer; a gain of -1 caused an equal and opposite rotation; a gain of zero left the object static in world coordinates. In a forced-choice paradigm, subjects judged the sign of the rotational gain. We found significant biases in subjects' judgements when the target object was presented in isolation. These biases varied little with viewing distance, suggesting that they were caused by an underestimation of the distance walked. In a rich visual environment, subjects' judgements were more precise and biases were reduced. This was also true, in general, when we manipulated proprioceptive information by correlating the lateral translation of the target object with the observer's motion.
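    The rotational gain has a simple geometric reading: as the observer steps sideways, the line of sight to the fixated object sweeps through an angle, and the object is rotated by the gain times that sweep. A sketch of the geometry follows; the distances mirror those of the study, but the function itself is ours.

```python
import math

def object_rotation_deg(lateral_offset_m, viewing_distance_m, gain):
    """Rotation applied to the fixated object at a given observer position.

    As the observer steps sideways, the line of sight to an object that
    started straight ahead sweeps through atan(offset / distance). The
    object is rotated by `gain` times that sweep: +1 keeps the same face
    toward the observer, 0 leaves it static in the world, and -1 rotates
    it equally and oppositely.
    """
    sweep = math.degrees(math.atan2(lateral_offset_m, viewing_distance_m))
    return gain * sweep

# With the study's ~1.5 m viewing distance and ~1 m lateral excursion:
for g in (+1.0, 0.0, -1.0):
    print(g, object_rotation_deg(1.0, 1.5, g))
```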

    A psychophysically calibrated controller for navigating through large environments in a limited free-walking space

    Experience indicates that the sense of presence in a virtual environment is enhanced when participants are able to move actively through it. When exploring a virtual world by walking, the size of the model is usually limited by the size of the available tracking space. A promising way to overcome this limitation is motion compression, which decouples position in the real and virtual worlds by introducing imperceptible visual-proprioceptive conflicts. Such techniques usually precalculate the redirection factors, greatly reducing their robustness. We propose a novel way to determine the instantaneous rotational gains using a controller based on an optimization problem. We present a psychophysical study that measures the sensitivity to visual-proprioceptive conflicts during walking, and use it to calibrate a real-time controller. We show the validity of our approach by allowing users to walk through virtual environments vastly larger than the tracking space.
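    A simplified stand-in for such a controller is sketched below: it picks a per-frame rotational gain that nudges the user toward the tracking-space centre, clamped to an assumed imperceptible range. The paper's controller instead solves a psychophysically calibrated optimization problem; the rule and threshold values here are illustrative only.

```python
import math

def rotational_gain(delta_heading_rad, heading_error_rad,
                    min_gain=0.85, max_gain=1.25):
    """Choose the gain applied to the user's physical rotation this frame.

    `delta_heading_rad` is the physical head rotation this frame;
    `heading_error_rad` is the signed angle from the user's heading to the
    direction of the tracking-space centre. Rotations that reduce the
    error are amplified and rotations that grow it are damped, with gains
    clamped to an assumed imperceptible range. (The paper's controller
    instead solves a psychophysically calibrated optimization problem.)
    """
    if delta_heading_rad * heading_error_rad > 0.0:   # turning toward centre
        return max_gain
    if delta_heading_rad * heading_error_rad < 0.0:   # turning away from it
        return min_gain
    return 1.0                                        # no rotation: no redirection

# Per frame, the virtual camera yaw advances by the scaled rotation:
#   virtual_yaw += rotational_gain(d_theta, err) * d_theta
d_theta, err = math.radians(2.0), math.radians(40.0)
print(rotational_gain(d_theta, err))   # -> 1.25 (amplify turn toward centre)
```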

    Saliency based on cortex-like mechanisms
