
    Combining Path Integration and Remembered Landmarks When Navigating without Vision

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision rely only on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
    Funding: National Institutes of Health (U.S.) (Grants T32 HD007151, T32 EY07133, F32 EY019622, EY02857, EY017835-01, EY015616-03); United States Department of Education (Grant H133A011903)
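    The gated integration described above can be sketched as a simple decision rule: average the two cues (weighted by reliability) when they agree within tolerance, otherwise fall back on path integration alone. The function below is an illustrative sketch under assumed Gaussian noise, not the authors' actual model; the gate threshold and parameter names are hypothetical.

```python
import math

def combine_location_estimates(landmark, path_integration,
                               sigma_l, sigma_p, gate=2.0):
    """Gated cue combination (illustrative sketch, not the study's model).

    If the two position estimates are congruent (conflict within `gate`
    combined standard deviations), average them with inverse-variance
    weights; otherwise rely on path integration alone.
    """
    conflict = abs(landmark - path_integration)
    combined_sd = math.sqrt(sigma_l**2 + sigma_p**2)
    if conflict <= gate * combined_sd:
        # Inverse-variance (reliability-proportional) weighting.
        w_l = (1 / sigma_l**2) / (1 / sigma_l**2 + 1 / sigma_p**2)
        estimate = w_l * landmark + (1 - w_l) * path_integration
        variance = 1 / (1 / sigma_l**2 + 1 / sigma_p**2)
    else:
        # Incongruent cues: gate closes, path integration wins.
        estimate, variance = path_integration, sigma_p**2
    return estimate, variance
```

    When the cues are congruent, the combined variance is smaller than either cue's alone, matching the reported gain in precision over path integration by itself.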

    Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

    The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent: whether, once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features).

    Integration of vestibular and proprioceptive signals for spatial updating

    Frissen I, Campos JL, Souman JL, Ernst MO. Integration of vestibular and proprioceptive signals for spatial updating. Experimental Brain Research. 2011;212(2):163-176.
    Spatial updating during self-motion typically involves the appropriate integration of both visual and non-visual cues, including vestibular and proprioceptive information. Here, we investigated how human observers combine these two non-visual cues during full-stride curvilinear walking. To obtain a continuous, real-time estimate of perceived position, observers were asked to continuously point toward a previously viewed target in the absence of vision. They did so while moving on a large circular treadmill under various movement conditions. Two conditions were designed to evaluate spatial updating when information was largely limited to either proprioceptive information (walking in place) or vestibular information (passive movement). A third condition evaluated updating when both sources of information were available (walking through space) and were either congruent or in conflict. During both the passive movement condition and while walking through space, the pattern of pointing behavior demonstrated evidence of accurate egocentric updating. In contrast, when walking in place, perceived self-motion was underestimated and participants always adjusted the pointer at a constant rate, irrespective of changes in the rate at which the participant moved relative to the target. The results are discussed in relation to the maximum likelihood estimation model of sensory integration. They show that when the two cues were congruent, estimates were combined such that the variance of the adjustments was generally reduced. Results also suggest that when conflicts were introduced between the vestibular and proprioceptive cues, spatial updating was based on a weighted average of the two inputs.
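    The maximum likelihood estimation model referred to above has a standard closed form for independent Gaussian cues: each cue is weighted by its reliability (inverse variance), and the combined estimate is never noisier than the most reliable single cue. The sketch below illustrates that textbook formula with hypothetical vestibular and proprioceptive values; it is not the paper's fitted model.

```python
def mle_combine(estimates, sigmas):
    """Maximum-likelihood combination of independent Gaussian cues.

    Each cue is weighted in proportion to its reliability (1/variance);
    the combined variance 1 / sum(1/sigma_i^2) is at most as large as
    the most reliable cue's variance.
    """
    reliabilities = [1 / s**2 for s in sigmas]
    total = sum(reliabilities)
    mean = sum(r * e for r, e in zip(reliabilities, estimates)) / total
    variance = 1 / total
    return mean, variance

# Hypothetical perceived rotations (degrees): vestibular 30 (sd 3),
# proprioceptive 40 (sd 4). The combined estimate sits closer to the
# more reliable vestibular cue, with reduced variance.
combined, var = mle_combine([30.0, 40.0], [3.0, 4.0])
```

    Introducing a conflict between the two inputs shifts the combined estimate toward the more reliable cue, which is the weighted-average behavior the study reports.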

    Enabling Unconstrained Omnidirectional Walking Through Virtual Environments: An Overview of the CyberWalk Project

    The CyberWalk treadmill is the first truly omnidirectional treadmill of its size that allows for near-natural walking through arbitrarily large virtual environments. The platform represents advances in treadmill and virtual reality technology and engineering, but it is also a major step toward a single setup that allows the study of human locomotion and its many facets. This chapter focuses on the human behavioral research that was conducted to understand human locomotion from the perspective of specifying design criteria for the CyberWalk. The first part of the chapter describes research on the biomechanics of human walking, in particular the nature of natural unconstrained walking and the effects of treadmill walking on characteristics of gait. The second part describes the multisensory nature of walking, with a focus on the integration of vestibular and proprioceptive information during walking. The third part describes research on large-scale human navigation and identifies possible causes for the human tendency to veer from a straight path, and even walk in circles, when no external references are available. The chapter concludes with a summary of the features of the CyberWalk platform that were informed by this collection of research findings and briefly highlights the current and future scientific potential of the platform.