
    Reference frames in allocentric representations are invariant across static and active encoding

    An influential model of spatial memory, the so-called reference systems account, proposes that relationships between objects are biased by salient axes ("frames of reference") provided by environmental cues, such as the geometry of a room. In this study, we sought to examine the extent to which a salient environmental feature influences the formation of spatial memories when learning occurs via a single, static viewpoint and via active navigation, where information has to be integrated across multiple viewpoints. Participants learned the spatial layout of an object array that was arranged with respect to a prominent environmental feature within a virtual arena. Location memory was tested using judgments of relative direction. Experiment 1A employed a design similar to previous studies, whereby learning of object-location information occurred from a single, static viewpoint. Consistent with previous studies, spatial judgments were significantly more accurate when made from an orientation that was aligned, as opposed to misaligned, with the salient environmental feature. In Experiment 1B, a fresh group of participants learned the same object-location information through active exploration, which required integration of spatial information over time from a ground-level perspective. As in Experiment 1A, object-location information was organized around the salient environmental cue. Taken together, the findings suggest that the learning condition (static vs. active) does not affect the reference system employed to encode object-location information. Spatial reference systems appear to be a ubiquitous property of spatial representations and might serve to reduce the cognitive demands of spatial processing.
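    For context on the paradigm: a judgment of relative direction asks participants to imagine standing at one object while facing a second, and to point to a third. Below is a minimal sketch of how a single trial of this kind can be scored as absolute angular error; the coordinates, function names, and scoring convention are illustrative assumptions, not the authors' materials.

```python
import math

def bearing(p_from, p_to):
    """Allocentric bearing in degrees (0 = +y axis, increasing clockwise)."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def jrd_error(standing, facing, target, response_deg):
    """Absolute angular error for one judgment-of-relative-direction trial:
    'Imagine standing at `standing`, facing `facing`; point to `target`.'"""
    heading = bearing(standing, facing)                      # imagined facing direction
    correct = (bearing(standing, target) - heading) % 360.0  # correct egocentric angle
    diff = abs(response_deg % 360.0 - correct)
    return min(diff, 360.0 - diff)                           # wrap into [0, 180]

# Hypothetical trial: a response 5 degrees off the correct 90-degree answer
print(jrd_error(standing=(0, 0), facing=(0, 1), target=(1, 0), response_deg=95.0))
```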

    Walking and encoding heading bias

    Previous studies demonstrated that physical movement enhances spatial updating in described environments. However, those movements were executed only after the encoding of the environment, minimally affecting the development of the spatial representation. We therefore investigated whether and how participants could benefit, in terms of enhanced spatial updating, from executing physical movement during the encoding of described environments. Using the judgment of relative direction task, we compared the effects on spatial updating of walking both during and after the description of the environment versus walking only after the description. Spatial updating was evaluated in terms of accuracy and response times at different headings. We found that the distribution of response times across headings did not appear to be related to the physical movement executed, whereas the distribution of accuracy scores changed significantly with the action executed. Indeed, when no movement occurred during encoding of the environment, a preference for the learning heading was found, which did not emerge when walking occurred during encoding. The results therefore suggest that physical movement during encoding supports the development of a heading-independent representation of described environments, reducing anchoring to a preferred heading in favor of a global representation.

    The Neural Basis of Individual Differences in Directional Sense

    Individuals differ greatly in their ability to learn and navigate through environments. One potential source of this variation is “directional sense,” or the ability to identify, maintain, and compare allocentric headings. Allocentric headings are facing directions that are fixed to the external environment, such as cardinal directions. Measures of the ability to identify and compare allocentric headings, using photographs of familiar environments, have shown significant individual and strategy differences; however, the neural basis of these differences is unclear. Forty-five college students, who were highly familiar with a campus environment and ranged in self-reported sense-of-direction, underwent fMRI scans while they completed the Relative Heading task, in which they had to indicate the direction of a series of photographs of recognizable campus buildings (i.e., “target headings”) with respect to initial “orienting headings.” Large individual differences were found in accuracy and correct decision latencies, with gender, self-reported sense-of-direction, and familiarity with campus buildings all predicting task performance. Linear mixed models further showed that the directional relationships between the headings and the experiment location also impacted performance. Structural scans revealed that lateral orbitofrontal and superior parietal volume were related to task accuracy and decision latency, respectively. Bilateral hippocampus and right presubiculum volume were related to self-reported sense-of-direction. Meanwhile, functional results revealed clusters within the superior parietal lobule, supramarginal gyrus, superior frontal gyrus, lateral orbitofrontal cortex, and caudate, among others, in which the intensity of activation matched the linear magnitude of the difference between the orienting and target headings. While the retrosplenial cortex and hippocampus have previously been implicated in the coding of allocentric headings, this work revealed that comparing those headings additionally involves frontal and parietal regions. These results provide insights into the neural bases of the variation within human orientation abilities and, ultimately, human navigation.
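    The quantity this task turns on is the signed difference between the target heading and the orienting heading; the abstract reports activation scaling with its unsigned magnitude. A minimal sketch of that computation follows (the function name and heading convention are assumptions):

```python
def relative_heading(orienting_deg, target_deg):
    """Signed angular difference in (-180, 180] between an orienting heading
    and a target heading (allocentric degrees, e.g. 0 = north, 90 = east)."""
    d = (target_deg - orienting_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# The unsigned magnitude is what the reported activation clusters tracked
for orienting, target in [(0, 90), (45, 350), (180, 180)]:
    print(orienting, target, abs(relative_heading(orienting, target)))
```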

    Spatial memory for vertical locations

    Most studies on spatial memory refer to the horizontal plane, leaving an open question as to whether findings generalize to vertical spaces where gravity and the visual upright of our surrounding space are salient orientation cues. In three experiments, we examined which reference frame is used to organize memory for vertical locations: the one based on the body vertical, the visual-room vertical, or the direction of gravity. Participants judged interobject spatial relationships learned from a vertical layout in a virtual room. During learning and testing, we varied the orientation of the participant’s body (upright vs. lying sideways) and of the visually presented room relative to gravity (e.g., rotated by 90° along the frontal plane). Across all experiments, participants made quicker or more accurate judgments when the room was oriented in the same way as during learning with respect to their body, irrespective of their orientations relative to gravity. This suggests that participants employed an egocentric, body-based reference frame for representing vertical object locations. Our study also revealed an effect of body–gravity alignment during testing. Participants recalled spatial relations more accurately when upright, regardless of the body and visual-room orientation during learning. This finding is consistent with a hypothesis of selection conflict between different reference frames. Overall, our results suggest that a body-based reference frame is preferred over salient allocentric reference frames in memory for vertical locations perceived from a single view. Further, memory of vertical space seems to be tuned to work best in the default upright body orientation.

    Neural Representations of a Real-World Environment

    The ability to represent the spatial structure of the environment is critical for successful navigation. Extensive research using animal models has revealed the existence of specialized neurons that appear to code for spatial information in their firing patterns. However, little is known about which regions of the human brain support representations of large-scale space. To address this gap in the literature, we performed three functional magnetic resonance imaging (fMRI) experiments aimed at characterizing the representations of locations, headings, landmarks, and distances in a large environment for which our subjects had extensive real-world navigation experience: their college campus. We scanned University of Pennsylvania students while they made decisions about places on campus and then tested for spatial representations using multivoxel pattern analysis and fMRI adaptation. In Chapter 2, we tested for representations of the navigator's current location and heading, information necessary for self-localization. In Chapter 3, we tested whether these location and heading representations were consistent across perception and spatial imagery. Finally, in Chapter 4, we tested for representations of landmark identity and the distances between landmarks. Across the three experiments, we observed that specific regions of medial temporal and medial parietal cortex supported long-term memory representations of navigationally relevant spatial information. These results serve to elucidate the functions of these regions and offer a framework for understanding the relationship between spatial representations in the medial temporal lobe and in high-level visual regions. We discuss our findings in the context of the broader spatial cognition literature, including implications for studies of both humans and animal models.
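    For readers unfamiliar with multivoxel pattern analysis (MVPA): the logic is that if a region represents, say, location, a classifier should decode location above chance from that region's trial-by-trial voxel patterns. The sketch below is a generic illustration of that method on synthetic data, not the authors' pipeline; all sizes, labels, and the injected signal are invented.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: one multivoxel activity pattern (row) per trial
n_trials, n_voxels = 120, 200
y = rng.integers(0, 4, size=n_trials)        # e.g. 4 campus locations
X = rng.normal(size=(n_trials, n_voxels))    # ROI patterns (noise)
X[:, :10] += y[:, None] * 0.5                # inject a weak location signal

# Above-chance cross-validated decoding implies the ROI carries location information
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```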

    Blind Sailors' Spatial Representation Using an On-Board Force Feedback Arm: Two Case Studies

    Using a vocal, auditory, and haptic application designed for maritime navigation, blind sailors are able to set up and manage their voyages. However, how best to present this information remains a crucial issue for better understanding spatial cognition and improving navigation without vision. In this study, we asked two participants to use SeaTouch on board and manage the ship's heading during navigation in order to follow a predefined itinerary. Two conditions were tested. In the first, blind sailors consulted the updated ship positions on the virtual map presented in an allocentric frame of reference (i.e., facing north). In the second, they used the force feedback device in an egocentric frame of reference (i.e., facing the ship's heading). Spatial performance tended to show that the egocentric condition was better for controlling the course during displacement, whereas the allocentric condition was more efficient for building a mental representation and remembering it after the navigation task.
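    The two display conditions differ only by a change of reference frame: a north-up (allocentric) chart position can be re-expressed in a heading-up (egocentric) frame by translating to the ship and rotating by its heading. A minimal sketch of that transform follows (names and axis conventions are assumptions, unrelated to SeaTouch's actual code):

```python
import math

def allocentric_to_egocentric(point, ship_pos, heading_deg):
    """Re-express a north-up chart coordinate (x = east, y = north) in a
    heading-up, ship-centred frame (x = starboard, y = ahead of the bow)."""
    dx, dy = point[0] - ship_pos[0], point[1] - ship_pos[1]
    h = math.radians(heading_deg)                    # 0 = north, 90 = east
    starboard = dx * math.cos(h) - dy * math.sin(h)
    ahead = dx * math.sin(h) + dy * math.cos(h)
    return starboard, ahead

# A buoy one mile due east of a ship heading east lies dead ahead: ~(0.0, 1.0)
print(allocentric_to_egocentric(point=(1, 0), ship_pos=(0, 0), heading_deg=90))
```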

    The Representational Foundations of Updating Object Locations


    Anchoring The Cognitive Map To The Visual World

    To interact rapidly and effectively with the environment, the mammalian brain needs a representation of the spatial layout of the external world (or a “cognitive map”). A person might need to know where she is standing to find her way home, for instance, or might need to know where she is looking to reach for her out-of-sight keys. For many behaviors, however, simply possessing a map is not enough; in order for a map to be useful in a dynamic world, it must be anchored to stable environmental cues. The goal of the present research is to address this spatial anchoring problem in two different domains: navigation and vision. In the first part of the thesis, which comprises Chapters 1-3, we examine how navigators use perceptual information to re-anchor their cognitive map after becoming lost, a process known as spatial reorientation. Using a novel behavioral paradigm with rodents, in Chapter 2 we show that the cognitive map is reoriented by dissociable inputs for identifying where one is and recovering which way one is facing. The findings presented in Chapter 2 also highlight the importance of environmental boundaries, such as the walls of a room, for anchoring the cognitive map. We thus predicted that there might exist a brain region that is selectively involved in boundary perception during navigation. Accordingly, in Chapter 3, we combine transcranial magnetic stimulation and virtual-reality navigation to reveal the existence of such a boundary perception region in humans. In the second part of this thesis, Chapter 4, we explore whether the same mechanisms that support the cognitive map of navigational space also mediate a map of visual space (i.e., where one is looking). Using functional magnetic resonance imaging and eye tracking, we show that human entorhinal cortex supports a map-like representation of visual space that obeys the same principles of boundary-anchoring previously observed in rodent maps of navigational space. Together, this research elucidates how mental maps are anchored to the world, thus allowing the mammalian brain to form durable spatial representations across body and eye movements.

    Online or offline? Exploring working memory constraints in spatial updating

    All spatial representation theories rely upon two spatial updating processes in order to maintain spatially consistent self-to-object relationships: movement-driven, automatic online updating and offline, conscious mental transformations of perspective. Theoretical differentiation based on offline updating is difficult given the equivalent predictions for many of the spatial tasks commonly used (e.g., egocentric pointing). However, representational theories do diverge with respect to the predicted working memory constraints of online updating. In Experiment 1, participants studied groups of 4, 6, and 8 targets, engaged in a 180° rotation on half the trials, and completed a series of judgments of relative direction and egocentric pointing. Set size effects for both tasks were limited to latencies alone, suggesting offline updating. In Experiment 2, participants studied smaller (3-target) configurations and made egocentric pointing responses. On half the trials, participants engaged in either a verbal or spatial 1-back task during both the retention and rotation periods. No effect of dual-task load or type was found for egocentric pointing. Both latencies and errors were significantly greater for post-rotation pointing, suggesting offline updating. The lack of evidence for online updating is surprising and contrary to previous findings that it is an obligatory, automatic process (e.g., Farrell & Thomson, 1998). Multiple models were developed within ACT-R/S (Harrison & Schunn, 2003) illustrating the sufficiency of offline updating to account for the current findings. The challenges of detecting online updating and investigating its working memory constraints are discussed in light of these results and simulations.
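    As a side note on the dual-task manipulation: a 1-back task requires a response whenever the current item matches the immediately preceding one, which is what loads working memory during the retention and rotation periods. A minimal sketch of its scoring logic (the stimuli are invented):

```python
def one_back_hits(stream):
    """Indices at which a 1-back response is correct: the current item
    matches the immediately preceding item."""
    return [i for i in range(1, len(stream)) if stream[i] == stream[i - 1]]

# Verbal variant (letters) and spatial variant (grid locations), both hypothetical
print(one_back_hits(list("ABBACC")))            # [2, 5]
print(one_back_hits([(0, 1), (0, 1), (2, 3)]))  # [1]
```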

    A wayfinding aid to increase navigator independence

    Wayfinding aids are of great benefit because users do not have to rely on their learned geographic knowledge or orientation skills alone for successful navigation. Additionally, cognitive resources usually captured by this activity can be spent elsewhere. A challenge, however, remains for wayfinding aid developers: because of the automation these aids provide, navigator independence may be decreasing with their use. To address this, wayfinding aids might be enhanced to additionally perform a training role. Since the most versatile wayfinders appear to deploy a dual strategy for geographic orientation, it is proposed that wayfinding aids be improved to foster such an approach. This paper presents the results of an experimental study testing a portion of the suggested enhancement.