230 research outputs found

    Grid-cell representations in mental simulation

    Get PDF
    Anticipating the future is a key motif of the brain, possibly supported by mental simulation of upcoming events. Rodent single-cell recordings suggest that spatially tuned cells can represent upcoming locations. Grid-like representations have been observed in the human entorhinal cortex during virtual and imagined navigation. However, it has remained unknown whether grid-like representations contribute to mental simulation in the absence of imagined movement. Participants imagined directions between building locations in a large-scale virtual-reality city while undergoing fMRI, without re-exposure to the environment. Using multi-voxel pattern analysis, we provide evidence for representations of absolute imagined direction at a resolution of 30° in the parahippocampal gyrus, consistent with the head-direction system. Furthermore, we capitalize on the six-fold rotational symmetry of grid-cell firing to demonstrate a 60° periodic pattern-similarity structure in the entorhinal cortex. Our findings imply a role of the entorhinal grid system in mental simulation and future thinking beyond spatial navigation.
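    The logic of such a 60° periodicity analysis can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline; all variable names, trial counts, and noise levels are assumptions. Pattern similarity between trial pairs is binned by the angular difference between imagined directions, and pairs whose difference is close to a multiple of 60° ("aligned") are compared against the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trials: an imagined direction (deg) and a fake voxel pattern whose
# similarity structure repeats every 60 deg, mimicking six-fold grid symmetry.
n_trials, n_voxels = 120, 50
directions = rng.uniform(0, 360, n_trials)
phase = np.deg2rad(directions * 6)  # six-fold symmetry: 360/6 = 60 deg period
patterns = np.column_stack([np.cos(phase), np.sin(phase)])
patterns = patterns @ rng.normal(size=(2, n_voxels))   # project into voxel space
patterns += 0.5 * rng.normal(size=(n_trials, n_voxels))  # measurement noise

# Pairwise pattern similarity (correlation) and angular difference per pair.
z = (patterns - patterns.mean(1, keepdims=True)) / patterns.std(1, keepdims=True)
sim = (z @ z.T) / n_voxels
i, j = np.triu_indices(n_trials, k=1)
ang_diff = np.abs(directions[i] - directions[j]) % 360
ang_diff = np.minimum(ang_diff, 360 - ang_diff)  # fold to 0..180 deg

# Aligned pairs: angular difference within 15 deg of a multiple of 60.
folded = ang_diff % 60
aligned = (folded < 15) | (folded > 45)
effect = sim[i, j][aligned].mean() - sim[i, j][~aligned].mean()
print(f"aligned - misaligned similarity: {effect:.3f}")  # positive under six-fold symmetry
```

    A positive aligned-minus-misaligned difference is the signature of a 60° periodic similarity structure; control analyses in the actual study would test competing periodicities (e.g., 45° or 90°) the same way.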

    Neural Representations of a Real-World Environment

    Get PDF
    The ability to represent the spatial structure of the environment is critical for successful navigation. Extensive research using animal models has revealed the existence of specialized neurons that appear to code for spatial information in their firing patterns. However, little is known about which regions of the human brain support representations of large-scale space. To address this gap in the literature, we performed three functional magnetic resonance imaging (fMRI) experiments aimed at characterizing the representations of locations, headings, landmarks, and distances in a large environment for which our subjects had extensive real-world navigation experience: their college campus. We scanned University of Pennsylvania students while they made decisions about places on campus and then tested for spatial representations using multivoxel pattern analysis and fMRI adaptation. In Chapter 2, we tested for representations of the navigator's current location and heading, information necessary for self-localization. In Chapter 3, we tested whether these location and heading representations were consistent across perception and spatial imagery. Finally, in Chapter 4, we tested for representations of landmark identity and the distances between landmarks. Across the three experiments, we observed that specific regions of medial temporal and medial parietal cortex supported long-term memory representations of navigationally relevant spatial information. These results serve to elucidate the functions of these regions and offer a framework for understanding the relationship between spatial representations in the medial temporal lobe and in high-level visual regions. We discuss our findings in the context of the broader spatial cognition literature, including implications for studies of both humans and animal models.
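    The core of a multivoxel pattern analysis of this kind can be illustrated with a split-half correlation scheme (a common MVPA variant, shown here on synthetic data; the heading labels, voxel counts, and noise levels are assumptions, not the study's actual parameters). A region is said to carry heading information if voxel patterns from one half of the data correlate more strongly with same-heading than with different-heading patterns from the other half.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic voxel patterns for two headings, split into two independent
# halves of the data, as in a split-half correlation MVPA.
n_voxels = 30
proto = {"north": rng.normal(size=n_voxels), "east": rng.normal(size=n_voxels)}
half1 = {c: p + 0.3 * rng.normal(size=n_voxels) for c, p in proto.items()}
half2 = {c: p + 0.3 * rng.normal(size=n_voxels) for c, p in proto.items()}

def corr(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

# Information is present if same-heading correlations across halves exceed
# different-heading correlations.
same = np.mean([corr(half1[c], half2[c]) for c in proto])
diff = corr(half1["north"], half2["east"])
print(f"same-heading r={same:.2f}, different-heading r={diff:.2f}")
```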

    Is navigation in virtual reality with fMRI really navigation?

    Get PDF
    Identifying the neural mechanisms underlying spatial orientation and navigation has long posed a challenge for researchers. Multiple approaches incorporating a variety of techniques and animal models have been used to address this issue. More recently, virtual navigation has become a popular tool for understanding navigational processes. Although combining this technique with functional imaging can provide important information on many aspects of spatial navigation, it is important to recognize some of the limitations these techniques have for gaining a complete understanding of the neural mechanisms of navigation. Foremost among these is that, when participants perform a virtual navigation task in a scanner, they are lying motionless in a supine position while viewing a video monitor. Here, we provide evidence that spatial orientation and navigation rely to a large extent on locomotion and its accompanying activation of motor, vestibular, and proprioceptive systems. Researchers should therefore consider the impact of the absence of these motion-based systems when interpreting virtual navigation/functional imaging experiments to achieve a more accurate understanding of the mechanisms underlying navigation. © 2013 Massachusetts Institute of Technology

    Understanding space by moving through it: neural networks of motion- and space processing in humans

    Get PDF
    Humans explore the world by moving in it, whether moving their whole body as during walking or driving a car, or moving their arm to explore the immediate environment. During movement, self-motion cues arise from the sensorimotor system comprising vestibular, proprioceptive, visual and motor cues, which provide information about direction and speed of the movement. Such cues allow the body to keep track of its location while it moves through space. Sensorimotor signals providing self-motion information can therefore serve as a source for spatial processing in the brain. This thesis is an inquiry into human brain systems of movement and motion processing in a number of different sensory and motor modalities using functional magnetic resonance imaging (fMRI). By characterizing connections between these systems and the spatial representation system in the brain, this thesis investigated how humans understand space by moving through it. In the first study of this thesis, the recollection networks of whole-body movement were explored. Brain activation was measured during the retrieval of active and passive self-motion and retrieval of observing another person performing these tasks. Primary sensorimotor areas dominated the recollection network of active movement, while higher association areas in parietal and mid-occipital cortex were recruited during the recollection of passive transport. Common to both self-motion conditions were bilateral activations in the posterior medial temporal lobe (MTL). No MTL activations were observed during recollection of movement observation. Considering that on a behavioral level, both active and passive self-motion provide sufficient information for spatial estimations, the common activation in MTL might represent the common physiological substrate for such estimations. The second study investigated processing in the 'parahippocampal place area' (PPA), a region in the posterior MTL, during haptic exploration of spatial layout. 
The PPA is known to respond strongly to visuo-spatial layout. The study explored whether this region processes visuo-spatial layout specifically or spatial layout in general, independent of the encoding sensory modality. In cohorts of both sighted and blind participants, activation patterns in PPA were measured while participants haptically explored the spatial layout of model scenes or the shape of information-matched objects. In both sighted and blind individuals, PPA activity was greater during layout exploration than during object-shape exploration. While PPA activity in the sighted could also be caused by a transformation of haptic information into a mental visual image of the layout, two points speak against this: Firstly, no increase in connectivity between the visual cortex and the PPA was observed, which would be expected if visual imagery took place. Secondly, blind participants, who cannot resort to visual imagery, showed the same pattern of PPA activity. Together, these results suggest that the PPA processes spatial layout information independent of the encoding modality. The third and last study addressed error accumulation in motion processing at different levels of the visual system. Using novel analysis methods of fMRI data, possible links between physiological properties in hMT+ and V1 and inter-individual differences in perceptual performance were explored. A correlation between noise characteristics and performance score was found in hMT+ but not V1. Better performance correlated with greater signal variability in hMT+. Though neurophysiological variability is traditionally seen as detrimental to behavioral accuracy, the results of this thesis contribute to the increasing evidence which suggests the opposite: that more efficient processing can, under certain circumstances, be related to more noise in neurophysiological signals.
In summary, the results of this doctoral thesis contribute to our current understanding of motion and movement processing in the brain and its interface with spatial processing networks. The posterior MTL appears to be a key region for both self-motion and spatial processing. The results further indicate that physiological characteristics at the level of category-specific processing, but not primary encoding, reflect behavioral judgments on motion. This thesis also makes methodological contributions to the field of neuroimaging: it was found that the analysis of signal variability is a good gauge for analysing inter-individual physiological differences, while superior head-movement correction techniques have to be developed before pattern classification can be used to this end.

    The hippocampus and spatial constraints on mental imagery

    Get PDF
    We review a model of imagery and memory retrieval based on allocentric spatial representation by place cells and boundary vector cells (BVCs) in the medial temporal lobe, and their translation into egocentric images in retrosplenial and parietal areas. In this model, the activity of place cells constrains the contents of imagery and retrieval to be coherent and consistent with the subject occupying a single location, while the activity of head-direction cells along Papez's circuit determines the viewpoint direction for which the egocentric image is generated. An extension of this model is discussed in which a role for grid cells in dynamic updating of representations (mental navigation) is included. We also discuss the extension of this model to implement a version of the dual representation theory of post-traumatic stress disorder (PTSD), in which PTSD arises from an imbalance between weak allocentric hippocampal-mediated contextual representations and strong affective/sensory representations. The implications of these models for behavioral, neuropsychological, and neuroimaging data in humans are explored.

    A neural-level model of spatial memory and imagery

    Get PDF
    We present a model of how neural representations of egocentric spatial experiences in parietal cortex interface with viewpoint-independent representations in medial temporal areas, via retrosplenial cortex, to enable many key aspects of spatial cognition. This account shows how previously reported neural responses (place, head-direction and grid cells, allocentric boundary- and object-vector cells, gain-field neurons) can map onto higher cognitive function in a modular way, and predicts new cell types (egocentric and head-direction-modulated boundary- and object-vector cells). The model predicts how these neural populations should interact across multiple brain regions to support spatial memory, scene construction, novelty-detection, 'trace cells', and mental navigation. Simulated behavior and firing rate maps are compared to experimental data, for example showing how object-vector cells allow items to be remembered within a contextual representation based on environmental boundaries, and how grid cells could update the viewpoint in imagery during planning and short-cutting by driving sequential place cell activity.
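    The boundary-vector-cell tuning underlying models of this family is commonly written as a product of Gaussian tuning curves over boundary distance and allocentric direction. The sketch below is an illustrative toy version of that idea, not the specific model presented in this paper; the function name and tuning widths are assumptions.

```python
import numpy as np

def bvc_rate(d, phi, d_pref, phi_pref, sigma_d=0.1, sigma_phi=0.2):
    """Firing rate of a toy boundary-vector cell: the product of Gaussian
    tuning to boundary distance d and allocentric direction phi (radians)."""
    ang = np.angle(np.exp(1j * (phi - phi_pref)))  # wrapped angular difference
    dist_tuning = np.exp(-((d - d_pref) ** 2) / (2 * sigma_d ** 2))
    dir_tuning = np.exp(-(ang ** 2) / (2 * sigma_phi ** 2))
    return dist_tuning * dir_tuning

# Peak response when a boundary lies at the preferred distance and direction;
# the response falls off as either deviates from the preferred values.
print(bvc_rate(0.3, np.pi / 2, d_pref=0.3, phi_pref=np.pi / 2))  # -> 1.0
```

    Summing such responses over cells with different preferred distances and directions yields a population code for the geometry of nearby boundaries, which is the substrate the model uses for contextual representations.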

    Electrophysiological Signatures of Spatial Boundaries in the Human Subiculum.

    Get PDF
    Environmental boundaries play a crucial role in spatial navigation and memory across a wide range of distantly related species. In rodents, boundary representations have been identified at the single-cell level in the subiculum and entorhinal cortex of the hippocampal formation. Although studies of hippocampal function and spatial behavior suggest that similar representations might exist in humans, boundary-related neural activity had not previously been identified electrophysiologically in humans. To address this gap in the literature, we analyzed intracranial recordings from the hippocampal formation of surgical epilepsy patients (of both sexes) while they performed a virtual spatial navigation task, and compared the power in three frequency bands (1-4, 4-10, and 30-90 Hz) for target locations near and far from the environmental boundaries. Our results suggest that encoding target locations near boundaries elicited stronger theta oscillations than encoding locations near the center of the environment, and that this difference cannot be explained by variables such as trial length, speed, movement, or performance. These findings provide direct evidence of boundary-dependent neural activity in humans, localized to the subiculum, the homolog of the hippocampal subregion in which most boundary cells are found in rodents, and indicate that this system can represent attended locations rather than the position of one's own body.
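    The band-power comparison at the heart of this analysis can be sketched with Welch's method, using the frequency bands named in the abstract. This is a minimal illustration on synthetic single-trial signals; the sampling rate, epoch length, and signal amplitudes are assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

fs = 500  # sampling rate in Hz; an assumed value for illustration
rng = np.random.default_rng(1)

def band_power(signal, fs, lo, hi):
    """Mean spectral power within [lo, hi] Hz, estimated via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

t = np.arange(0, 4, 1 / fs)  # one 4-s encoding epoch

# Synthetic epochs: the "near boundary" trial carries a stronger theta
# (here 6 Hz) oscillation than the "far from boundary" trial.
near = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(size=t.size)
far = 0.5 * np.sin(2 * np.pi * 6 * t) + rng.normal(size=t.size)

theta_near = band_power(near, fs, 4, 10)   # 4-10 Hz band from the abstract
theta_far = band_power(far, fs, 4, 10)
print(f"theta power near={theta_near:.3f}, far={theta_far:.3f}")
```

    In the actual study, such per-trial band-power estimates would be aggregated across trials and patients and tested statistically, with trial length, speed, movement, and performance entered as control variables.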

    Navigation in Real-World Environments: New Opportunities Afforded by Advances in Mobile Brain Imaging

    Get PDF
    A central question in neuroscience and psychology is how the mammalian brain represents the outside world and enables interaction with it. Significant progress on this question has been made in the domain of spatial cognition, where a consistent network of brain regions that represent external space has been identified in both humans and rodents. In rodents, much of the work to date has been done in situations where the animal is free to move about naturally. By contrast, the majority of work carried out to date in humans is static, due to limitations imposed by traditional laboratory-based imaging techniques. In recent years, significant progress has been made in bridging the gap between animal and human work by employing virtual reality (VR) technology to simulate aspects of real-world navigation. Despite this progress, VR studies often fail to fully simulate important aspects of real-world navigation, where information derived from self-motion is integrated with representations of environmental features and task goals. In the current review article, we provide a brief overview of animal and human imaging work to date, focusing on commonalities and differences in findings across species. Following on from this, we discuss VR studies of spatial cognition, outlining limitations and developments, before introducing mobile brain imaging techniques and describing technical challenges and solutions for real-world recording. Finally, we discuss how these advances in mobile brain imaging technology provide an unprecedented opportunity to illuminate how the brain represents complex multifaceted information during naturalistic navigation.

    Anchoring The Cognitive Map To The Visual World

    Get PDF
    To interact rapidly and effectively with the environment, the mammalian brain needs a representation of the spatial layout of the external world (or a “cognitive map”). A person might need to know where she is standing to find her way home, for instance, or might need to know where she is looking to reach for her out-of-sight keys. For many behaviors, however, simply possessing a map is not enough; in order for a map to be useful in a dynamic world, it must be anchored to stable environmental cues. The goal of the present research is to address this spatial anchoring problem in two different domains: navigation and vision. In the first part of the thesis, which comprises Chapters 1-3, we examine how navigators use perceptual information to re-anchor their cognitive map after becoming lost, a process known as spatial reorientation. Using a novel behavioral paradigm with rodents, in Chapter 2 we show that the cognitive map is reoriented by dissociable inputs for identifying where one is and recovering which way one is facing. The findings presented in Chapter 2 also highlight the importance of environmental boundaries, such as the walls of a room, for anchoring the cognitive map. We thus predicted that there might exist a brain region that is selectively involved in boundary perception during navigation. Accordingly, in Chapter 3, we combine transcranial magnetic stimulation and virtual-reality navigation to reveal the existence of such a boundary perception region in humans. In the second part of this thesis, Chapter 4, we explore whether the same mechanisms that support the cognitive map of navigational space also mediate a map of visual space (i.e., where one is looking). Using functional magnetic resonance imaging and eye tracking, we show that human entorhinal cortex supports a map-like representation of visual space that obeys the same principles of boundary-anchoring previously observed in rodent maps of navigational space. 
Together, this research elucidates how mental maps are anchored to the world, thus allowing the mammalian brain to form durable spatial representations across body and eye movements.

    Hippocampal-entorhinal codes for space, time and cognition

    Get PDF
    Contains fulltext: 207524.pdf (publisher's version) (Open Access). Radboud University, 9 October 2019. Promotor: Doeller, C.F. Co-promotor: Deuker, Lorena. 225 p.