    EEG correlates of spatial orientation in the human retrosplenial complex

    © 2015 Elsevier Inc. Studies on spatial navigation reliably demonstrate that the retrosplenial complex (RSC) plays a pivotal role in allocentric spatial information processing by transforming egocentric and allocentric spatial information into the respective other spatial reference frame (SRF). While a growing number of imaging studies investigate the role of the RSC in spatial tasks, measures with high temporal resolution, such as electroencephalography (EEG), are missing. To investigate the function of the RSC in spatial navigation with high temporal resolution, we used EEG to analyze spectral perturbations during navigation based on allocentric and egocentric SRFs. Participants performed a path integration task in a clearly structured virtual environment providing allothetic information. Continuous EEG recordings were decomposed by independent component analysis (ICA) with subsequent source reconstruction of independent component time series using equivalent dipole modeling. Time-frequency transformation was used to investigate reference frame-specific orientation processes during navigation as compared to a control condition with identical visual input but no orientation task. Our results demonstrate that navigation based on an egocentric reference frame recruited a network including the parietal, motor, and occipital cortices, with dominant perturbations in the alpha band and theta modulation in frontal cortex. Allocentric navigation was accompanied by performance-related desynchronization of the 8–13 Hz frequency band and synchronization in the 12–14 Hz band in the RSC. The results support the claim that the retrosplenial complex is central to translating egocentric spatial information into allocentric reference frames. Modulations in different frequencies with different time courses in the RSC further provide first evidence of two distinct neural processes reflecting the translation of spatial information based on distinct reference frames and the computation of heading changes.
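    A minimal sketch of the kind of pipeline described above (ICA decomposition of the continuous EEG followed by time-frequency transformation of epoched data), using MNE-Python. The file name, trigger code, epoch window, and frequency range are hypothetical placeholders rather than values from the study, and the equivalent dipole modeling of independent components mentioned in the abstract (typically done with EEGLAB's DIPFIT) is only noted in a comment.

    # Hypothetical ICA + time-frequency sketch; not the authors' actual code.
    import numpy as np
    import mne
    from mne.preprocessing import ICA
    from mne.time_frequency import tfr_morlet

    raw = mne.io.read_raw_fif("navigation_raw.fif", preload=True)  # hypothetical file
    raw.filter(l_freq=1.0, h_freq=100.0)

    # Decompose the continuous recording into independent components
    ica = ICA(n_components=30, method="infomax", random_state=42)
    ica.fit(raw)
    # (Equivalent dipole models of these components would next be fit,
    #  e.g., with EEGLAB's DIPFIT routines.)

    # Epoch around hypothetical navigation-onset triggers and compute
    # event-related spectral perturbations in the theta/alpha range
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, event_id={"navigation_onset": 1},
                        tmin=-1.0, tmax=3.0, baseline=(-1.0, 0.0), preload=True)

    freqs = np.arange(4, 15)              # theta through low beta, in Hz
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                       return_itc=False, average=True)
    power.plot_topo(baseline=(-1.0, 0.0), mode="percent")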

    EEG analysis of visually-induced vection in left- and right-handers


    Mobile brain/body imaging of landmark-based navigation with high-density EEG.

    Coupling behavioral measures and brain imaging in naturalistic, ecological conditions is key to comprehending the neural bases of spatial navigation. This highly integrative function encompasses sensorimotor, cognitive, and executive processes that jointly mediate active exploration and spatial learning. However, most neuroimaging approaches in humans are based on static, motion-constrained paradigms and do not account for all these processes, in particular multisensory integration. Following the Mobile Brain/Body Imaging approach, we aimed to explore the cortical correlates of landmark-based navigation in actively behaving young adults solving a Y-maze task in immersive virtual reality. EEG analysis identified a set of brain areas matching the state-of-the-art brain imaging literature on landmark-based navigation. Spatial behavior in mobile conditions additionally involved sensorimotor areas related to motor execution and proprioception that are usually overlooked in static fMRI paradigms. As expected, we located a cortical source in or near the posterior cingulate, in line with the engagement of the retrosplenial complex in spatial reorientation. Consistent with its role in visuo-spatial processing and coding, we observed an alpha-power desynchronization while participants gathered visual information. We also hypothesized behavior-dependent modulations of the cortical signal during navigation. Despite finding few differences between the encoding and retrieval phases of the task, we identified transient time-frequency patterns attributed, for instance, to attentional demand, as reflected in the alpha/gamma range, or to memory workload in the delta/theta range. We confirmed that combining mobile high-density EEG and biometric measures can help unravel the brain structures and the neural modulations subtending ecological landmark-based navigation.

    Single‐trial regression of spatial exploration behavior indicates posterior EEG alpha modulation to reflect egocentric coding

    Learning to navigate uncharted terrain is a key cognitive ability that emerges as a deeply embodied process, with eye movements and locomotion proving most useful for sampling the environment. We studied healthy human participants during active spatial learning of room-scale virtual reality (VR) mazes. In the invisible maze task, participants wearing a wireless electroencephalography (EEG) headset were free to explore their surroundings, given only the objective to build and foster a mental spatial representation of their environment. Spatial uncertainty was resolved by touching otherwise invisible walls that were briefly rendered visible inside VR, similar to finding one's way in the dark. We showcase the capabilities of mobile brain/body imaging using VR, demonstrating several analysis approaches based on general linear models (GLMs) to reveal behavior-dependent brain dynamics. After confirming spatial learning via drawn sketch maps, we used motion capture to characterize spatial exploration behavior, which showed a shift from initial exploration to subsequent exploitation of the mental representation. Using independent component analysis, the current work specifically targeted oscillations in response to wall touches, reflecting isolated spatial learning events arising in deep posterior EEG sources located in the retrosplenial complex. Single-trial regression identified significant modulation of alpha oscillations by the immediate, egocentric exploration behavior: alpha power decreased when participants encountered novel walls and decreased further with increasing walking distance between subsequent touches of novel walls. We conclude that these oscillations play a prominent role during egocentric evidencing of allocentric spatial hypotheses.
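    The single-trial regression described above can be sketched as an ordinary least-squares model relating alpha power at each wall touch to the exploration behavior preceding it. The variables below (alpha_power, novel_wall, walk_distance) are hypothetical, synthetic stand-ins for the quantities named in the abstract, not data from the study.

    # Hypothetical single-trial regression sketch: alpha power ~ behavior.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_touches = 200

    # Synthetic single-trial predictors derived from motion capture
    novel_wall = rng.integers(0, 2, n_touches)         # 1 = previously unknown wall
    walk_distance = rng.gamma(2.0, 1.5, n_touches)     # meters walked since last touch

    # Synthetic single-trial alpha power (e.g., 8-13 Hz source activity)
    alpha_power = (-0.3 * novel_wall - 0.1 * walk_distance
                   + rng.normal(0.0, 1.0, n_touches))

    # Ordinary least squares: negative coefficients correspond to the
    # alpha desynchronization reported for novel walls and longer walks
    X = sm.add_constant(np.column_stack([novel_wall, walk_distance]))
    fit = sm.OLS(alpha_power, X).fit()
    print(fit.summary())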

    Navigation in Real-World Environments: New Opportunities Afforded by Advances in Mobile Brain Imaging

    A central question in neuroscience and psychology is how the mammalian brain represents the outside world and enables interaction with it. Significant progress on this question has been made in the domain of spatial cognition, where a consistent network of brain regions that represent external space has been identified in both humans and rodents. In rodents, much of the work to date has been done in situations where the animal is free to move about naturally. By contrast, the majority of work carried out to date in humans is static, due to limitations imposed by traditional laboratory-based imaging techniques. In recent years, significant progress has been made in bridging the gap between animal and human work by employing virtual reality (VR) technology to simulate aspects of real-world navigation. Despite this progress, VR studies often fail to fully simulate important aspects of real-world navigation, where information derived from self-motion is integrated with representations of environmental features and task goals. In the current review article, we provide a brief overview of animal and human imaging work to date, focusing on commonalities and differences in findings across species. Following on from this, we discuss VR studies of spatial cognition, outlining limitations and developments, before introducing mobile brain imaging techniques and describing technical challenges and solutions for real-world recording. Finally, we discuss how these advances in mobile brain imaging technology provide an unprecedented opportunity to illuminate how the brain represents complex multifaceted information during naturalistic navigation.

    Brain dynamics during landmark-based learning in spatial navigation

    In the current study, I investigated both human behavior and brain dynamics during spatial navigation to gain a better understanding of human navigational strategies and the brain signals that underlie spatial cognition. To this end, a custom-built virtual reality task and 64-channel scalp electroencephalography (EEG) were used. As a first step, we presented a novel, straightforward, yet powerful tool to evaluate individual differences during navigation, comprising a virtual radial-arm maze inspired by animal experiments. The virtual maze is designed and furnished like an art gallery to provide a more realistic and engaging environment for exploration. We investigated whether different sets of instructions (explicit or implicit) affect navigational performance, and we assessed the effect of the instructions on exploration strategies during both place learning and recall. We tested 42 subjects and evaluated their wayfinding ability. Individual differences were assessed through analysis of the navigational paths, which permitted the isolation and definition of a few strategies adopted by subjects given explicit instructions and by those given implicit instructions. The second step aimed to explore brain dynamics and neurophysiological activity during spatial navigation. More specifically, we aimed to determine how navigation-related brain regions are connected and how their interactions and electrical activity vary across navigational tasks and environments. This experiment was divided into a learning phase and a test phase. The same virtual maze (art gallery) as in the behavioral part of the study was used, so that subjects performed landmark-based navigation. The main task was to find and memorize the positions of several goals within the environment during the learning phase and to retrieve the spatial information about the goals during the test phase. We recorded EEG from 20 subjects during the experiment, and both scalp-level and source-level analyses were employed to determine how the brain represents the spatial location of landmarks and targets and, more precisely, how different brain regions contribute to spatial orientation and landmark-based learning during navigation.

    Dissociable Influences of Auditory Object vs. Spatial Attention on Visual System Oscillatory Activity

    Given that both auditory and visual systems have anatomically separate object identification (“what”) and spatial (“where”) pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory “what” vs. “where” attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic (“what”) vs. spatial (“where”) aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from the time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7–13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location, centered in the alpha range 400–600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity (“what”) vs. sound location (“where”). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during “what” vs. “where” auditory attention.
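    The sustained alpha (7–13 Hz) effect reported above amounts to contrasting inter-stimulus band power between the two attention conditions. Below is a rough sketch of such a contrast on synthetic data; in the study the inputs would be cortically constrained MEG source time courses for visual-cortex regions of interest, and the sampling rate, epoch counts, and test statistic shown here are placeholder assumptions.

    # Hypothetical alpha-band power contrast between attention conditions.
    import numpy as np
    from mne.time_frequency import psd_array_welch
    from scipy import stats

    sfreq = 600.0                                   # assumed sampling rate (Hz)
    n_epochs, n_times = 80, int(2 * sfreq)          # 2 s inter-stimulus epochs
    rng = np.random.default_rng(1)

    attend_phoneme = rng.normal(size=(n_epochs, n_times))    # "what" condition
    attend_location = rng.normal(size=(n_epochs, n_times))   # "where" condition

    def alpha_power(data):
        # Welch PSD restricted to the alpha band, averaged across frequencies
        psd, freqs = psd_array_welch(data, sfreq=sfreq, fmin=7.0, fmax=13.0)
        return psd.mean(axis=-1)

    # Independent-samples comparison across epochs; in practice this would
    # be a paired contrast across subjects or a cluster-based permutation test
    t_val, p_val = stats.ttest_ind(alpha_power(attend_phoneme),
                                   alpha_power(attend_location))
    print(f"alpha 'what' vs. 'where': t = {t_val:.2f}, p = {p_val:.3f}")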