
    Anterior Hippocampus and Goal-Directed Spatial Decision Making


    Neural systems supporting navigation

    Highlights:
    • Recent neuroimaging and electrophysiology studies have begun to shed light on the neural dynamics of navigation systems.
    • Computational models have advanced theories of how entorhinal grid cells and hippocampal place cells might serve navigation.
    • Hippocampus and entorhinal cortex provide complementary representations of routes and vectors for navigation.
    Much is known about how neural systems determine current spatial position and orientation in the environment. By contrast, little is understood about how the brain represents future goal locations or computes the distance and direction to such goals. Recent electrophysiology, computational modelling, and neuroimaging research has shed new light on how the spatial relationship to a goal may be determined and represented during navigation. This research suggests that the hippocampus may code the path to the goal while the entorhinal cortex represents the vector to the goal. It also reveals that the engagement of the hippocampus and entorhinal cortex varies across the different operational stages of navigation, such as travel, route planning, and decision-making at waypoints.
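    The route-versus-vector distinction summarised above can be made concrete with a toy example: on a grid with an obstacle, the shortest-path distance to a goal (a route-style code, attributed here to hippocampus) can differ sharply from the straight-line vector to the goal (a vector-style code, attributed to entorhinal cortex). The sketch below is purely illustrative; the grid world, wall layout, and function names are assumptions, not anything from the studies reviewed.

```python
# Illustrative sketch (not from the reviewed work): contrast a route code
# (shortest-path distance around obstacles) with a vector code
# (straight-line displacement to the goal) in a small grid world.
from collections import deque
import math

def path_distance(start, goal, blocked, size=10):
    """Breadth-first-search step count from start to goal, avoiding blocked cells."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == goal:
            return d
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return math.inf

def goal_vector(start, goal):
    """Euclidean distance and direction (radians) straight to the goal."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

if __name__ == "__main__":
    start, goal = (0, 0), (5, 0)
    wall = {(2, y) for y in range(0, 9)}  # a wall that forces a long detour
    print("route distance:", path_distance(start, goal, wall))   # detour length
    print("vector to goal:", goal_vector(start, goal))           # (5.0, 0.0)
```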

    Lost in spatial translation - A novel tool to objectively assess spatial disorientation in Alzheimer's disease and frontotemporal dementia

    Spatial disorientation is a prominent feature of early Alzheimer's disease (AD), attributed to degeneration of medial temporal and parietal brain regions, including the retrosplenial cortex (RSC). By contrast, frontotemporal dementia (FTD) syndromes show generally intact spatial orientation at presentation. However, currently no clinical tasks are routinely administered to objectively assess spatial orientation in these neurodegenerative conditions. In this study we investigated spatial orientation in 58 dementia patients and 23 healthy controls using a novel virtual supermarket task as well as voxel-based morphometry (VBM). We compared performance on this task with visual and verbal memory function, which has traditionally been used to discriminate between AD and FTD. Participants viewed a series of videos from a first-person perspective travelling through a virtual supermarket and were required to maintain orientation to a starting location. Analyses revealed significantly impaired spatial orientation in AD compared to the FTD patient groups. Spatial orientation performance was found to discriminate the AD and FTD patient groups to a very high degree at presentation. More importantly, integrity of the RSC was identified as a key neural correlate of orientation performance. These findings confirm the notion that i) it is feasible to assess spatial orientation objectively via our novel supermarket task; ii) impaired orientation is a prominent feature that can be applied clinically to discriminate between AD and FTD; and iii) the RSC emerges as a critical biomarker for assessing spatial orientation deficits in these neurodegenerative conditions.

    Bottom-up retinotopic organization supports top-down mental imagery

    Finding a path between locations is a routine task in daily life. Mental navigation is often used to plan a route to a destination that is not visible from the current location. We first used functional magnetic resonance imaging (fMRI) and surface-based averaging methods to find high-level brain regions involved in imagined navigation between locations in a building very familiar to each participant. This revealed a mental navigation network that includes the precuneus, retrosplenial cortex (RSC), parahippocampal place area (PPA), occipital place area (OPA), supplementary motor area (SMA), premotor cortex, and areas along the medial and anterior intraparietal sulcus. We then visualized retinotopic maps across the entire cortex using wide-field, natural scene stimuli in a separate set of fMRI experiments. This revealed five distinct visual streams or ‘fingers’ that extend anteriorly into middle temporal, superior parietal, medial parietal, retrosplenial and ventral occipitotemporal cortex. By using spherical morphing to overlap these two data sets, we showed that the mental navigation network primarily occupies areas that also contain retinotopic maps. Specifically, the scene-selective regions RSC, PPA and OPA share an emphasis on the far periphery of the upper visual field. These results suggest that bottom-up retinotopic organization may help to efficiently encode scene and location information in an eye-centered reference frame for top-down, internally generated mental navigation. This study also pushes the border of visual cortex further anterior than initially expected.
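    The overlap analysis described above can be illustrated in miniature: once the mental-navigation network and the retinotopic maps have been brought onto a common cortical surface, each can be treated as a boolean mask over surface vertices and compared directly. The sketch below is an illustration only; the vertex count, random masks, and Dice measure are assumptions and do not reproduce the authors' surface-based pipeline.

```python
# Hypothetical setup (not the authors' pipeline): per-vertex boolean masks
# on a shared cortical surface, compared with a Dice coefficient and a
# "fraction of the navigation network that is retinotopic" measure.
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap coefficient between two boolean vertex masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

rng = np.random.default_rng(seed=0)
n_vertices = 160_000                          # assumed order of magnitude per hemisphere
navigation = rng.random(n_vertices) > 0.97    # placeholder task-defined network mask
retinotopic = rng.random(n_vertices) > 0.90   # placeholder retinotopic-map mask

overlap = np.logical_and(navigation, retinotopic).sum()
print(f"Dice coefficient: {dice(navigation, retinotopic):.3f}")
print(f"navigation vertices inside retinotopic cortex: {overlap / navigation.sum():.3f}")
```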

    Prospective Memory in Older Adults: Where We Are Now and What Is Next

    M. Kliegel acknowledges financial support from the Swiss National Science Foundation (SNSF).

    The Fast and the Flexible: training neural networks to learn to follow instructions from small data

    Learning to follow human instructions is a long-pursued goal in artificial intelligence. The task becomes particularly challenging if no prior knowledge of the employed language is assumed and only a handful of examples is available to learn from. Past work has relied on hand-coded components or manually engineered features to provide the strong inductive biases that make learning in such situations possible. In contrast, here we seek to establish whether this knowledge can be acquired automatically by a neural network system through a two-phase training procedure: a (slow) offline learning stage in which the network learns about the general structure of the task, and a (fast) online adaptation phase in which the network learns the language of a new speaker. Controlled experiments show that when the network is exposed to familiar instructions that contain novel words, the model adapts very efficiently to the new vocabulary. Moreover, even for human speakers whose language usage can depart significantly from our artificial training language, our network can still make use of its automatically acquired inductive bias to learn to follow instructions more effectively.
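    The two-phase procedure described above, slow offline learning of the general task followed by fast online adaptation to a new speaker, can be sketched as a pretrain-then-adapt loop. The model, data iterables, optimisers, and hyperparameters below are placeholders chosen for illustration under that assumption; they do not reproduce the paper's architecture or training details.

```python
# Rough two-phase training sketch (placeholders throughout; not the
# paper's actual model, data, or hyperparameters).
import torch
from torch import nn

def offline_phase(model, task_batches, epochs=10, lr=1e-3):
    """Slow phase: learn the general structure of the instruction-following task."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for instructions, actions in task_batches:   # batches of (encoded instruction, target action)
            optimiser.zero_grad()
            loss = loss_fn(model(instructions), actions)
            loss.backward()
            optimiser.step()

def online_adaptation(model, speaker_examples, steps=20, lr=1e-2):
    """Fast phase: adapt to a new speaker's vocabulary from a handful of examples."""
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for instructions, actions in speaker_examples:
            optimiser.zero_grad()
            loss = loss_fn(model(instructions), actions)
            loss.backward()
            optimiser.step()
```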

    Causal Confusion in Imitation Learning

    Behavioral cloning reduces policy learning to supervised learning by training a discriminative model to predict expert actions given observations. Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment. We point out that ignoring causality is particularly damaging because of the distributional shift in imitation learning. In particular, it leads to a counter-intuitive "causal misidentification" phenomenon: access to more information can yield worse performance. We investigate how this problem arises and propose a solution that combats it through targeted interventions, either environment interaction or expert queries, to determine the correct causal model. We show that causal misidentification occurs in several benchmark control domains as well as realistic driving settings, and we validate our solution against DAgger and other baselines and ablations. (Published at NeurIPS 2019; 9 pages plus references and appendices.)
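    The behavioral-cloning setup that the abstract starts from is plain supervised learning over (observation, expert action) pairs, and nothing in that objective prevents the learned policy from keying on spurious correlates of the expert's actions. The toy example below is an assumed construction, not the paper's benchmarks; it illustrates the "more information can hurt" effect with a feature that merely echoes the expert's action, which is perfectly predictive during training but unhelpful once distributional shift breaks that correlation at test time.

```python
# Toy illustration of causal misidentification (assumed setup, not the
# paper's experiments): adding a non-causal "previous action" feature
# helps at training time but hurts once it decorrelates at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 5000
causal = rng.integers(0, 2, n)        # signal the expert actually reacts to
expert_action = causal.copy()         # expert acts on the causal signal
prev_action = expert_action.copy()    # nuisance feature that echoes the action

X_full = np.stack([causal, prev_action], axis=1)   # "more information"
X_causal = causal.reshape(-1, 1)                   # causal feature only

clf_full = LogisticRegression().fit(X_full, expert_action)
clf_causal = LogisticRegression().fit(X_causal, expert_action)

# At test time the learner acts under its own policy, so the previous-action
# feature no longer tracks the causal signal (distributional shift).
causal_test = rng.integers(0, 2, n)
prev_test = rng.integers(0, 2, n)
X_full_test = np.stack([causal_test, prev_test], axis=1)

print("accuracy with nuisance feature:", clf_full.score(X_full_test, causal_test))
print("accuracy with causal feature only:",
      clf_causal.score(causal_test.reshape(-1, 1), causal_test))
```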