
    Neural processes underpinning episodic memory

    Episodic memory is the memory for our personal past experiences. Although numerous functional magnetic resonance imaging (fMRI) studies investigating its neural basis have revealed a consistent and distributed network of associated brain regions, surprisingly little is known about the contributions individual brain areas make to the recollective experience. In this thesis I address this fundamental issue by employing a range of different experimental techniques including neuropsychological testing, virtual reality environments, whole brain and high spatial resolution fMRI, and multivariate pattern analysis. Episodic memory recall is widely agreed to be a reconstructive process, one that is known to be critically reliant on the hippocampus. I therefore hypothesised that the same neural machinery responsible for reconstruction might also support 'constructive' cognitive functions such as imagination. To test this proposal, patients with focal bilateral damage to the hippocampus were asked to imagine new experiences and were found to be impaired relative to matched control participants. Moreover, driving this deficit was a lack of spatial coherence in their imagined experiences, pointing to a role for the hippocampus in binding together the disparate elements of a scene. A subsequent fMRI study involving healthy participants compared the recall of real memories with the construction of imaginary memories. This revealed a fronto-temporo-parietal network common to both tasks that included the hippocampus, ventromedial prefrontal, retrosplenial and parietal cortices. Based on these results I advanced the notion that this network might support the process of 'scene construction', defined as the generation and maintenance of a complex and coherent spatial context. Furthermore, I argued that this scene construction network might underpin other important cognitive functions besides episodic memory and imagination, such as navigation and thinking about the future.
It has been proposed that spatial context may act as the scaffold around which episodic memories are built. Given that the hippocampus appears to play a critical role in imagination by supporting the creation of a rich, coherent spatial scene, I sought to explore the nature of this hippocampal spatial code in a novel way. By combining high spatial resolution fMRI with multivariate pattern analysis techniques it proved possible to accurately determine where a subject was located in a virtual reality environment based solely on the pattern of activity across hippocampal voxels. For this to have been possible, the hippocampal population code must be large and non-uniform. I then extended these techniques to the domain of episodic memory by showing that individual memories could be accurately decoded from the pattern of activity across hippocampal voxels, thus identifying individual memory traces. I consider these findings together with other recent advances in the episodic memory field, and present a new perspective on the role of the hippocampus in episodic recollection. I discuss how this new (and preliminary) framework compares with current prevailing theories of hippocampal function, and suggest how it might account for some previously contradictory data.
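The decoding logic described above, classifying a subject's location from the distributed pattern of activity across hippocampal voxels, can be sketched with a cross-validated linear classifier. The sketch below uses simulated data: the voxel counts, trial numbers, and noise levels are invented for illustration, and a real analysis would operate on preprocessed fMRI beta estimates rather than random numbers.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Simulated example: decode which of 4 locations in a virtual environment
# a subject occupies, from activity patterns across hippocampal voxels.
rng = np.random.default_rng(0)
n_trials_per_location, n_voxels, n_locations = 30, 100, 4

# Assume each location evokes a distinct (non-uniform) mean pattern
# across voxels, plus trial-by-trial noise.
location_patterns = rng.normal(0, 1, size=(n_locations, n_voxels))
X = np.vstack([
    location_patterns[loc] + rng.normal(0, 2, size=(n_trials_per_location, n_voxels))
    for loc in range(n_locations)
])
y = np.repeat(np.arange(n_locations), n_trials_per_location)

# Cross-validated linear classification: accuracy reliably above chance
# implies the voxel pattern carries location information.
scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = {1/n_locations:.2f})")
```

If the population code were uniform across voxels, the per-location mean patterns would be indistinguishable and accuracy would fall to chance, which is why above-chance decoding licenses the inference about a large, non-uniform hippocampal code.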

    SCAN: Learning Hierarchical Compositional Visual Concepts

    The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state-of-the-art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts.

    A goal direction signal in the human entorhinal/subicular region

    Being able to navigate to a safe place, such as a home or nest, is a fundamental behaviour for all complex animals. Determining the direction to such goals is a crucial first step in navigation. Surprisingly little is known about how, or where in the brain, this 'goal direction signal' is represented. In mammals 'head-direction cells' are thought to support this process, but despite 30 years of research no evidence for a goal direction representation has been reported [1, 2]. Here we used functional magnetic resonance imaging to record neural activity while participants made goal direction judgments based on a previously learned virtual environment. We applied multivoxel pattern analysis [3-5] to these data, and found that the human entorhinal/subicular region contains a neural representation of intended goal direction. Furthermore, the neural pattern expressed for a given goal direction matched the pattern expressed when simply facing that same direction. This suggests the existence of a shared neural representation of both goal and facing direction. We argue that this reflects a mechanism based on head-direction populations that simulate future goal directions during route planning [6]. Our data further revealed that the strength of direction information predicts performance. Finally, we found a dissociation between this geocentric information in the entorhinal/subicular region and egocentric direction information in the precuneus.
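The key inferential step above, that the pattern for a given goal direction "matched" the pattern for facing that same direction, amounts to a cross-condition similarity comparison: matched direction pairs should correlate more strongly than mismatched pairs. The following sketch runs that comparison on simulated voxel patterns; the voxel count, noise level, and the assumption of a single shared pattern per direction are all invented for illustration.

```python
import numpy as np

# Simulated test of a shared goal/facing direction code: if both conditions
# draw on one underlying representation per direction, the pattern for
# "goal = north" should correlate more with "facing north" than with
# facing any other direction.
rng = np.random.default_rng(1)
n_voxels, directions = 80, ["N", "E", "S", "W"]

# One underlying pattern per direction, shared by both conditions,
# measured with independent noise in each condition.
base = {d: rng.normal(0, 1, n_voxels) for d in directions}
facing = {d: base[d] + rng.normal(0, 0.5, n_voxels) for d in directions}
goal = {d: base[d] + rng.normal(0, 0.5, n_voxels) for d in directions}

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

match = np.mean([corr(goal[d], facing[d]) for d in directions])
mismatch = np.mean([corr(goal[d], facing[e])
                    for d in directions for e in directions if d != e])
print(f"matched r = {match:.2f}, mismatched r = {mismatch:.2f}")
```

A matched-direction correlation reliably exceeding the mismatched one is the signature of a shared code; if goal and facing direction used unrelated representations, the two averages would not differ.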

    Semantic representations in the temporal pole predict false memories

    Recent advances in neuroscience have given us unprecedented insight into the neural mechanisms of false memory, showing that artificial memories can be inserted into the memory cells of the hippocampus in a way that is indistinguishable from true memories. However, this alone is not enough to explain how false memories can arise naturally in the course of our daily lives. Cognitive psychology has demonstrated that many instances of false memory, both in the laboratory and the real world, can be attributed to semantic interference. Whereas previous studies have found that a diverse set of regions show some involvement in semantic false memory, none have revealed the nature of the semantic representations underpinning the phenomenon. Here we use fMRI with representational similarity analysis to search for a neural code consistent with semantic false memory. We find clear evidence that false memories emerge from a similarity-based neural code in the temporal pole, a region that has been called the "semantic hub" of the brain. We further show that each individual has a partially unique semantic code within the temporal pole, and this unique code can predict idiosyncratic patterns of memory errors. Finally, we show that the same neural code can also predict variation in true-memory performance, consistent with an adaptive perspective on false memory. Taken together, our findings reveal the underlying structure of neural representations of semantic knowledge, and how this semantic structure can both enhance and distort our memories.
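The representational similarity analysis (RSA) mentioned above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from a semantic model and another from voxel patterns, then rank-correlate their off-diagonal entries. The data below are simulated, and the item counts, feature space, and linear feature-to-voxel mapping are all illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

# Minimal RSA sketch on simulated data: does the pairwise similarity
# structure of items in a semantic feature space reappear in the pairwise
# similarity structure of their voxel patterns?
rng = np.random.default_rng(2)
n_items, n_features, n_voxels = 12, 20, 60

semantic_features = rng.normal(0, 1, (n_items, n_features))
# Assume voxel patterns are a noisy linear readout of the semantic features.
mapping = rng.normal(0, 1, (n_features, n_voxels))
voxel_patterns = semantic_features @ mapping + rng.normal(0, 3, (n_items, n_voxels))

def rdm(patterns):
    # Pairwise correlation distance (1 - r) between item patterns.
    return 1 - np.corrcoef(patterns)

# Compare the two RDMs over their lower triangles with a rank correlation.
tri = np.tril_indices(n_items, k=-1)
rho, _ = spearmanr(rdm(semantic_features)[tri], rdm(voxel_patterns)[tri])
print(f"model-neural RDM correlation: rho = {rho:.2f}")
```

A positive model-neural RDM correlation is what "a similarity-based neural code" means operationally: items that are close in semantic space evoke correspondingly similar voxel patterns, which is exactly the structure that would let semantically related lures be mistaken for studied items.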

    How cognitive and reactive fear circuits optimize escape decisions in humans

    Flight initiation distance (FID), the distance at which an organism flees from an approaching threat, is an ecological metric of the cost–benefit functions underlying escape decisions. We adapted the FID paradigm to investigate how fast- or slow-attacking "virtual predators" constrain escape decisions. We show that rapid escape decisions rely on "reactive fear" circuits in the periaqueductal gray and midcingulate cortex (MCC), while protracted escape decisions, defined by larger buffer zones, were associated with "cognitive fear" circuits, which include the posterior cingulate cortex, hippocampus, and ventromedial prefrontal cortex, circuits implicated in more complex information processing, cognitive avoidance strategies, and behavioral flexibility. Using a Bayesian decision-making model, we further show that optimization of escape decisions under rapid flight was localized to the MCC, a region involved in adaptive motor control, while the hippocampus is implicated in optimizing decisions that update and control slower escape initiation. These results demonstrate a previously unexplored link between defensive survival circuits and their role in adaptive escape decisions.
