    Which Way Was I Going? Contextual Retrieval Supports the Disambiguation of Well Learned Overlapping Navigational Routes

    Groundbreaking research in animals has demonstrated that the hippocampus contains neurons that distinguish between overlapping navigational trajectories. These hippocampal neurons respond selectively to the context of specific episodes despite interference from overlapping memory representations. The present study used functional magnetic resonance imaging in humans to examine the role of the hippocampus and related structures when participants need to retrieve contextual information to navigate well learned spatial sequences that share common elements. Participants were trained outside the scanner to navigate through 12 virtual mazes from a ground-level first-person perspective. Six of the 12 mazes shared overlapping components. Overlapping mazes began and ended at distinct locations, but converged in the middle to share some hallways with another maze. Non-overlapping mazes did not share any hallways with any other maze. Successful navigation through the overlapping hallways required the retrieval of contextual information relevant to the current navigational episode. Results revealed greater activation during the successful navigation of the overlapping mazes compared with the non-overlapping mazes in regions typically associated with spatial and episodic memory, including the hippocampus, parahippocampal cortex, and orbitofrontal cortex. When combined with previous research, the current findings suggest that an anatomically integrated system including the hippocampus, parahippocampal cortex, and orbitofrontal cortex is critical for the contextually dependent retrieval of well learned overlapping navigational routes.
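    The abstract does not describe the analysis pipeline itself; as a hedged illustration of the contrast it reports (overlapping > non-overlapping navigation), a minimal first-level GLM sketch using nilearn might look like the following. The file name, TR, trial timings, and condition labels are assumptions for illustration, not details taken from the study.

```python
# Illustrative first-level GLM contrast: overlapping > non-overlapping maze navigation.
# Paths, TR, onsets, and condition labels are hypothetical, not from the paper.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Events table with one row per navigation trial; 'trial_type' marks the maze condition.
events = pd.DataFrame({
    "onset":      [0.0, 30.0, 60.0, 90.0],
    "duration":   [25.0, 25.0, 25.0, 25.0],
    "trial_type": ["overlapping", "nonoverlapping", "overlapping", "nonoverlapping"],
})

model = FirstLevelModel(t_r=2.0, noise_model="ar1")
model = model.fit("sub-01_task-maze_bold.nii.gz", events=events)

# Contrast asking where BOLD signal is greater for overlapping than non-overlapping mazes.
z_map = model.compute_contrast("overlapping - nonoverlapping", output_type="z_score")
z_map.to_filename("overlap_gt_nonoverlap_zmap.nii.gz")
```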

    No gender differences in egocentric and allocentric environmental transformation after compensating for male advantage by manipulating familiarity

    The present study has a two-fold aim: to investigate whether gender differences persist even when more time is given to acquire spatial information, and to assess the gender effect when the retrieval phase requires recalling the pathway from the same or a different reference perspective (egocentric or allocentric). Specifically, we analysed the performance of men and women while they learned a path from a map or by observing an experimenter in a real environment. We then asked them to reproduce the learned path using the same reference system (map learning vs. map retrieval, or real-environment learning vs. real-environment retrieval) or a different reference system (map learning vs. real-environment retrieval, or vice versa). The results showed that gender differences were not present in the retrieval phase when women had the necessary time to acquire spatial information. Moreover, using egocentric coordinates in both the learning and retrieval phases proved easier than the other conditions, whereas learning through allocentric coordinates and then retrieving the environmental information using egocentric coordinates proved to be the most difficult. The results also showed that, by manipulating familiarity, gender differences disappeared or were attenuated in all conditions.
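    A gender (between-subjects) by learning/retrieval condition (within-subjects) comparison of this kind is typically analysed with a mixed-design ANOVA. The sketch below, using pingouin, is only illustrative; the column names and the accuracy measure are assumptions, not the authors' actual analysis.

```python
# Illustrative mixed-design ANOVA: gender (between) x learning/retrieval condition (within).
# The data file layout and dependent variable are hypothetical.
import pandas as pd
import pingouin as pg

# Expected long format: one row per subject per condition,
# with columns: subject, gender, condition, accuracy.
df = pd.read_csv("path_reproduction_scores.csv")

aov = pg.mixed_anova(
    data=df,
    dv="accuracy",        # path-reproduction accuracy (assumed measure)
    within="condition",   # e.g. map->map, map->real, real->map, real->real
    between="gender",
    subject="subject",
)
print(aov)
```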

    Integrating Spatial Working Memory and Remote Memory: Interactions between the Medial Prefrontal Cortex and Hippocampus

    In recent years, two separate research streams have focused on information sharing between the medial prefrontal cortex (mPFC) and hippocampus (HC). Research into spatial working memory has shown that successful execution of many types of behaviors requires synchronous activity in the theta range between the mPFC and HC, whereas studies of memory consolidation have shown that shifts in area dependency may be temporally modulated. While the nature of the information being communicated is still unclear, both spatial working memory and remote memory recall are reliant on interactions between these two areas. This review will present recent evidence showing that these two processes are not as separate as they first appeared. We will also present a novel conceptualization of the nature of the medial prefrontal representation and how this might help explain this area's role in spatial working memory and remote memory recall.
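    Theta-range synchrony between mPFC and HC of the kind discussed here is commonly quantified with measures such as the phase-locking value (PLV). The sketch below is a minimal illustration on synthetic data; the signal names, sampling rate, and theta band edges are assumptions, not values from the reviewed studies.

```python
# Illustrative phase-locking value (PLV) between two LFP channels in the theta band.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
mpfc_lfp = np.sin(2 * np.pi * 7 * t) + 0.5 * np.random.randn(t.size)        # toy mPFC trace
hc_lfp = np.sin(2 * np.pi * 7 * t + 0.4) + 0.5 * np.random.randn(t.size)    # toy HC trace

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Instantaneous theta phase (4-12 Hz) via the Hilbert transform.
phase_mpfc = np.angle(hilbert(bandpass(mpfc_lfp, 4, 12, fs)))
phase_hc = np.angle(hilbert(bandpass(hc_lfp, 4, 12, fs)))

# PLV: magnitude of the mean phase-difference vector (1 = perfect locking, 0 = none).
plv = np.abs(np.mean(np.exp(1j * (phase_mpfc - phase_hc))))
print(f"theta PLV = {plv:.3f}")
```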

    The spectro-contextual encoding and retrieval theory of episodic memory.

    The spectral fingerprint hypothesis, which posits that different frequencies of oscillations underlie different cognitive operations, provides one account for how interactions between brain regions support perceptual and attentive processes (Siegel et al., 2012). Here, we explore and extend this idea to the domain of human episodic memory encoding and retrieval. Incorporating findings from the synaptic to cognitive levels of organization, we argue that spectrally precise cross-frequency coupling and phase synchronization promote the formation of hippocampal-neocortical cell assemblies that form the basis for episodic memory. We suggest that both cell assembly firing patterns and the global pattern of brain oscillatory activity within hippocampal-neocortical networks represent the contents of a particular memory. Drawing upon the ideas of context reinstatement and multiple trace theory, we argue that memory retrieval is driven by internal and/or external factors that recreate the frequency-specific oscillatory patterns that occurred during episodic encoding. These ideas are synthesized into a novel model of episodic memory (the spectro-contextual encoding and retrieval theory, or "SCERT") that provides several testable predictions for future research.
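    Spectrally precise cross-frequency coupling of the sort SCERT invokes is often quantified with a phase-amplitude modulation index (e.g., in the style of Tort and colleagues). The sketch below is a minimal, illustrative version computed on a synthetic signal; the frequency bands and bin count are assumptions and it is not the authors' analysis.

```python
# Illustrative phase-amplitude coupling: does gamma amplitude depend on theta phase?
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t)        # gamma amplitude rides on theta phase
signal = theta + 0.3 * gamma + 0.2 * np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 8, fs)))    # theta phase
amp = np.abs(hilbert(bandpass(signal, 40, 80, fs)))      # gamma amplitude envelope

# Bin gamma amplitude by theta phase and compute a Tort-style modulation index.
n_bins = 18
bins = np.linspace(-np.pi, np.pi, n_bins + 1)
mean_amp = np.array([amp[(phase >= bins[i]) & (phase < bins[i + 1])].mean()
                     for i in range(n_bins)])
p = mean_amp / mean_amp.sum()
mi = (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
print(f"modulation index = {mi:.4f}")
```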

    Differential recruitment of brain networks following route and cartographic map learning of spatial environments.

    An extensive neuroimaging literature has helped characterize the brain regions involved in navigating a spatial environment. Far less is known, however, about the brain networks involved when learning a spatial layout from a cartographic map. To compare the two means of acquiring a spatial representation, participants learned spatial environments either by directly navigating them or by studying an aerial-view map. While undergoing functional magnetic resonance imaging (fMRI), participants then performed two different tasks to assess knowledge of the spatial environment: a scene- and orientation-dependent perceptual (SOP) pointing task and a judgment of relative direction (JRD) of landmarks pointing task. We found three brain regions showing significant effects of route vs. map learning during the two tasks. Parahippocampal and retrosplenial cortex showed greater activation following route compared with map learning during the JRD but not the SOP task, while the inferior frontal gyrus showed greater activation following map compared with route learning during the SOP but not the JRD task. We interpret these results to suggest that parahippocampal and retrosplenial cortex were involved in translating the scene- and orientation-dependent coordinate information acquired during route learning into a landmark-referenced representation, while the inferior frontal gyrus played a role in converting the primarily landmark-referenced coordinates acquired during map learning into a scene- and orientation-dependent coordinate system. Together, our results provide novel insight into the different brain networks underlying spatial representations formed during navigation vs. cartographic map learning and provide additional constraints on theoretical models of the neural basis of human spatial representation.
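    The crossover pattern reported here (region responds to route > map in one task but not the other) amounts to a learning-by-task interaction at the region-of-interest level. As a hedged illustration only, the sketch below tests such an interaction on per-subject ROI betas with a repeated-measures ANOVA; the abstract does not state whether learning condition was manipulated within subjects, so the fully within-subjects layout, file name, and column names are assumptions.

```python
# Illustrative ROI-level 2x2 repeated-measures test: learning (route/map) x task (JRD/SOP).
# The long-format table of per-subject ROI betas is hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

betas = pd.read_csv("parahippocampal_betas_long.csv")  # columns: subject, learning, task, beta

res = AnovaRM(data=betas, depvar="beta", subject="subject",
              within=["learning", "task"]).fit()
print(res)   # the learning x task interaction term captures the crossover described above
```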

    Learning Visual Features from Snapshots for Web Search

    When applying learning-to-rank algorithms to Web search, a large number of features are usually designed to capture the relevance signals. Most of these features are computed from extracted textual elements, link analysis, and user logs. However, Web pages are not merely linked texts; they have structured layouts that organize a large variety of elements in different styles. Such layout can itself convey useful visual information indicating the relevance of a Web page. For example, the query-independent layout (i.e., the raw page layout) can help identify page quality, while the query-dependent layout (i.e., the page rendered with matched query words) can further reveal rich structural information (e.g., the size, position, and proximity of the matching signals). However, such visual layout information has seldom been utilized in Web search. In this work, we propose to learn rich visual features automatically from the layout of Web pages (i.e., Web page snapshots) for relevance ranking. Both query-independent and query-dependent snapshots are considered as new inputs. We then propose a novel visual perception model, inspired by human visual search behavior during page viewing, to extract the visual features. This model can be learned end-to-end together with traditional human-crafted features. We also show that such visual features can be efficiently acquired in an online setting with an extended inverted indexing scheme. Experiments on benchmark collections demonstrate that learning visual features from Web page snapshots can significantly improve the performance of relevance ranking in ad-hoc Web retrieval tasks. Comment: CIKM 201
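    The paper's own visual perception model is not specified in this abstract, so the sketch below is only a generic approximation of the overall idea: a small CNN over a page snapshot whose output is fused with hand-crafted ranking features and trained end-to-end with a pairwise hinge loss. The architecture, input sizes, and feature dimension are assumptions, not the authors' design.

```python
# Generic sketch: fuse CNN features from a page snapshot with traditional ranking features.
import torch
import torch.nn as nn

class SnapshotRanker(nn.Module):
    def __init__(self, n_handcrafted: int = 46):
        super().__init__()
        # Small CNN over a rendered page snapshot (1 channel, e.g. a binarized layout map).
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Combine visual features with hand-crafted features into one relevance score.
        self.scorer = nn.Sequential(
            nn.Linear(32 + n_handcrafted, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, snapshot, handcrafted):
        v = self.visual(snapshot)                                   # (batch, 32)
        return self.scorer(torch.cat([v, handcrafted], dim=1)).squeeze(-1)

# Pairwise hinge loss: the relevant page should outscore the non-relevant one.
model = SnapshotRanker()
snap_pos, snap_neg = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)
feat_pos, feat_neg = torch.rand(8, 46), torch.rand(8, 46)
loss = torch.clamp(1.0 - (model(snap_pos, feat_pos) - model(snap_neg, feat_neg)), min=0).mean()
loss.backward()
```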

    Fidelity metrics for virtual environment simulations based on spatial memory awareness states

    This paper describes a methodology based on human judgments of memory awareness states for assessing the simulation fidelity of a virtual environment (VE) in relation to its real-scene counterpart. To demonstrate the distinction between task-performance-based approaches and additional human evaluation of cognitive awareness states, a photorealistic VE was created. The resulting scenes, displayed on a head-mounted display (HMD) with or without head tracking or on a desktop monitor, were then compared to the real-world task situation they represented by investigating spatial memory after exposure. Participants described how they completed their spatial recollections by selecting one of four awareness states after retrieval, in an initial test and in a retention test one week after exposure to the environment. These states reflected the level of visual mental imagery involved during retrieval and the familiarity of the recollection, and also included guesses, even if informed. Experimental results revealed variations in the distribution of participants’ awareness states across conditions while, in certain cases, task performance failed to reveal any. Experimental conditions that incorporated head tracking were not associated with visually induced recollections. In general, simulation of task performance does not necessarily entail simulation of the awareness states involved when completing a memory task. The general premise of this research focuses on how tasks are achieved, rather than only on what is achieved. The extent to which judgments of human memory recall, memory awareness states, and presence in the physical environment and the VE are similar provides a fidelity metric for the simulation in question.
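    Comparing the distribution of awareness-state choices across display conditions is the kind of analysis often handled with a chi-square test of independence; the sketch below shows one way to run it. The condition labels, state labels, and counts are invented purely for illustration and are not the paper's data.

```python
# Illustrative chi-square test: do awareness-state distributions differ across conditions?
import numpy as np
from scipy.stats import chi2_contingency

# Rows: display condition (e.g. HMD tracked, HMD untracked, desktop, real scene) -- assumed.
# Columns: awareness state chosen at retrieval (four categories) -- assumed labels.
counts = np.array([
    [14,  9, 4, 3],
    [10, 11, 6, 3],
    [ 8, 10, 7, 5],
    [18,  7, 3, 2],
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```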