The Cognitive Architecture of Spatial Navigation: Hippocampal and Striatal Contributions
Spatial navigation can serve as a model system in cognitive neuroscience, in which specific neural representations, learning rules, and control strategies can be inferred from the vast experimental literature that exists across many species, including humans. Here, we review this literature, focusing on the contributions of hippocampal and striatal systems, and attempt to outline a minimal cognitive architecture that is consistent with the experimental literature and that synthesizes previous related computational modeling. The resulting architecture includes striatal reinforcement learning based on egocentric representations of sensory states and actions, incidental Hebbian association of sensory information with allocentric state representations in the hippocampus, and arbitration of the outputs of both systems based on confidence/uncertainty in medial prefrontal cortex. We discuss the relationship between this architecture and learning in model-free and model-based systems, episodic memory, imagery, and planning, including some open questions and directions for further experiments.
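The arbitration step described in this abstract can be illustrated with a minimal sketch. The function name `arbitrate` and the use of estimate variance as the confidence signal are illustrative assumptions, not the paper's implementation; the abstract only states that medial prefrontal cortex selects between the systems' outputs based on confidence/uncertainty.

```python
def arbitrate(q_striatal, var_striatal, q_hippocampal, var_hippocampal):
    """Return the action value from whichever system is currently more
    certain (lower estimate variance), mimicking a confidence-based
    arbiter between the striatal (model-free) and hippocampal
    (map-based) controllers."""
    if var_striatal <= var_hippocampal:
        return q_striatal, "striatal"
    return q_hippocampal, "hippocampal"


# Example: the hippocampal estimate is more certain, so it wins.
value, system = arbitrate(q_striatal=0.8, var_striatal=0.3,
                          q_hippocampal=0.6, var_hippocampal=0.1)
```

A winner-take-all selection is only one possibility; a precision-weighted mixture of the two estimates would be an equally plausible reading of "arbitration".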
The Mixed Instrumental Controller: Using Value of Information to Combine Habitual Choice and Mental Simulation
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-based and model-free methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance between alternative cached action values. Overall, the model by default chooses on the basis of computationally lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus–ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation.
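The gating logic of the controller can be sketched as follows. The Value-of-Information proxy used here (high action-value uncertainty minus the gap between the top two cached values) and the names `mic_choose` and `sim_cost` are assumptions for illustration; the paper's actual VoI computation is only summarized, not specified, in the abstract.

```python
def mic_choose(cached_values, uncertainties, simulate, sim_cost=0.05):
    """Mixed-controller sketch: act on cached (model-free) values by
    default; run mental simulation (model-based) only when a proxy
    Value of Information exceeds its cost.

    cached_values: dict mapping action -> cached value
    uncertainties: dict mapping action -> uncertainty of that value
    simulate: callable action -> refined value (model-based rollout)
    """
    ranked = sorted(cached_values, key=cached_values.get, reverse=True)
    gap = cached_values[ranked[0]] - cached_values[ranked[1]]
    # VoI proxy: simulation is worth it when values are uncertain AND
    # the best two cached actions are hard to tell apart.
    voi = max(uncertainties.values()) - gap
    if voi > sim_cost:
        cached_values = {a: simulate(a) for a in cached_values}
    return max(cached_values, key=cached_values.get)
```

With well-separated, low-uncertainty cached values the controller answers immediately from the cache; with close, uncertain values it pays the simulation cost first, matching the abstract's "simulate only when useful" principle.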
On the development of intention understanding for joint action tasks
Our everyday, common-sense ability to discern the intentions of others from their motions is fundamental for successful cooperation in joint action tasks. In this paper we address, in a modeling study, the question of how the ability to understand complex goal-directed action sequences may develop during learning and practice. The model architecture reflects recent neurophysiological findings that suggest the existence of chains of mirror neurons associated with specific goals. These chains may be activated by external events to simulate the consequences of observed actions. Using the mathematical framework of dynamical neural fields to model the dynamics of different neural populations representing goals, action means, and contextual cues, we show that such chains may develop based on a local, Hebbian learning rule. We validate the functionality of the learned model in a joint action task in which an observer robot infers the intention of a partner to choose a complementary action sequence.
Fundação para a Ciência e a Tecnologia (FCT); European Commission (EC)
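The local Hebbian rule for chain formation can be illustrated with a minimal sketch: connections are strengthened from each population active at time t to the population active at time t+1, so repeatedly experienced sequences wire into a chain. The function name and the discrete-time outer-product form are illustrative assumptions; the paper works in the continuous dynamical-neural-field setting.

```python
import numpy as np

def hebbian_chain_update(W, activity, lr=0.1):
    """One pass of a temporally asymmetric Hebbian rule.

    W: (n, n) weight matrix, W[j, i] = strength from unit i to unit j
    activity: list of (n,) population activity vectors over time
    """
    for pre, post in zip(activity[:-1], activity[1:]):
        # Strengthen links from the currently active population (pre)
        # to the next active population (post).
        W = W + lr * np.outer(post, pre)
    return W
```

Repeating this update over many observations of the same goal-directed sequence accumulates a directed chain of strong forward links, which external events can then reactivate to simulate the remainder of an observed action.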
Neuronal Chains for Actions in the Parietal Lobe: A Computational Model
The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, with the hand and with the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention).
Sentence Processing: Linking Language to Motor Chains
A growing body of evidence in cognitive science and neuroscience points towards the existence of a deep interconnection between cognition, perception and action. According to this embodied perspective, language is grounded in the sensorimotor system and language understanding is based on a mental simulation process (Jeannerod, 2007; Gallese, 2008; Barsalou, 2009). This means that during the comprehension of action words and sentences, the same perception, action, and emotion mechanisms engaged during interaction with objects are recruited. Among the neural underpinnings of this simulation process an important role is played by a sensorimotor matching system known as the mirror neuron system (Rizzolatti and Craighero, 2004). Despite a growing number of studies, the precise dynamics underlying the relation between language and action are not yet well understood. In fact, experimental studies are not always coherent, as some report that language processing interferes with action execution while others find facilitation. In this work we present a detailed neural network model capable of reproducing experimentally observed influences of the processing of action-related sentences on the execution of motor sequences. The proposed model is based on three main points. The first is that the processing of action-related sentences causes the resonance of motor and mirror neurons encoding the corresponding actions. The second is that there exists a varying degree of crosstalk between neuronal populations depending on whether they encode the same motor act, the same effector or the same action-goal. The third is that neuronal populations' internal dynamics, which result from the combination of multiple processes taking place at different time scales, can facilitate or interfere with successive activations of the same or of partially overlapping pools.
The hippocampus and entorhinal cortex encode the path and Euclidean distances to goals during navigation
BACKGROUND
Despite decades of research on spatial memory, we know surprisingly little about how the brain guides navigation to goals. While some models argue that vectors are represented for navigational guidance, other models postulate that the future path is computed. Although the hippocampal formation has been implicated in processing spatial goal information, it remains unclear whether this region processes path- or vector-related information.
RESULTS
We report neuroimaging data collected from subjects navigating London's Soho district; these data reveal that both the path distance and the Euclidean distance to the goal are encoded by the medial temporal lobe during navigation. While activity in the posterior hippocampus was sensitive to the distance along the path, activity in the entorhinal cortex was correlated with the Euclidean distance component of a vector to the goal. During travel periods, posterior hippocampal activity increased as the path to the goal became longer, but at decision points, activity in this region increased as the path to the goal became closer and more direct. Importantly, sensitivity to the distance was abolished in these brain areas when travel was guided by external cues.
CONCLUSIONS
The results indicate that the hippocampal formation contains representations of both the Euclidean distance and the path distance to goals during navigation. These findings argue that the hippocampal formation houses a flexible guidance system that changes how it represents distance to the goal depending on the fluctuating demands of navigation.
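The distinction between the two distance measures studied above can be made concrete with a small sketch: on a grid with obstacles, the Euclidean distance is the straight-line vector length to the goal, while the path distance is the length of the shortest walkable route. The grid representation and function names below are illustrative assumptions, not the regressors used in the study.

```python
import math
from collections import deque

def euclidean_distance(a, b):
    """Straight-line (vector) distance between two (row, col) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def path_distance(grid, start, goal):
    """Shortest walkable path length via breadth-first search on a
    4-connected grid (0 = open, 1 = wall); None if unreachable."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (x, y), d = queue.popleft()
        if (x, y) == goal:
            return d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(((nx, ny), d + 1))
    return None
```

In a street network like Soho's, a wall between start and goal makes the two measures diverge sharply, which is what lets the study dissociate entorhinal (Euclidean) from posterior hippocampal (path) distance coding.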