9 research outputs found
How the cerebellum may monitor sensory information for spatial representation
The cerebellum has already been shown to participate in navigation. We propose here that this structure helps maintain a sense of direction and location during self-motion by monitoring sensory information and interacting with navigation circuits to update the mental representation of space. To better understand the processing performed by the cerebellum in navigation, we review: the anatomical pathways that convey self-motion information to the cerebellum; the computational algorithm(s) the cerebellum is thought to perform on these multi-source inputs; the cerebellar outputs directed toward navigation circuits; and the influence of self-motion information on space-modulated cells receiving cerebellar outputs. This review highlights that the cerebellum is adequately wired to combine the diverse sensory signals to be monitored during self-motion and to fuel the navigation circuits. The direct anatomical projections of the cerebellum toward the head-direction cell system and the parietal cortex make those structures possible relays of the cerebellum's influence on the hippocampal spatial map. We describe computational models of cerebellar function showing that the cerebellum can filter out the predictable components of sensory signals and provide a novelty output. We finally speculate that this novelty output is taken into account by the navigation structures, which update the position estimate over time and stabilize perception during navigation.
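The filtering scheme described in this abstract, subtracting the predictable component of a sensory signal to leave a novelty output, can be sketched minimally. The gain-based forward model, the function names, and the numbers below are illustrative assumptions, not the reviewed models.

```python
import numpy as np

def novelty_output(sensory, efference_copy, forward_model):
    """Subtract the forward model's prediction of the sensory
    consequences of self-motion; only the unpredicted (novel)
    component is passed on to downstream circuits."""
    predicted = forward_model(efference_copy)
    return sensory - predicted

# Toy forward model (an assumption): reafference is twice the command.
forward_model = lambda u: 2.0 * u
sensory = np.array([2.0, 4.1, 1.9])   # observed self-motion signal
command = np.array([1.0, 2.0, 1.0])   # efference copy of motor command
novelty = novelty_output(sensory, command, forward_model)
# novelty is approximately [0.0, 0.1, -0.1]: only mismatches survive
```

Under this reading, a perfectly predicted signal yields zero novelty, and only surprising sensory components would reach the navigation circuits.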
Belief state representation in the dopamine system
Learning to predict future outcomes is critical for driving appropriate behaviors. Reinforcement learning (RL) models have successfully accounted for such learning, relying on reward prediction errors (RPEs) signaled by midbrain dopamine neurons. It has been proposed that when sensory data provide only ambiguous information about which state an animal is in, it can predict reward based on a set of probabilities assigned to hypothetical states (called the belief state). Here we examine how dopamine RPEs and subsequent learning are regulated under state uncertainty. Mice are first trained in a task with two potential states defined by different reward amounts. During testing, intermediate-sized rewards are given in rare trials. Dopamine activity is a non-monotonic function of reward size, consistent with RL models operating on belief states. Furthermore, the magnitude of dopamine responses quantitatively predicts changes in behavior. These results establish the critical role of state inference in RL.
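The belief-state prediction described above can be illustrated with a minimal sketch: reward is predicted as the belief-weighted average over hypothetical hidden states, and the RPE is the deviation from that average. The state values and the uniform belief below are hypothetical choices for illustration, not the paper's fitted parameters.

```python
def belief_rpe(reward, beliefs, state_rewards):
    """RPE when the predicted reward is a belief-weighted average
    over hypothetical hidden states (the belief state)."""
    expected = sum(b * r for b, r in zip(beliefs, state_rewards))
    return reward - expected

# Two hidden states paying 2 or 8 units, believed equally likely;
# an intermediate 4-unit reward then yields a negative RPE.
rpe = belief_rpe(4.0, beliefs=[0.5, 0.5], state_rewards=[2.0, 8.0])
# rpe = -1.0
```

In the full task, an intermediate reward would also shift the belief itself, which is one route to the non-monotonic dopamine responses the abstract reports.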
Opposite initialization to novel cues in dopamine signaling in ventral and posterior striatum in mice
Dopamine neurons are thought to encode novelty in addition to reward prediction error (the discrepancy between actual and predicted values). In this study, we compared dopamine activity across the striatum using fiber fluorometry in mice. During classical conditioning, we observed opposite dynamics in dopamine axon signals in the ventral striatum (‘VS dopamine’) and the posterior tail of the striatum (‘TS dopamine’). TS dopamine showed strong excitation to novel cues, whereas VS dopamine showed no responses to novel cues until they had been paired with a reward. TS dopamine cue responses decreased over time, depending on what the cue predicted. Additionally, TS dopamine showed excitation to several types of stimuli including rewarding, aversive, and neutral stimuli whereas VS dopamine showed excitation only to reward or reward-predicting cues. Together, these results demonstrate that dopamine novelty signals are localized in TS along with general salience signals, while VS dopamine reliably encodes reward prediction error. DOI: http://dx.doi.org/10.7554/eLife.21886.00
Dopamine reward prediction errors reflect hidden state inference across time
Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a 'belief state'). In this work, we asked whether dopaminergic signaling supports a TD learning framework that operates over hidden states. We found that dopamine signaling exhibited a striking difference between two tasks that differed only with respect to whether reward was delivered deterministically. Our results favor an associative learning rule that combines cached values with hidden state inference.
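A hedged illustration of TD learning over a belief state (a linear value function with the belief vector as features; this is a textbook-style sketch, not the study's analysis code):

```python
def td_belief_update(w, belief, reward, next_belief, alpha=0.1, gamma=0.98):
    """One TD(0) step where value is a belief-weighted sum of per-state
    weights. The TD error (delta) is the dopamine-like RPE, and learning
    is apportioned to states in proportion to how strongly they are
    believed to be occupied."""
    v = sum(wi * bi for wi, bi in zip(w, belief))
    v_next = sum(wi * bi for wi, bi in zip(w, next_belief))
    delta = reward + gamma * v_next - v           # RPE
    for i, bi in enumerate(belief):
        w[i] += alpha * delta * bi                # credit by belief
    return delta

# Ambiguous cue: equal belief over two hidden states, then reward arrives.
w = [0.0, 0.0]
delta = td_belief_update(w, belief=[0.5, 0.5], reward=1.0,
                         next_belief=[0.0, 0.0])  # terminal: no next value
# delta = 1.0; both state weights move by alpha * delta * 0.5 = 0.05
```

The classical scheme the abstract contrasts with is the special case where `belief` is a one-hot vector over observable time-tracking features.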
A hippocampo-cerebellar centred network for the learning and execution of sequence-based navigation
How do we translate self-motion into goal-directed actions? Here we investigate the cognitive architecture underlying self-motion processing during exploration and goal-directed behaviour. The task, performed in an environment with limited and ambiguous external landmarks, constrained mice to use self-motion-based information for sequence-based navigation. The post-behavioural analysis combined brain-network characterization based on c-Fos imaging and graph-theory analysis with computational modelling of the learning process. The study revealed a widespread network centred on the cerebral cortex and basal ganglia during the exploration phase, while a network dominated by hippocampal and cerebellar activity appeared to sustain sequence-based navigation. The learning process could be modelled by an algorithm combining memory of past actions with model-free reinforcement learning, whose parameters pointed toward a central role of hippocampal and cerebellar structures in learning to translate self-motion into a sequence of goal-directed actions.