
    Dopamine restores reward prediction errors in old age

    Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. Using functional magnetic resonance imaging in humans, we found that healthy older adults had an abnormal signature of expected value, resulting in an incomplete reward prediction error (RPE) signal in the nucleus accumbens, a brain region that receives rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum, measured by diffusion tensor imaging, was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to the level of young adults. This drug effect was linked to restoration of a canonical neural RPE. Our results identify a neurochemical signature underlying abnormal reward processing in older adults and indicate that this can be modulated by L-DOPA.

    A selective role for neuronal activity regulated pentraxin in the processing of sensory-specific incentive value

    Neuronal activity regulated pentraxin (Narp) is a secreted neuronal product that clusters AMPA receptors and regulates excitatory synaptogenesis. Although Narp is selectively enriched in brain, its role in behavior is not known. As Narp is expressed prominently in limbic regions, we examined whether Narp deletion affects performance on tasks used to assess the motivational consequences of food-rewarded learning. Narp knock-out (KO) mice were unimpaired in learning simple pavlovian discriminations and instrumental lever pressing, and in the acquisition of at least two aspects of pavlovian incentive learning: conditioned reinforcement and pavlovian-instrumental transfer. In contrast, Narp deletion resulted in a substantial deficit in the ability to use specific outcome expectancies to modulate instrumental performance in a devaluation task. In this task, mice were trained to respond on two levers for two different rewards. After training, mice were prefed with one of the two rewards, devaluing it. Responding on both levers was then assessed in extinction. Whereas control mice showed a significant preference for responding on the lever associated with the nondevalued reward, Narp KO mice responded equally on both levers, failing to suppress responding on the lever associated with the devalued reward. Both groups consumed more of the nondevalued reward in a subsequent choice test, indicating that Narp KO mice could distinguish between the rewards themselves. These data suggest that Narp has a selective role in processing the sensory-specific information necessary for appropriate devaluation performance, but not in the general motivational effects of reward-predictive cues on performance.

    Corticolimbic catecholamines in stress: A computational model of the appraisal of controllability

    Appraisal of a stressful situation and of the possibility of controlling or avoiding it is thought to involve frontal-cortical mechanisms. The precise mechanism underlying this appraisal and its translation into effective stress coping (the regulation of physiological and behavioural responses) are poorly understood. Here, we propose a computational model which involves tuning motivational arousal to the appraised stressing condition. The model provides a causal explanation of the shift from active to passive coping strategies, i.e. from a condition characterised by high motivational arousal, required to deal with a situation appraised as stressful, to a condition characterised by emotional and motivational withdrawal, required when the stressful situation is appraised as uncontrollable/unavoidable. The model is motivated by results acquired via microdialysis recordings in rats and highlights the presence of two competing circuits dominated by different areas of the ventromedial prefrontal cortex: these are shown to have opposite effects on several subcortical areas, affecting dopamine outflow in the striatum and therefore controlling motivation. We start by reviewing published data supporting the structure and functioning of the neural model and then present the computational model itself with its essential neural mechanisms. Finally, we show the results of a new experiment, involving a condition of repeated inescapable stress, which validate most of the model's predictions.

    Overlapping Prediction Errors in Dorsal Striatum During Instrumental Learning With Juice and Money Reward in the Human Brain

    Prediction error signals have been reported in human imaging studies in target areas of dopamine neurons, such as the ventral and dorsal striatum, during learning with many different types of reinforcers. However, a key question that has yet to be addressed is whether prediction error signals recruit distinct or overlapping regions of the striatum and elsewhere during learning with different types of reward. To address this, we scanned 17 healthy subjects with functional magnetic resonance imaging while they chose actions to obtain either a pleasant juice reward (1 ml apple juice) or a monetary gain (5 cents), and applied a computational reinforcement learning model to the subjects' behavioral and imaging data. Evidence for an overlapping prediction error signal during learning with juice and money rewards was found in a region of dorsal striatum (caudate nucleus), while prediction error signals in a subregion of ventral striatum were significantly stronger during learning with money but not juice reward. These results provide evidence for partially overlapping reward prediction signals for different types of appetitive reinforcers within the striatum, a finding with important implications for understanding the nature of associative encoding in the striatum as a function of reinforcer type.
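    The study above fits a computational reinforcement learning model to subjects' choices and then uses the resulting trial-by-trial prediction errors as parametric fMRI regressors. A minimal sketch of that computation, in the spirit of a Rescorla-Wagner update (the function name and the learning-rate value are illustrative assumptions, not taken from the paper):

```python
# Minimal Rescorla-Wagner-style model of the kind fit to choice data
# in studies like this one. The learning rate alpha is an illustrative
# assumption, not a value reported in the paper.

def simulate_prediction_errors(choices, rewards, n_actions=2, alpha=0.2):
    """Return trial-by-trial reward prediction errors (RPEs)
    delta_t = r_t - Q(a_t), plus the final learned action values,
    updating Q(a_t) <- Q(a_t) + alpha * delta_t after each trial."""
    q = [0.0] * n_actions            # learned value of each action
    deltas = []                      # one RPE per trial
    for a, r in zip(choices, rewards):
        delta = r - q[a]             # outcome minus expectation
        q[a] += alpha * delta        # move expectation toward outcome
        deltas.append(delta)
    return deltas, q
```

    In analyses like the one described above, a sequence such as `deltas` is convolved with a hemodynamic response function and regressed against the BOLD signal to localize RPE-correlated activity.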

    Neural Prediction Errors Reveal a Risk-Sensitive Reinforcement-Learning Process in the Human Brain

    Humans and animals are exquisitely, though idiosyncratically, sensitive to risk or variance in the outcomes of their actions. Economic, psychological, and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional models of such reinforcement learning focus on learning about the mean reward value of cues and ignore higher order moments such as variance. We used fMRI to test whether the neural correlates of human reinforcement learning are sensitive to experienced risk. Our analysis focused on anatomically delineated regions of a priori interest in the nucleus accumbens, where blood oxygenation level-dependent (BOLD) signals have been suggested as correlating with quantities derived from reinforcement learning. We first provide unbiased evidence that the raw BOLD signal in these regions corresponds closely to a reward prediction error. We then derive from this signal the learned values of cues that predict rewards of equal mean but different variance and show that these values are indeed modulated by experienced risk. Moreover, a close neurometric–psychometric coupling exists between the fluctuations of the experience-based evaluations of risky options that we measured neurally and the fluctuations in behavioral risk aversion. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice, neuroscientific models of affective learning, and the workings of the underlying neural mechanisms.
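    One common way to obtain the risk sensitivity described above is to weight positive and negative prediction errors by different learning rates, so that cues of equal mean but different variance acquire different learned values. The following is only a sketch of that general idea, not the paper's exact model; names and parameter values are hypothetical:

```python
# Risk-sensitive value learning via asymmetric learning rates.
# alpha_neg > alpha_pos yields risk aversion: a high-variance cue is
# penalized more for its large negative prediction errors than it is
# credited for its large positive ones. All values are illustrative,
# not taken from the paper.

def risk_sensitive_values(cues, rewards, n_cues=2,
                          alpha_pos=0.1, alpha_neg=0.3):
    """Return learned cue values after asymmetric updating."""
    v = [0.0] * n_cues
    for c, r in zip(cues, rewards):
        delta = r - v[c]                      # prediction error
        alpha = alpha_pos if delta >= 0 else alpha_neg
        v[c] += alpha * delta
    return v
```

    For example, a safe cue that always pays 0.5 converges to a value near 0.5, while a risky cue paying 0 or 1 equiprobably settles well below 0.5, mirroring behaviorally measured risk aversion.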

    The Neural Mechanisms Underlying the Influence of Pavlovian Cues on Human Decision Making

    In outcome-specific transfer, pavlovian cues that are predictive of specific outcomes bias action choice toward actions associated with those outcomes. This transfer occurs despite no explicit training of the instrumental actions in the presence of pavlovian cues. The neural substrates of this effect in humans are unknown. To address this, we scanned 23 human subjects with functional magnetic resonance imaging while they made choices between different liquid food rewards in the presence of pavlovian cues previously associated with one of these outcomes. We found behavioral evidence of outcome-specific transfer effects in our subjects, as well as differential blood oxygenation level-dependent activity in a region of ventrolateral putamen when subjects chose, respectively, actions consistent and inconsistent with the pavlovian-predicted outcome. Our results suggest that choosing an action incompatible with a pavlovian-predicted outcome might require the inhibition of feasible but nonselected action–outcome associations. The results of this study are relevant for understanding how marketing actions can affect consumer choice behavior as well as how environmental cues can influence drug-seeking behavior in addiction.

    Selective disruption of stimulus-reward learning in glutamate receptor gria1 knockout mice

    Glutamatergic neurotransmission via AMPA receptors has been an important focus of studies investigating neuronal plasticity. AMPA receptor glutamate receptor 1 (GluR1) subunits play a critical role in long-term potentiation (LTP). Because LTP is thought to be the cellular substrate for learning, we investigated whether mice lacking the GluR1 subunit [gria1 knock-outs (KO)] were capable of learning a simple cue-reward association, and whether such cues were able to influence motivated behavior. Both gria1 KO and wild-type mice learned to associate a light/tone stimulus with food delivery, as evidenced by their approaching the reward after presentation of the cue. During subsequent testing phases, gria1 KO mice also displayed normal approach to the cue in the absence of the reward (Pavlovian approach) and normal enhanced responding for the reward during cue presentations (Pavlovian to instrumental transfer). However, the cue did not act as a reward for learning a new behavior in the KO mice (conditioned reinforcement). This pattern of behavior is similar to that seen with lesions of the basolateral nucleus of the amygdala (BLA), and correspondingly, gria1 KO mice displayed impaired acquisition of responding under a second-order schedule. Thus, mice lacking the GluR1 receptor displayed a specific deficit in conditioned reward, suggesting that GluR1-containing AMPA receptors are important in the synaptic plasticity in the BLA that underlies conditioned reinforcement. Immunostaining for GluR2/3 subunits revealed changes in GluR2/3 expression in the gria1 KOs in the BLA but not the central nucleus of the amygdala (CA), consistent with the behavioral correlates of BLA but not CA function.

    DNMT3a in the hippocampal CA1 is crucial in the acquisition of morphine self‐administration in rats

    Drug‐reinforced excessive operant responding is one fundamental feature of long-lasting addiction‐like behaviors and relapse in animals. However, the transcriptional regulatory mechanisms responsible for this persistent drug‐specific (but not natural-reward) operant behavior are not entirely clear. In this study, we demonstrate a key role for one of the de novo DNA methyltransferases, DNMT3a, in the acquisition of morphine self‐administration (SA) in rats. The expression of DNMT3a in the hippocampal CA1 region, but not in the nucleus accumbens shell, was significantly up‐regulated after 1‐ and 7‐day morphine SA (0.3 mg/kg/infusion) but not after yoked morphine injection. On the other hand, saccharin SA did not affect the expression of DNMT3a or DNMT3b. The DNMT inhibitor 5‐aza‐2′‐deoxycytidine (5‐aza), microinjected into the hippocampal CA1, significantly attenuated the acquisition of morphine SA. Knockdown of DNMT3a also impaired the acquisition of morphine SA. Overall, these findings suggest that DNMT3a in the hippocampus plays an important role in the acquisition of morphine SA and may be a valid target for preventing the development of morphine addiction. Includes Supplemental information.