172 research outputs found

    Contextual novelty changes reward representations in the striatum

    Reward representation in ventral striatum is boosted by perceptual novelty, although the mechanism of this effect remains elusive. Animal studies indicate a functional loop (Lisman and Grace, 2005) that includes hippocampus, ventral striatum, and midbrain as being important in regulating salience attribution within the context of novel stimuli. According to this model, reward responses in ventral striatum or midbrain should be enhanced in the context of novelty even if reward and novelty constitute unrelated, independent events. Using fMRI, we show that trials with reward-predictive cues and subsequent outcomes elicit higher responses in the striatum if preceded by an unrelated novel picture, indicating that reward representation is enhanced in the context of novelty. Notably, this effect was observed solely when reward occurrence, and hence reward-related salience, was low. These findings support a view that contextual novelty enhances neural responses underlying reward representation in the striatum and concur with the effects of novelty processing as predicted by the model of Lisman and Grace (2005).

    Contextual novelty modulates the neural dynamics of reward anticipation

    We investigated how rapidly the reward-predicting properties of visual cues are signaled in the human brain and the extent to which these reward prediction signals are contextually modifiable. In a magnetoencephalography study, we presented participants with fractal visual cues that predicted monetary rewards with different probabilities. These cues were presented in the temporal context of a preceding novel or familiar image of a natural scene. Starting at ~100 ms after cue onset, reward probability was signaled in the event-related fields (ERFs) over temporo-occipital sensors and in the power of theta (5-8 Hz) and beta (20-30 Hz) band oscillations over frontal sensors. While theta power decreased with reward probability, beta power showed the opposite effect. Thus, in humans anticipatory reward responses are generated rapidly, within 100 ms after the onset of reward-predicting cues, which is similar to the timing established in non-human primates. Contextual novelty enhanced the reward anticipation responses in both ERFs and in beta oscillations starting at ~100 ms after cue onset. This very early context effect is compatible with a physiological model that invokes the mediation of a hippocampal-VTA loop, according to which novelty modulates neural response properties within the reward circuitry. We conclude that the neural processing of cues that predict future rewards is temporally highly efficient and contextually modifiable.

    Dopamine restores reward prediction errors in old age.

    Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. Using functional magnetic resonance imaging in humans, we found that healthy older adults had an abnormal signature of expected value, resulting in an incomplete reward prediction error (RPE) signal in the nucleus accumbens, a brain region that receives rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum, measured by diffusion tensor imaging, was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to the level of young adults. This drug effect was linked to restoration of a canonical neural RPE. Our results identify a neurochemical signature underlying abnormal reward processing in older adults and indicate that this can be modulated by L-DOPA.
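The reward prediction error and learning rate invoked in this abstract follow the standard Rescorla-Wagner/temporal-difference form: the error is the difference between received and expected reward, and a learning rate scales how strongly each error revises the expectation. The sketch below illustrates only that textbook update rule; the function name and all parameter values are illustrative, not taken from the study's fitted model.

```python
def update_value(value, reward, alpha=0.1):
    """One Rescorla-Wagner update. The RPE is the difference between the
    received reward and the current expected value; the learning rate
    alpha scales how strongly the RPE revises that expectation."""
    rpe = reward - value
    return value + alpha * rpe, rpe

# Expected value drifts toward the mean reward over repeated trials.
value = 0.0
for reward in [1.0, 1.0, 0.0, 1.0]:
    value, rpe = update_value(value, reward)
```

On this picture, a reduced learning rate slows convergence of the expected-value signal, and an "incomplete" RPE corresponds to an error term not fully referenced to expected value; the study's L-DOPA effect is the restoration of a canonical RPE of this general kind, not this particular parameterization.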

    Structural integrity of the substantia nigra and subthalamic nucleus predicts flexibility of instrumental learning in older-age individuals

    Flexible instrumental learning is required to harness the appropriate behaviors to obtain rewards and to avoid punishments. The precise contribution of dopaminergic midbrain regions (substantia nigra/ventral tegmental area [SN/VTA]) to this form of behavioral adaptation remains unclear. Normal aging is associated with a variable loss of dopamine neurons in the SN/VTA. We therefore tested the relationship between flexible instrumental learning and midbrain structural integrity. We compared task performance on a probabilistic monetary go/no-go task, involving trial-and-error learning of "go to win," "no-go to win," "go to avoid losing," and "no-go to avoid losing," in 42 healthy older adults to previous behavioral data from 47 younger adults. Quantitative structural magnetization transfer images were obtained to index regional structural integrity. On average, both younger and older participants demonstrated a behavioral asymmetry whereby they were better at learning to act for reward ("go to win" > "no-go to win"), but better at learning not to act to avoid punishment ("no-go to avoid losing" > "go to avoid losing"). Older, but not younger, participants with greater structural integrity of the SN/VTA and the adjacent subthalamic nucleus could overcome this asymmetry. We show that interindividual variability among healthy older adults in the structural integrity of the SN/VTA and subthalamic nucleus relates to effective acquisition of competing instrumental responses.

    Synchronization of medial temporal lobe and prefrontal rhythms in human decision-making

    Optimal decision making requires that we integrate mnemonic information regarding previous decisions with value signals that entail likely rewards and punishments. The fact that memory and value signals appear to be coded by segregated brain regions, the hippocampus in the case of memory and sectors of prefrontal cortex in the case of value, raises the question of how they are integrated during human decision making. Using magnetoencephalography to study healthy human participants, we show increased theta oscillations over frontal and temporal sensors during nonspatial decisions based on memories from previous trials. Using source reconstruction, we found that the medial temporal lobe (MTL), in a location compatible with the anterior hippocampus, and the anterior cingulate cortex in the medial wall of the frontal lobe are the sources of this increased theta power. Moreover, we observed a correlation between theta power in the MTL source and behavioral performance in decision making, supporting a role for MTL theta oscillations in decision-making performance. These MTL theta oscillations were synchronized with several prefrontal sources, including lateral superior frontal gyrus, dorsal anterior cingulate gyrus, and medial frontopolar cortex. There was no relationship between the strength of synchronization and the expected value of choices. Our results indicate that mnemonic guidance of human decision making, beyond anticipation of expected reward, is supported by hippocampal–prefrontal theta synchronization.

    Arbitration between controlled and impulsive choices.

    The impulse to act for immediate reward often conflicts with more deliberate evaluations that support long-term benefit. The neural architecture that negotiates this conflict remains unclear. One account proposes a single neural circuit that evaluates both immediate and delayed outcomes, while another outlines separate impulsive and patient systems that compete for behavioral control. Here we designed a task in which a complex payout structure divorces the immediate value of acting from the overall long-term value, within the same outcome modality. Using model-based fMRI in humans, we demonstrate separate neural representations of immediate and long-term values, with the former tracked in the anterior caudate (AC) and the latter in the ventromedial prefrontal cortex (vmPFC). Crucially, when subjects' choices were compatible with long-run consequences, value signals in AC were down-weighted and those in vmPFC were enhanced, while the opposite occurred when choice was impulsive. Thus, our data implicate a trade-off in value representation between AC and vmPFC as underlying controlled versus impulsive choice.

    Manipulating the contribution of approach-avoidance to the perturbation of economic choice by valence.

    Economic choices are strongly influenced by whether potential outcomes entail gains or losses. We examined this influence of outcome valence in an economic risk task. We employed three experiments based on our task, each of which provided novel findings, and which together better characterize and explain how outcome valence influences risky choice. First, we found that valence perturbed an individual's choices around that individual's base-level of risk-taking, a base-level consistent across time and context. Second, this perturbation by valence was highly context dependent, emerging when valence was introduced as a dimension within a decision-making setting, and being reversed by a change in task format (causing more gambling for gains than losses, and the reverse). Third, we show this perturbation by valence is explicable by low-level approach-avoidance processes, a hypothesis not previously tested by a causal manipulation. We revealed such an effect, where individuals were less disposed to choose a riskier option with losses when they had to approach (go) as opposed to avoid (no-go) that option. Our data show valence perturbs an individual's choices independently of the impact of risk, and causally implicate approach-avoidance processes as important in shaping economic choice.

    Dopamine, Salience, and Response Set Shifting in Prefrontal Cortex.

    Dopamine is implicated in multiple functions, including motor execution, action learning for hedonically salient outcomes, and the maintenance and switching of behavioral response set. Here, we used a novel within-subject psychopharmacological and combined functional neuroimaging paradigm, investigating the interaction between hedonic salience, dopamine, and response set shifting, distinct from effects on action learning or motor execution. We asked whether behavioral performance in response set shifting depends on the hedonic salience of reversal cues, by presenting these as null (neutral) or salient (monetary loss) outcomes. We observed marked effects of reversal cue salience on set-switching, with more efficient reversals following salient loss outcomes. L-DOPA degraded this discrimination, leading to inappropriate perseveration. Generic activation in thalamus, insula, and striatum preceded response set switches, with an opposite pattern in ventromedial prefrontal cortex (vmPFC). However, the behavioral effect of hedonic salience was reflected in differential vmPFC deactivation following salient relative to null reversal cues. L-DOPA reversed this pattern in vmPFC, suggesting that its behavioral effects are due to disruption of the stability and switching of firing patterns in prefrontal cortex. Our findings provide a potential neurobiological explanation for paradoxical phenomena, including maintenance of behavioral set despite negative outcomes, seen in impulse control disorders in Parkinson's disease.

    Action Dominates Valence in Anticipatory Representations in the Human Striatum and Dopaminergic Midbrain

    The acquisition of reward and the avoidance of punishment could logically be contingent on either emitting or withholding particular actions. However, the separate pathways in the striatum for go and no-go appear to violate this independence, instead coupling affect and effect. Respect for this interdependence has biased many studies of reward and punishment, so potential action-outcome valence interactions during anticipatory phases remain unexplored. In a functional magnetic resonance imaging study with healthy human volunteers, we manipulated subjects' requirement to emit or withhold an action independent from subsequent receipt of reward or avoidance of punishment. During anticipation, in the striatum and a lateral region within the substantia nigra/ventral tegmental area (SN/VTA), action representations dominated over valence representations. Moreover, we did not observe any representation associated with different state values through accumulation of outcomes, challenging a conventional and dominant association between these areas and state value representations. In contrast, a more medial sector of the SN/VTA responded preferentially to valence, with opposite signs depending on whether action was anticipated to be emitted or withheld. This dominant influence of action requires an enriched notion of opponency between reward and punishment.

    The Role of the Striatum in Learning to Orthogonalize Action and Valence: A Combined PET and 7 T MRI Aging Study

    Pavlovian biases influence instrumental learning by coupling reward seeking with action invigoration and punishment avoidance with action suppression. Using a probabilistic go/no-go task designed to orthogonalize action (go/no-go) and valence (reward/punishment), recent studies have shown that the interaction between the two is dependent on the striatum and its key neuromodulator dopamine. Using this task, we sought to identify how structural and neuromodulatory age-related differences in the striatum may influence Pavlovian biases and instrumental learning in 25 young and 31 older adults. Computational modeling revealed a significant age-related reduction in reward and punishment sensitivity and marked (albeit not significant) reduction in learning rate and lapse rate (irreducible noise). Voxel-based morphometry analysis using 7 Tesla MRI images showed that individual differences in learning rate in older adults were related to the volume of the caudate nucleus. In contrast, dopamine synthesis capacity in the dorsal striatum, assessed using [18F]-DOPA positron emission tomography in 22 of these older adults, was not associated with learning performance and did not moderate the relationship between caudate volume and learning rate. This multiparametric approach suggests that age-related differences in striatal volume may influence learning proficiency in old age.
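Models of this task class typically combine instrumental action values with a Pavlovian bias term that couples stimulus valence to action, plus a lapse rate for irreducible noise. The sketch below shows only that generic structure; the function, parameter names, and values are illustrative assumptions, not the study's fitted model.

```python
import math

def p_go(q_go, q_nogo, stim_value, pav_bias=0.5, lapse=0.05):
    """Probability of emitting 'go'. The Pavlovian bias couples stimulus
    value to action: reward-predictive stimuli (positive value) push
    toward go, punishment-predictive stimuli (negative value) toward
    no-go. The lapse rate mixes in random responding."""
    w_go = q_go + pav_bias * stim_value            # biased action weight
    p = 1.0 / (1.0 + math.exp(-(w_go - q_nogo)))   # softmax over 2 actions
    return (1.0 - lapse) * p + lapse * 0.5

# With equal instrumental values, the bias alone favors "go" for a
# reward cue and "no-go" for a punishment cue.
p_reward_cue = p_go(0.0, 0.0, +1.0)
p_punish_cue = p_go(0.0, 0.0, -1.0)
```

In fits of such models, the sensitivity parameters scale rewards and punishments before they enter the action values, and the lapse rate is the quantity the abstract labels "irreducible noise"; the age-related reductions reported above correspond to changes in those fitted parameters.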