
    Pupil responses to pitch deviants reflect predictability of melodic sequences

    Humans automatically detect events that, in deviating from their expectations, may signal prediction failure and a need to reorient behaviour. The pupil dilation response (PDR) to violations has been associated with subcortical signals of arousal and prediction resetting. However, it is unclear how the context in which a deviant occurs affects the size of the PDR. Using ecological musical stimuli that we characterised using a computational model, we showed that the PDR to pitch deviants is sensitive to contextual uncertainty (quantified as entropy), whereby the PDR was greater in low- than in high-entropy contexts. The PDR was also positively correlated with the unexpectedness of notes. No effects of music expertise were found, suggesting a ceiling effect due to enculturation. These results show that the same sudden environmental change can lead to differing arousal levels depending on contextual factors, providing evidence for a sensitivity of the PDR to long-term context.
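    The abstract quantifies contextual uncertainty as entropy. As an illustrative stand-in for the paper's computational model, a minimal sketch of this idea is the Shannon entropy of the empirical note distribution in a melodic context (the function name and toy note sequences below are assumptions, not the authors' code):

    ```python
    import math
    from collections import Counter

    def shannon_entropy(notes):
        """Shannon entropy (in bits) of the empirical note distribution."""
        counts = Counter(notes)
        n = len(notes)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Toy contexts: a repetitive (low-entropy) vs. a varied (high-entropy) melody
    low_context = ["C4", "C4", "G4", "C4", "C4", "G4", "C4", "C4"]
    high_context = ["C4", "E4", "G4", "B4", "D5", "F4", "A4", "C5"]
    assert shannon_entropy(low_context) < shannon_entropy(high_context)
    ```

    On this toy measure, a context dominated by a few recurring notes yields low entropy (a predictable context), while a context in which many notes are equally likely yields high entropy.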

    The value of confidence: Confidence prediction errors drive value-based learning in the absence of external feedback

    Reinforcement learning algorithms have a long-standing success story in explaining the dynamics of instrumental conditioning in humans and other species. While normative reinforcement learning models are critically dependent on external feedback, recent findings in the field of perceptual learning point to a crucial role of internally generated reinforcement signals based on subjective confidence when external feedback is not available. Here, we investigated the existence of such confidence-based learning signals in a key domain of reinforcement-based learning: instrumental conditioning. We conducted a value-based decision-making experiment which included phases with and without external feedback and in which participants reported their confidence in addition to their choices. Behaviorally, we found signatures of self-reinforcement in phases without feedback, reflected in an increase of subjective confidence and choice consistency. To clarify the mechanistic role of confidence in value-based learning, we compared a family of confidence-based learning models with more standard models predicting either no change in value estimates or a devaluation over time when no external reward is provided. We found that confidence-based models indeed outperformed these reference models, whereby the learning signal of the winning model was based on the prediction error between current confidence and a stimulus-unspecific average of previous confidence levels. Interestingly, individuals with more volatile reward-based value updates in the presence of feedback also showed more volatile confidence-based value updates when feedback was not available. Together, our results provide evidence that confidence-based learning signals affect instrumentally learned subjective values in the absence of external feedback.
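    A minimal sketch of the kind of update the winning model implies: expected confidence is tracked with a Rescorla-Wagner-style rule, and the confidence prediction error drives value updates in the absence of feedback. The learning rates and function names here are illustrative assumptions, not the authors' implementation:

    ```python
    def update_expected_confidence(expected, observed, alpha=0.2):
        """Rescorla-Wagner-style update of a stimulus-unspecific expected
        confidence; returns the new expectation and the confidence
        prediction error (CPE)."""
        cpe = observed - expected  # confidence prediction error
        return expected + alpha * cpe, cpe

    def update_value(value, cpe, alpha_v=0.2):
        """Self-reinforcement: adjust the chosen stimulus's value by the CPE."""
        return value + alpha_v * cpe

    # Feedback-free phase: confidence above expectation acts like a reward signal
    expected_conf, value = 0.5, 0.0
    for observed in [0.6, 0.7, 0.8]:  # gradually rising confidence reports
        expected_conf, cpe = update_expected_confidence(expected_conf, observed)
        value = update_value(value, cpe)
    # value increases without any external feedback being delivered
    ```

    Because the expectation is stimulus-unspecific, confidence that exceeds the running average of past confidence yields a positive prediction error and strengthens the chosen option's value, mirroring the behavioral signature of self-reinforcement described above.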

    Effect of value on rating changes (post-phase-2 minus pre-phase-2) as a function of phase 2 duration.

    Regression coefficient for the effect of value on rating changes across varying durations of phase 2. (TIF)

    Model recovery (2): probability that a dataset best fitted by model <i>fit</i> was generated by model <i>gen</i>.

    Rows represent the datasets in which the given model was best-fitting, and each column within a row indicates the probability that the datasets were generated by a particular model. Note that the order of models is the same along both axes, but labels were omitted on the x-axis due to space constraints. (TIF)

    Model evidence and N of free parameters.

    Average Bayesian information criterion with s.e.m. across participants for all computational models considered, ordered by model fit. The number of parameters is displayed in parentheses. In line with the Akaike information criterion (see Fig 4 in the manuscript), ConfUnspec is the winning model. (TIF)
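    For reference, the two model-comparison criteria named in this caption are standard formulas; a generic sketch (argument names are mine, not tied to the paper's code):

    ```python
    import math

    def bic(log_likelihood, n_params, n_obs):
        """Bayesian information criterion: k*ln(n) - 2*ln(L-hat).
        Lower values indicate a better fit after penalizing complexity."""
        return n_params * math.log(n_obs) - 2.0 * log_likelihood

    def aic(log_likelihood, n_params):
        """Akaike information criterion: 2k - 2*ln(L-hat); also lower-is-better."""
        return 2.0 * n_params - 2.0 * log_likelihood
    ```

    At the same log-likelihood, BIC penalizes each extra free parameter by ln(n) rather than 2, so with more than about e² ≈ 7.4 observations BIC favors simpler models more strongly than AIC does.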

    Changes in choice consistency and subjective value ratings in phase 2.

    (A) Choice consistency between the first and second choice (blue), and between the second and third choice (orange), for identical CS pairs in phase 2. (B) Subjective value ratings. Depicted are the changes in subjective value ratings (post-phase-2 minus pre-phase-2), separately for each of the four CS value levels within a block.

    Performance and confidence.

    Block-averaged time courses are separated according to the duration of phase 1 (9–18 trials) and aligned to the beginning of phase 2. Shaded areas indicate the standard error of the mean. (A) Value-based learning. Choice accuracy gradually increased across the phases with feedback (phases 1 and 3), indicating that participants successfully learned the task. (B) Confidence. Reported confidence (normalized to [0; 1]) likewise increased across the course of a block. Black lines indicate averages across CS value levels. (C) Confidence increases in phase 2 as a function of the CS value level. The parameter estimate β and the p-value are based on a linear model with value level as the independent variable and the average confidence slope in phase 2 as the dependent variable.

    Latent variables and posterior predictive fits of model <i>ConfUnspec</i>.

    All time courses represent averages across blocks and subjects, split according to the duration of phase 1 (line styles) and the four CS value levels within a block (colors). (A) Expected values indicate current beliefs about the value of each stimulus. (B) Posterior predictive fit for model performance: expected proportion of correct responses based on choice probabilities. (C) Posterior predictive fit for model confidence. Model confidence is computed based on the choice probability for the chosen CS (normalized to the range 0–1). Black lines indicate averages across value levels. (D) Confidence slopes of (C) in phase 2 as a function of the CS value level. (E) Expected confidence corresponds to an integration of past confidence experiences using a Rescorla-Wagner-type learning rule. (F) Confidence prediction errors indicate the deviation of a momentary confidence experience from expected confidence. (G) Absolute confidence prediction error.