13 research outputs found

    Décision et action

    No full text
    International audience

    Consistent patterns of distractor effects during decision making

    No full text
    International audience
    The value of a third potential option, or distractor, can alter the way in which decisions are made between two other options. Two hypotheses have received empirical support: that a high-value distractor improves the accuracy with which decisions between the two other options are made, and that it impairs that accuracy. Recently, however, it has been argued that neither observation is replicable. Inspired by neuroimaging data showing that high-value distractors have different impacts on prefrontal and parietal regions, we designed a dual-route decision-making model that mimics the neural signals of these regions. Here we show in the dual-route model and in empirical data that both enhancement and impairment effects are robust phenomena but predominate in different parts of the decision space defined by the options’ and the distractor’s values. However, beyond these constraints, both effects co-exist under similar conditions. Moreover, both effects are robust and observable in six experiments.
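
    One way to picture the dual-route idea is as two parallel comparisons of the option values, one distractor-sensitive (divisive normalization, parietal-like) and one distractor-filtered (prefrontal-like), combined before a logistic readout. The Python sketch below is purely illustrative and is not the authors' published model; every function and parameter name is hypothetical.

    import numpy as np

    def dual_route_choice_prob(v1, v2, d, w_parietal=0.5, temperature=1.0):
        # Illustrative dual-route evidence for choosing option 1 over option 2.
        # Parietal-like route: divisive normalization, so a high-value distractor d
        # compresses the effective difference between v1 and v2.
        # Prefrontal-like route: the distractor is filtered out of the comparison.
        pooled = v1 + v2 + d
        route_parietal = (v1 - v2) / pooled
        route_prefrontal = (v1 - v2) / (v1 + v2)
        evidence = w_parietal * route_parietal + (1 - w_parietal) * route_prefrontal
        # Logistic (softmax) readout of the combined evidence.
        return 1.0 / (1.0 + np.exp(-evidence / temperature))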

    Task design and behavioral results.

    No full text
    A. From top to bottom, successive screen shots of example trials are shown with their durations for the three tasks (left: rating task, middle: force task, right: choice task). Every trial started with a fixation cross. In the force and rating tasks, a single composite proposition, with a gain G for the subject (YOU) and a donation D for the charity organization (ORG), was displayed on the screen. Then a scale (for the rating task) or a thermometer (for the force task) appeared on the screen, signaling to subjects that it was time to provide a response. After response completion (rating or force), feedback on whether the proposition was won or lost was displayed. The probability of winning was fixed at 70% in the rating task and determined by the percentage of maximal force produced in the force task. A loss meant no money for either the subject or the charity. In the choice task, two composite options were displayed and the choice was triggered by switching ‘or’ into ‘?’. Feedback consisted of winning the chosen option in 70% of trials and nothing in the remaining 30%. B. Average ratings (left), forces (middle) and values inferred from choices (right) are shown as functions of the amount of gain and donation. Cold to hot colors indicate low to high values. The value function used to fit the choices was the a priori function that served to optimize the design (linear model with interaction).
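
    A minimal sketch of the "linear model with interaction" value function referenced in the caption, written in Python; the coefficient names are illustrative rather than the paper's notation.

    def linear_interaction_value(G, D, b_gain, b_donation, b_interaction):
        # V(G, D) = b_gain*G + b_donation*D + b_interaction*G*D
        # Linear in gain G and donation D, plus a gain-by-donation interaction term.
        return b_gain * G + b_donation * D + b_interaction * G * D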

    Comparison of estimation efficiency.

    No full text
    A. Proportion of choices according to the difference between option values computed with the CES value function inferred either from the force task (red) or the rating task (green). Observed choices (circles) were fitted using logistic regression (continuous lines). Inset represents temperature estimates from logistic fits. B. Balanced accuracy according to the CES value function inferred from the force (red) and rating (green) tasks. C. Coefficient of determination RÂČ for the fit of each task (Force, Rating and Choice). D. Response time in the force and rating tasks. E. Convergence measure according to trial number (with optimized trial order) in the force (red), rating (green) and choice (blue) tasks. Error bars indicate S.E.M. Stars indicate significant differences between two tasks.
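
    The logistic fit of choices against the option-value difference (panel A) is conventionally a sigmoid parameterized by a temperature; the short sketch below assumes that standard form and is not taken from the paper.

    import numpy as np

    def p_choose_first(value_difference, temperature):
        # Probability of choosing the first option given
        # value_difference = V(option 1) - V(option 2).
        # A higher temperature yields a flatter, noisier choice curve.
        return 1.0 / (1.0 + np.exp(-value_difference / temperature))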

    Comparison of value functions and their parameters.

    No full text
    A. Comparison of value functions underlying behavior in the three tasks. Left: Estimated frequency for the family of models in which the three tasks are explained by the same value function and the family of models using different value functions. Right: Estimated frequencies obtained for the twelve models (value functions) belonging to the ‘same’ family. The winner is the CES function (model 12), see equation on the graph, with V(G,D) the value of gain G and donation D, α the selfishness parameter and ÎŽ the concavity parameter. Dashed lines indicate chance levels (one over the number of models). B. Comparison of the selfishness parameter across tasks. Left: Mean parameter estimates in the three tasks (F, R and C) separately and in the three tasks together (All). The dashed line (α = 0.5) indicates no bias toward one or the other dimension (gain or donation). Error bars indicate S.E.M. Middle: Estimated frequencies of models including one single selfishness parameter for the three tasks (=), three different selfishness parameters (~), or only one different from the other two (F~, R~, C~, with ‘X~’ standing for ‘task with a different parameter’). Dashed line indicates chance level. Right: Correlation of selfishness parameters between the choice and force tasks (red) and between the choice and rating tasks (green) across subjects. C. Same analysis as in B but for the concavity parameter. The dashed line in the left graph (ÎŽ = 1) corresponds to the linear model. Stars indicate significant differences between tasks.
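
    For reference, the constant-elasticity-of-substitution (CES) form with a selfishness weight α and concavity ÎŽ can be written as in the sketch below; this is the usual CES parameterization and may differ in detail from the equation shown on the paper's graph.

    def ces_value(G, D, alpha, delta):
        # V(G, D) = (alpha * G**delta + (1 - alpha) * D**delta) ** (1 / delta)
        # alpha: selfishness (weight on own gain G versus donation D); 0.5 = no bias.
        # delta: concavity; delta = 1 reduces to the linear (additive) model.
        return (alpha * G ** delta + (1 - alpha) * D ** delta) ** (1.0 / delta)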

    Anatomical dissociation of intracerebral signals for reward and punishment prediction errors in humans

    No full text
    International audience
    Whether maximizing rewards and minimizing punishments rely on distinct brain systems remains debated, given inconsistent results coming from human neuroimaging and animal electrophysiology studies. Bridging the gap across techniques, we recorded intracerebral activity from twenty participants while they performed an instrumental learning task. We found that both reward and punishment prediction errors (PE), estimated from computational modeling of choice behavior, correlate positively with broadband gamma activity (BGA) in several brain regions. In all cases, BGA scaled positively with the outcome (reward or punishment versus nothing) and negatively with the expectation (predictability of reward or punishment). However, reward PE were better signaled in some regions (such as the ventromedial prefrontal and lateral orbitofrontal cortex), and punishment PE in other regions (such as the anterior insula and dorsolateral prefrontal cortex). These regions might therefore belong to brain systems that differentially contribute to the repetition of rewarded choices and the avoidance of punished choices.
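
    Prediction errors of the kind regressed against broadband gamma activity are typically obtained from a simple delta-rule model of the learning task; the sketch below assumes a standard Rescorla-Wagner-style update and is not the authors' exact computational model.

    def delta_rule_update(expectation, outcome, learning_rate=0.3):
        # Prediction error = outcome minus expectation (e.g. outcome coded +1 for
        # reward, 0 for nothing, -1 for punishment); the expectation then moves
        # toward the outcome. Parameter values are illustrative.
        prediction_error = outcome - expectation
        new_expectation = expectation + learning_rate * prediction_error
        return new_expectation, prediction_error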