10 research outputs found

    Control, agency and reinforcement learning in human decision-making

    No full text
    The sense of agency, or subjective control, can be defined as the feeling that we control our actions and, through them, effects in the outside world. This cluster of experiences depends on the ability to learn action-outcome contingencies, and a classical algorithm for modeling this originates in the field of human reinforcement learning. In this PhD thesis, we used the cognitive modeling approach to investigate further the interaction between perceived control and reinforcement learning. First, we saw that participants undergoing a reinforcement-learning task experienced higher agency; this influence of reinforcement learning on agency comes as no surprise, because reinforcement learning relies on linking a voluntary action to its outcome. But our results also suggest that agency influences reinforcement learning in two ways. We found that people learn action-outcome contingencies based on a default assumption: their actions make a difference to the world. We also found that the mere fact of choosing freely shapes the learning processes that follow that decision. Our general conclusion is that agency and reinforcement learning, two fundamental fields of human psychology, are deeply intertwined. Contrary to machines, humans do care about being in control and about making the right choice, and this results in integrating information in a one-sided way.

    Control, agency and reinforcement learning

    Get PDF
    The sense of agency, or subjective control, can be defined as the feeling that we control our actions and, through them, effects in the outside world. This cluster of experiences depends on the ability to learn action-outcome contingencies, and a classical algorithm for modeling this originates in the field of human reinforcement learning. In this PhD thesis, we used the cognitive modeling approach to investigate further the interaction between perceived control and reinforcement learning. First, we saw that participants undergoing a reinforcement-learning task experienced higher agency; this influence of reinforcement learning on agency comes as no surprise, because reinforcement learning relies on linking a voluntary action to its outcome. But our results also suggest that agency influences reinforcement learning in two ways. We found that people learn action-outcome contingencies based on a default assumption: their actions make a difference to the world. We also found that the mere fact of choosing freely shapes the learning processes that follow that decision. Our general conclusion is that agency and reinforcement learning, two fundamental fields of human psychology, are deeply intertwined. Contrary to machines, humans do care about being in control and about making the right choice, and this results in integrating information in a one-sided way.

    Try and try again: Post-error boost of an implicit measure of agency

    No full text
    The sense of agency refers to the feeling that we control our actions and, through them, effects in the outside world. Reinforcement learning provides an important theoretical framework for understanding why people choose to make particular actions. Few previous studies have considered how reinforcement and learning might influence the subjective experience of agency over actions and outcomes. In two experiments, participants chose between two action alternatives, which differed in reward probability. Occasional reversals of the action-reward mapping required participants to monitor outcomes and adjust action selection accordingly. We measured shifts in the perceived times of actions and subsequent outcomes ('intentional binding') as an implicit proxy for sense of agency. In the first experiment, negative outcomes showed stronger binding towards the preceding action, compared to positive outcomes. Furthermore, negative outcomes were followed by increased binding of actions towards their outcome on the following trial. Experiment 2 replicated this post-error boost in action binding and showed that it only occurred when people could learn from their errors to improve action choices. We modelled the post-error boost using an established quantitative model of reinforcement learning. The post-error boost in action binding correlated positively with participants' tendency to learn more from negative outcomes than from positive outcomes. Our results suggest a novel relation between sense of agency and reinforcement learning, in which sense of agency is increased when negative outcomes trigger adaptive changes in subsequent action selection.
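
The valence-dependent learning described in this abstract can be sketched as a Rescorla-Wagner value update with separate learning rates for positive and negative prediction errors. This is a minimal illustration under assumptions, not the paper's fitted model; the function name and learning-rate values are hypothetical.

```python
# Minimal sketch of a Rescorla-Wagner update with separate learning
# rates for positive and negative prediction errors. The alpha values
# are illustrative assumptions, not fitted estimates from the study.

def rw_update(q, reward, alpha_pos=0.3, alpha_neg=0.6):
    """Return the updated value of an action after observing reward (0 or 1).

    Setting alpha_neg > alpha_pos encodes learning more from negative
    outcomes, the individual tendency that correlated with the
    post-error boost in action binding."""
    delta = reward - q                            # prediction error
    alpha = alpha_pos if delta >= 0 else alpha_neg
    return q + alpha * delta

# A win nudges the value up gently; a loss pulls it down more sharply.
q = 0.5
q_after_win = rw_update(q, 1)
q_after_loss = rw_update(q, 0)
```

In a reversal-learning task like the one described, such a rule lets value estimates track occasional changes in the action-reward mapping.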

    Monitoring misinformation related interventions by Facebook, Twitter and YouTube: methods and illustration

    No full text
    There is growing pressure on mainstream platforms, such as Facebook, Twitter or YouTube, to fight misinformation by moderating the content that spreads on their sites. We investigated the platforms' interventions by collecting social media data via APIs and scraping. These interventions can be classified into three broad categories: (i) temporary or permanent suspension of users, (ii) displaying flags and information panels, and (iii) reducing the visibility of some content. We provide examples illustrating how researchers can monitor the interventions within each of the three categories for each platform. Finally, we discuss the restrictions on data access and the lack of transparency regarding misinformation-related interventions, and how to help the academic community, NGOs and data journalists to successfully study online misinformation.

    Minet, a webmining CLI tool & library for python.

    No full text
    A webmining CLI tool & library for Python.

    medialab/minet: v0.45.0

    No full text
    A webmining CLI tool & library for Python.

    L’atelier de méthodes de Sciences Po : apprendre, aider, rassembler

    No full text
    The digital turn is changing our society, and the production of knowledge in the humanities and social sciences. To disseminate their expertise in digital methods, research engineers at Sciences Po created a monthly methods workshop open to everyone in 2013: the METAT. The participants (researchers, students and others) sign up for a 3-hour session and present their issue, which can be technical or methodological, to a team of supervisors from diverse backgrounds and profiles, who provide support that is as individualised as possible. In this article, we explain the organisation and founding values of the METAT, and give our feedback as supervisors and organisers. This description is enriched with the analysis of data built from the session reports and the registration form. We hope this article can give teachers in the humanities and social sciences information on the growing demand for digital training, provide feedback on project-based digital teaching, and perhaps inspire other organisations to create their own methods workshop.

    Information about action outcomes differentially affects learning from self-determined versus imposed choices

    Get PDF
    The valence of new information influences learning rates in humans: good news tends to receive more weight than bad news. We investigated this learning bias in four experiments, by systematically manipulating the source of the required action (free versus forced choices), outcome contingencies (low versus high reward) and motor requirements (go versus no-go choices). Analysis of model-estimated learning rates showed that the confirmation bias in learning rates was specific to free choices, but was independent of outcome contingencies. The bias was also unaffected by the motor requirements, suggesting that it operates in the representational space of decisions, rather than motoric actions. Finally, model simulations revealed that learning rates estimated from the choice-confirmation model had the effect of maximizing performance across low- and high-reward environments. We therefore suggest that choice-confirmation bias may be adaptive for efficient learning of action–outcome contingencies, above and beyond fostering person-level dispositions such as self-esteem.
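
The choice-confirmation mechanism discussed above can be illustrated with a simple update rule in which outcomes confirming a decision are weighted more heavily than disconfirming ones, but only after free choices. This is a hedged sketch: the function name and all learning-rate values are assumptions for illustration, not the authors' fitted model.

```python
# Illustrative sketch of a choice-confirmation learning rule.
# After a free choice, outcomes that confirm the decision (positive
# prediction errors) are weighted more than those that disconfirm it;
# after a forced (imposed) choice, updating is symmetric.
# All learning-rate values below are hypothetical.

def update_value(q, reward, free_choice,
                 alpha_conf=0.4, alpha_disc=0.1, alpha_forced=0.25):
    delta = reward - q                 # prediction error
    if not free_choice:
        alpha = alpha_forced           # imposed choice: no confirmation bias
    elif delta >= 0:
        alpha = alpha_conf             # outcome confirms the free choice
    else:
        alpha = alpha_disc             # outcome disconfirms it
    return q + alpha * delta
```

Because the bias operates on the decision rather than the motor act, the same rule would apply to go and no-go responses alike, consistent with the motor-requirement result reported above.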
