    Delegation to autonomous agents promotes cooperation in collective-risk dilemmas

    Home-assistant chatbots, self-driving cars, drones, and automated negotiation systems are among the many autonomous (artificial) agents that have pervaded our society. These agents enable the automation of multiple tasks, saving time and (human) effort. However, their presence in social settings raises the need for a better understanding of their effect on social interactions and of how they may be used to enhance cooperation towards the public good rather than hinder it. To this end, we present an experimental study of human delegation to autonomous agents and of hybrid human-agent interactions, centered on a public goods dilemma shaped by a collective risk. Our aim is to understand experimentally whether the presence of autonomous agents has a positive or negative impact on social behaviour, fairness, and cooperation in such a dilemma. Our results show that cooperation increases when participants delegate their actions to an artificial agent that plays on their behalf. Yet, this positive effect is reduced when humans interact in hybrid human-agent groups. Finally, we show that humans are biased in their expectations of agent behaviour, assuming that agents will contribute less to the collective effort.
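    To make the dilemma's structure concrete, here is a minimal sketch of a collective-risk payoff rule in the spirit of the game described above. The parameter values (endowment, threshold, risk probability) and the function name are illustrative assumptions, not details taken from the study.

    ```python
    # Sketch of a collective-risk dilemma payoff rule. All parameter
    # values below are hypothetical, chosen only for illustration.
    import random

    def collective_risk_payoffs(contributions, endowment=40, threshold=120, risk=0.9):
        """Return each player's payoff for one game.

        Each player starts with `endowment` and contributes some amount
        to a common pool. If pooled contributions reach `threshold`,
        everyone keeps what they did not contribute; otherwise, with
        probability `risk`, a collective loss wipes out all holdings.
        """
        remaining = [endowment - c for c in contributions]
        if sum(contributions) >= threshold:
            return remaining                   # target met: no risk
        if random.random() < risk:
            return [0] * len(contributions)    # disaster: everyone loses
        return remaining                       # lucky escape

    # Example: a six-player group where half the players free-ride,
    # so the group misses the threshold and gambles on the risk.
    print(collective_risk_payoffs([20, 20, 20, 0, 0, 0]))
    ```

    The tension is visible in the payoff rule: contributing lowers a player's guaranteed holdings, but under-contribution by the group exposes everyone to the collective loss.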

    Increasing Fairness by Delegating Decisions to Autonomous Agents

    There has been growing interest in autonomous agents that act on our behalf, or represent us, across domains such as negotiation, transportation, health, finance, and defense. As these agent representatives become immersed in society, it is critical that we understand whether and, if so, how they disrupt traditional patterns of interaction with others. In this paper, we study how programming agents to represent us shapes our decisions in social settings. Here we show that, when acting through agent representatives, people are considerably less likely to accept unfair offers than when interacting with others directly. This result demonstrates that agent representatives have the potential to promote fairer outcomes. Moreover, we show that this effect can also occur when people are asked to "program" human representatives, revealing that the act of programming itself can promote fairer behavior. We argue this happens because programming requires the programmer to deliberate on all possible situations that might arise, and thus promotes consideration of social norms -- such as fairness -- when making decisions. These results have important theoretical, practical, and ethical implications for the design of agents that act on our behalf and for the nature of people's decision making when acting through them.
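    As an illustration of what "programming" a representative can mean in practice, below is a minimal sketch of a threshold-based agent for an ultimatum-style interaction. The acceptance rule, names, and values are hypothetical and do not reflect the interface used in the study.

    ```python
    # Sketch of programming an agent representative for an ultimatum-style
    # game. The threshold policy is a hypothetical illustration of the kind
    # of rule a participant might specify before play begins.

    def make_representative(min_acceptable_share):
        """Return an agent that accepts an offer only if the responder's
        share of the pie is at least `min_acceptable_share` (a fraction)."""
        def respond(offer, pie):
            return offer / pie >= min_acceptable_share
        return respond

    # A participant encoding a fairness norm up front:
    agent = make_representative(0.4)
    print(agent(2, 10))   # unfair 20% offer -> rejected (False)
    print(agent(5, 10))   # equal split      -> accepted (True)
    ```

    Specifying the rule in advance forces the programmer to decide, before any offer is seen, which splits count as unacceptable, which is consistent with the paper's argument that programming prompts deliberation over social norms.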