
    Delegation to autonomous agents promotes cooperation in collective-risk dilemmas

    Home assistant chat-bots, self-driving cars, drones and automated negotiations are just some examples of the autonomous (artificial) agents that have pervaded our society. These agents enable the automation of multiple tasks, saving time and (human) effort. However, their presence in social settings raises the need for a better understanding of their effect on social interactions and of how they may be used to enhance cooperation towards the public good, instead of hindering it. To this end, we present an experimental study of human delegation to autonomous agents and of hybrid human-agent interactions, centered on a public goods dilemma shaped by a collective risk. Our aim is to understand experimentally whether the presence of autonomous agents has a positive or negative impact on social behaviour, fairness and cooperation in such a dilemma. Our results show that cooperation increases when participants delegate their actions to an artificial agent that plays on their behalf. Yet, this positive effect is reduced when humans interact in hybrid human-agent groups. Finally, we show that humans are biased in their expectations of agent behaviour, assuming that agents will contribute less to the collective effort.
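
    To make the underlying game concrete, here is a minimal Python sketch of one round of a collective-risk public goods dilemma; the endowment, threshold and risk probability are illustrative assumptions, not the parameters or implementation used in the study.

        # Collective-risk round: if group contributions miss the threshold,
        # everyone risks losing whatever they kept (all parameters are assumed).
        import random

        ENDOWMENT = 40    # points each player starts with (assumed)
        THRESHOLD = 120   # group total needed to avert the collective risk (assumed)
        RISK = 0.9        # probability of losing everything if the threshold is missed (assumed)

        def round_payoffs(contributions):
            """Return each player's payoff for one collective-risk round."""
            kept = [ENDOWMENT - c for c in contributions]
            if sum(contributions) >= THRESHOLD:
                return kept                      # risk averted: players keep what they did not contribute
            if random.random() < RISK:
                return [0] * len(contributions)  # collective loss: remaining points are wiped out
            return kept                          # lucky escape despite missing the threshold

        # Example: a group of six whose delegated agents each contribute half their endowment
        print(round_payoffs([20, 20, 20, 20, 20, 20]))  # threshold met -> everyone keeps 20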

    The European Commission as a Constraint on its own Antitrust Policy

    Although the legal and the political-scientific literatures on European competition policy (‘ECP’) are vast, there is no work that goes beyond the rationalization of stylized historical and/or legal facts. This approach may be justified on grounds of the political complexity of ECP and/or the heterogeneity of units of analysis. Nevertheless, the failure to come up with a positive device that identifies conditions under which specific policy decisions may or may not be possible has limited our assessments of the policy to value judgments rather than true explanations. This paper attempts to remedy this situation by offering a logically complete and internally consistent model of ECP decision-making procedures. I show how the dependence of the European antitrust regulator (DG COMP) on a heterogeneous, multi-task and collegial organization (the Commission) severely constrains the feasible policy options of the former, and I argue that the nature and the goals of ECP are a function of (a) the ability of DG COMP to rely on national authorities, and (b) the distance between the ideal policy points of, on the one hand, the pivotal Directorate General in the Commission and, on the other hand, DG COMP and its internal opponents. Empirical work should follow.
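
    A one-dimensional spatial-model reading of this constraint can be sketched as follows; the ideal points, status quo and acceptance rule below are hypothetical illustrations, not the model actually specified in the paper.

        # Hypothetical pivotal-actor check: the pivotal DG accepts DG COMP's proposal
        # only if it leaves the pivot no worse off than the status quo (all positions assumed).
        def pivot_accepts(proposal: float, status_quo: float, pivot_ideal: float) -> bool:
            return abs(proposal - pivot_ideal) <= abs(status_quo - pivot_ideal)

        dg_comp_ideal, pivot_ideal, status_quo = 0.9, 0.4, 0.5
        # DG COMP cannot simply enact its ideal policy; it is limited to proposals the pivot accepts.
        feasible = [p / 100 for p in range(0, 101) if pivot_accepts(p / 100, status_quo, pivot_ideal)]
        print(f"feasible proposals for DG COMP: {min(feasible):.2f}-{max(feasible):.2f}")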

    Goal-Oriented Monitoring Adaptation: Methodology and Patterns

    This paper argues that autonomic systems need to make their distributed monitoring adaptive in order to improve their overall resulting quality, meaning both the Quality of Service (QoS) and the Quality of Information (QoI). Thus, we propose a methodology to design monitoring adaptation based on high-level objectives (goals) related to the management of quality requirements. One of the advantages of adopting a methodological approach is that monitoring reconfiguration will be conducted through a consistent adaptation logic. Starting from a model-guided monitoring framework, we introduce our methodology to assist human administrators in eliciting the appropriate quality goals piloting the monitoring. Moreover, some monitoring adaptation patterns falling into reconfiguration dimensions are suggested and exploited in a cloud-provider case study illustrating the adaptation of quality-oriented monitoring.
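
    As a loose illustration of goal-driven monitoring reconfiguration, the sketch below adapts a probe's sampling period from two quality goals; the goal names, thresholds and adaptation rule are assumptions, not the patterns defined in the paper.

        # Hypothetical adaptation rule: when the QoI goal (data freshness) is violated,
        # sample more often; when QoS overhead grows too high, back off (all values assumed).
        def adapt_sampling_period(period_s: float, freshness_violated: bool, overhead_ratio: float) -> float:
            if freshness_violated:
                return max(1.0, period_s / 2)    # tighten monitoring to restore QoI
            if overhead_ratio > 0.05:
                return min(60.0, period_s * 2)   # relax monitoring to protect QoS
            return period_s                      # goals satisfied: keep current configuration

        period = 10.0
        period = adapt_sampling_period(period, freshness_violated=True, overhead_ratio=0.02)
        print(f"new sampling period: {period:.0f}s")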

    Trustworthy AI Alone Is Not Enough

    The aim of this book is to make accessible to both a general audience and policymakers the intricacies involved in the concept of trustworthy AI. In this book, we address the issue from philosophical, technical, social, and practical points of view. To do so, we start with a summary definition of Trustworthy AI and its components, according to the report of the EU High-Level Expert Group on AI (AI HLEG). From there, we focus in detail on trustworthy AI in large language models, in anthropomorphic robots (such as sex robots), and in the use of autonomous drones in warfare, all of which pose specific challenges because of their close interaction with humans. To tie these ideas together, we include a brief presentation of the ethical validation scheme for proposals submitted under the Horizon Europe programme as a possible way to address the operationalisation of ethical regulation beyond rigid rules and partial ethical analyses. We conclude our work by advocating for the virtue ethics approach to AI, which we view as a humane and comprehensive approach to trustworthy AI that can accommodate the pace of technological change.

    Calibrating Users’ Mental Models for Delegation to AI

    Artificial intelligence (AI) has the potential to dramatically change the way decisions are made and organizations are managed. As of today, AI is mostly applied as a collaboration partner for humans, among other things through the delegation of tasks. However, it remains to be explored how AI should be optimally designed to enable effective human-AI collaboration through delegation. We analyze influences on human delegation behavior towards AI by studying whether increasing users' knowledge of the AI's error boundaries leads to improved delegation behavior and trust in AI. Specifically, we analyze the effect of showing the AI's certainty score and outcome feedback, alone and in combination, using a 2x2 between-subject experiment with 560 subjects. We find that providing both pieces of information can have a positive effect on collaborative performance, delegation behavior, and users' trust in AI. Our findings contribute to the design of AI for collaborative settings and motivate research on factors promoting delegation to AI.
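
    As a rough illustration of how a visible certainty score and outcome feedback could shape delegation, the sketch below uses a hypothetical per-user threshold rule; the function names, threshold values and update step are assumptions, not the study's actual design.

        # Hypothetical delegation rule: delegate only when the AI's reported certainty
        # exceeds the user's current threshold; outcome feedback nudges that threshold.
        def delegate_to_ai(ai_certainty: float, user_threshold: float) -> bool:
            return ai_certainty >= user_threshold

        def update_threshold(threshold: float, ai_was_correct: bool, step: float = 0.02) -> float:
            # Correct outcomes make the user slightly more willing to delegate, errors less so.
            return max(0.5, threshold - step) if ai_was_correct else min(0.99, threshold + step)

        threshold = 0.8
        for certainty, ai_was_correct in [(0.9, True), (0.7, False), (0.85, True)]:
            if delegate_to_ai(certainty, threshold):
                threshold = update_threshold(threshold, ai_was_correct)
        print(f"calibrated delegation threshold after feedback: {threshold:.2f}")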

    Algorithmically Controlled Automated Decision-Making and Societal Acceptability: Does Algorithm Type Matter?

    As technological capabilities expand, an increasing number of decision-making processes (e.g., rankings, selections, exclusions) are being delegated to computerized systems. In this paper, we examine the societal acceptability of a consequential decision-making system (university admission) to those subject to the decision (i.e., applicants). We analyze two key drivers: the nature of the decision-making agent (a human vs. an algorithm) and the decision-making logic used by the agents (predetermined vs. emerging). Consistent with uniqueness neglect theory, we propose that applicants will be more positive toward the use of human agents compared to computerized systems. Consistent with the theory of procedural justice, we further argue that applicants will find the use of a predetermined logic to be more acceptable than an emerging logic. We present the details and results of a factorial survey designed to test our theoretical model.

    Introduction: The Governance of Algorithms

    In our information societies, tasks and decisions are increasingly outsourced to automated systems, machines, and artificial agents that mediate human relationships by taking decisions and acting on the basis of algorithms. This raises a critical issue: how are algorithmic procedures and applications to be appraised and governed? This question needs to be investigated if one wishes to avoid the trap of ICTs ending up isolating humans behind their screens and digital delegates, or harnessing them in a passive role by curtailing their freedom and autonomy.

    For What It's Worth: Humans Overwrite Their Economic Self-interest to Avoid Bargaining With AI Systems

    As algorithms are increasingly augmenting and substituting human decision-making, understanding how the introduction of computational agents changes the fundamentals of human behavior becomes vital. This pertains not only to users, but also to those parties who face the consequences of an algorithmic decision. In a controlled experiment with 480 participants, we exploit an extended version of two-player ultimatum bargaining in which responders choose to bargain with either another human, another human with an AI decision aid, or an autonomous AI system acting on behalf of a passive human proposer. Our results show strong responder preferences against the algorithm, as most responders opt for a human opponent and demand higher compensation to reach a contract with autonomous agents. To map these preferences to economic expectations, we elicit incentivized subject beliefs about their opponent's behavior. The majority of responders maximize their expected value when this is in line with approaching the human proposer. In contrast, responders who predict that the autonomous AI system would maximize their income overwhelmingly override their economic self-interest to avoid the algorithm.
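
    The expected-value comparison responders face can be illustrated with a small sketch; the offer beliefs and stakes below are hypothetical and are not the elicited values from the experiment.

        # A responder's expected payoff from each proposer type, given (hypothetical)
        # beliefs about the offers that type would make out of a 10-point pie.
        def expected_value(believed_offers):
            return sum(believed_offers) / len(believed_offers)

        beliefs = {
            "human proposer": [4, 5, 4, 5],            # assumed beliefs
            "autonomous AI proposer": [5, 6, 5, 6],    # assumed beliefs
        }
        ev = {opponent: expected_value(offers) for opponent, offers in beliefs.items()}
        best = max(ev, key=ev.get)
        print(ev, "-> expected-value-maximizing choice:", best)
        # The paper reports that many responders avoid the AI even when, by their own
        # stated beliefs, approaching it would maximize their expected payoff.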
