4 research outputs found

    Artificial Intelligence Techniques for Conflict Resolution

    Conflict resolution is essential for obtaining cooperation in many scenarios, such as politics and business, as well as in our day-to-day lives. The importance of conflict resolution has driven research in many fields, including anthropology, social science, psychology, mathematics, biology and, more recently, artificial intelligence. Computer science and artificial intelligence have, in turn, been inspired by theories and techniques from these disciplines, leading to a variety of computational models and approaches such as automated negotiation, group decision making, argumentation, preference aggregation, and human-machine interaction. To bring together the different research strands and disciplines in conflict resolution, the Workshop on Conflict Resolution in Decision Making (COREDEMA) was organized. This special issue benefited from the workshop series and consists of significantly extended and revised selected papers from the ECAI 2016 COREDEMA workshop, as well as completely new contributions.

    Minimising the Rank Aggregation Error (Extended Abstract)

    Rank aggregation is the problem of generating an overall ranking from a set of individual votes that is as close as possible to the (unknown) correct ranking. The challenge is that votes are often both noisy and incomplete. Existing work focuses on finding the most likely ranking under a particular noise model. Instead, we focus on minimising the error, i.e., the expected distance between the aggregated ranking and the correct one. We show that this objective can result in different rankings, and we show how to compute local improvements of a ranking to reduce the error. Extensive experiments on both synthetic data based on Mallows' model and real data show that Copeland has a smaller error than the Kemeny rule, even though the latter is the maximum likelihood estimator.
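
    A minimal sketch may make the Copeland/Kemeny comparison concrete. The toy votes, helper names, and brute-force Kemeny search below are illustrative assumptions, not the paper's experimental setup; distance between rankings is measured with the Kendall tau (pairwise-disagreement) metric.

    from itertools import combinations, permutations

    def kendall_tau(r1, r2):
        """Number of candidate pairs that the two rankings order differently."""
        pos1 = {c: i for i, c in enumerate(r1)}
        pos2 = {c: i for i, c in enumerate(r2)}
        return sum((pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
                   for a, b in combinations(r1, 2))

    def copeland(votes, candidates):
        """Rank by pairwise wins: a candidate scores a point for each rival
        it beats in a majority of the votes."""
        score = {c: 0 for c in candidates}
        for a, b in combinations(candidates, 2):
            a_wins = sum(v.index(a) < v.index(b) for v in votes)
            if 2 * a_wins > len(votes):
                score[a] += 1
            elif 2 * a_wins < len(votes):
                score[b] += 1
        return sorted(candidates, key=lambda c: -score[c])

    def kemeny(votes, candidates):
        """Brute-force Kemeny: the ranking minimising the total Kendall tau
        distance to all votes."""
        return min(permutations(candidates),
                   key=lambda r: sum(kendall_tau(r, v) for v in votes))

    votes = [["a", "b", "c"], ["a", "b", "c"], ["b", "c", "a"]]
    print(copeland(votes, ["a", "b", "c"]))        # ['a', 'b', 'c']
    print(list(kemeny(votes, ["a", "b", "c"])))    # ['a', 'b', 'c']

    Brute-force Kemeny is exponential in the number of candidates (the problem is NP-hard in general), which is one practical reason a cheap pairwise rule like Copeland is attractive whenever its expected error is no worse.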

    Responsibility research for trustworthy autonomous systems

    To develop and effectively deploy Trustworthy Autonomous Systems (TAS), we face various social, technological, legal, and ethical challenges in which different notions of responsibility can play a key role. In this work, we elaborate on these challenges, discuss research gaps, and show how the multidimensional notion of responsibility can help bridge them. We argue that TAS requires operational tools to represent and reason about the responsibilities of humans as well as AI agents. We review major challenges to which responsibility reasoning can contribute, highlight open research problems, and argue for the application of multiagent responsibility models in a variety of TAS domains.

    Reasoning about responsibility in autonomous systems: challenges and opportunities

    Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks on that basis. Moreover, we deem that, in certifying the legality of an AI system, formal and computationally implementable notions of responsibility, blame, accountability, and liability are applicable for addressing potential responsibility gaps (i.e., situations in which a group is responsible but individuals' responsibility may be unclear). This is a call to enable AI systems themselves, as well as those involved in the design, monitoring, and governance of AI systems, to represent and reason about who can be seen as responsible in prospect (e.g., for completing a task in the future) and who can be seen as responsible retrospectively (e.g., for a failure that has already occurred). To that end, we show that responsibility reasoning should play a key role across all stages of the design, development, and deployment of trustworthy autonomous systems (TAS). This position paper is a first step towards establishing a road map and research agenda on how the notion of responsibility can provide novel solution concepts for ensuring the reliability and legality of TAS and, as a result, enable an effective embedding of AI technologies into society.
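
    As a concrete illustration of what "quantifying degrees of responsibility" can mean, the sketch below implements one formal notion from the literature, the Chockler-Halpern degree of responsibility: 1/(k+1), where k is the smallest number of other changes needed to make an agent's choice pivotal for the outcome. The majority-vote scenario and function names are illustrative assumptions, not taken from the paper.

    from itertools import combinations

    def degree_of_responsibility(votes, voter):
        """Chockler-Halpern degree of responsibility of `voter` for a
        majority-vote outcome. Brute force over subsets of other voters;
        suitable for small examples only."""
        outcome = 2 * sum(votes) > len(votes)          # True iff 'yes' wins
        others = [i for i in range(len(votes)) if i != voter]
        for k in range(len(others) + 1):
            for flipped in combinations(others, k):
                w = list(votes)
                for i in flipped:
                    w[i] = not w[i]
                if (2 * sum(w) > len(w)) == outcome:   # outcome still intact?
                    w[voter] = not w[voter]
                    if (2 * sum(w) > len(w)) != outcome:
                        return 1 / (k + 1)             # voter is now pivotal
        return 0.0                                     # voter is never pivotal

    # In an 11-0 'yes' vote, five other voters must flip before any single
    # voter becomes pivotal, so each bears degree of responsibility 1/6.
    print(degree_of_responsibility([True] * 11, 0))    # 0.1666...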