
    Human-Agent Decision-making: Combining Theory and Practice

    Extensive work has been conducted in both game theory and logic to model strategic interaction. An important question is whether these theories can be used to design agents that interact with people. On the one hand, they provide a formal design specification for agent strategies. On the other hand, people do not necessarily play in accordance with these strategies, and their behavior is affected by a multitude of social and psychological factors. In this paper we will consider whether strategies implied by theories of strategic behavior can be used by automated agents that interact proficiently with people. We will focus on automated agents that we built to interact with people in two negotiation settings: bargaining and deliberation. For bargaining we will study game-theory-based equilibrium agents, and for deliberation we will discuss logic-based argumentation theory. We will also consider security games and persuasion games and will discuss the benefits of using equilibrium-based agents.
    Comment: In Proceedings TARK 2015, arXiv:1606.0729
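    The gap the abstract describes can be made concrete with a toy example that is not from the paper: in the one-shot ultimatum game, the subgame-perfect equilibrium prescribes a minimal offer, which a purely rational responder accepts but a fairness-sensitive, human-like responder (with a hypothetical 30% rejection threshold) does not.

```python
# Toy sketch (illustrative numbers, not from the paper): equilibrium play
# in a one-shot ultimatum game over a pie of 10 units, contrasted with a
# fairness-sensitive responder model.

PIE = 10

def equilibrium_offer():
    """Subgame-perfect equilibrium: the responder accepts any positive
    amount, so the proposer offers the smallest positive share (1 unit)."""
    return 1

def rational_responder(offer):
    # Accepts anything strictly better than the disagreement payoff of 0.
    return offer > 0

def fairness_responder(offer, threshold=0.3):
    # Human-like: rejects offers below 30% of the pie (hypothetical threshold).
    return offer >= threshold * PIE

offer = equilibrium_offer()
print(rational_responder(offer))   # True: equilibrium play succeeds
print(fairness_responder(offer))   # False: the same offer fails with people
```

    This is exactly the kind of divergence that makes it unclear whether equilibrium agents will interact proficiently with people.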

    The Hanabi Challenge: A New Frontier for AI Research

    From the early days of computing, games have been important testbeds for studying how well machines can do sophisticated decision making. In recent years, machine learning has made dramatic advances with artificial agents reaching superhuman performance in challenge domains like Go, Atari, and some variants of poker. As with their predecessors chess, checkers, and backgammon, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners. We continue this tradition by proposing the game of Hanabi as a new challenge domain with novel problems that arise from its combination of purely cooperative gameplay with two to five players and imperfect information. In particular, we argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground. We believe developing novel techniques for such theory of mind reasoning will be crucial for success not only in Hanabi but also in broader collaborative efforts, especially those with human partners. To facilitate future research, we introduce the open-source Hanabi Learning Environment, propose an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques.
    Comment: 32 pages, 5 figures, In Press (Artificial Intelligence)
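    The belief reasoning that makes Hanabi hard can be sketched in a few lines. The following is a self-contained toy model of the game's public rules (it is not the Hanabi Learning Environment API): players cannot see their own cards, and a color hint both identifies the pointed-at cards and, by omission, rules that color out for the rest.

```python
# Toy sketch of Hanabi-style belief updating (not the paper's environment):
# a hand is hidden from its holder; a color hint prunes the candidate set
# of every card, including the cards it does NOT point at.

COLORS = ["R", "G", "B", "Y", "W"]
RANKS = [1, 2, 3, 4, 5]

def initial_beliefs(hand_size):
    # Before any hints, every card could be any (color, rank) pair.
    return [{(c, r) for c in COLORS for r in RANKS} for _ in range(hand_size)]

def apply_color_hint(beliefs, hand, color):
    """Update beliefs after the hint 'these cards are <color>'.
    'hand' is the true hand, visible to the hinter but not the holder."""
    for i, (c, _) in enumerate(hand):
        if c == color:   # pointed at: must be this color
            beliefs[i] = {cr for cr in beliefs[i] if cr[0] == color}
        else:            # not pointed at: cannot be this color
            beliefs[i] = {cr for cr in beliefs[i] if cr[0] != color}
    return beliefs

hand = [("R", 1), ("G", 3), ("R", 5)]
beliefs = apply_color_hint(initial_beliefs(3), hand, "R")
print([len(b) for b in beliefs])  # [5, 20, 5]
```

    Reasoning about *why* a teammate chose this hint over the alternatives, which is the theory-of-mind challenge the paper highlights, sits on top of this basic pruning.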

    LLM-Based Agent Society Investigation: Collaboration and Confrontation in Avalon Gameplay

    This paper investigates the open research problem of uncovering the social behaviors of LLM-based agents. To achieve this goal, we adopt Avalon, a representative communication game, as the environment and use system prompts to guide LLM agents to play the game. While previous studies have conducted preliminary investigations into gameplay with LLM agents, research on their social behaviors is lacking. In this paper, we present a novel framework designed to seamlessly adapt to Avalon gameplay. The core of our proposed framework is a multi-agent system that enables efficient communication and interaction among agents. We evaluate the performance of our framework with metrics from two perspectives: winning the game and analyzing the social behaviors of LLM agents. Our results demonstrate the effectiveness of our framework in generating adaptive and intelligent agents and highlight the potential of LLM-based agents in addressing the challenges of dynamic social environment interaction. By analyzing the social behaviors of LLM agents from the aspects of both collaboration and confrontation, we provide insights into the research and applications of this domain.

    Mechanisms for Automated Negotiation in State Oriented Domains

    This paper lays part of the groundwork for a domain theory of negotiation, that is, a way of classifying interactions so that it is clear, given a domain, which negotiation mechanisms and strategies are appropriate. We define State Oriented Domains, a general category of interaction. Necessary and sufficient conditions for cooperation are outlined. We use the notion of worth in an altered definition of utility, thus enabling agreements in a wider class of joint-goal reachable situations. An approach is offered for conflict resolution, and it is shown that even in a conflict situation, partial cooperative steps can be taken by interacting agents (that is, agents in fundamental conflict might still agree to cooperate up to a certain point). A Unified Negotiation Protocol (UNP) is developed that can be used in all types of encounters. It is shown that in certain borderline cooperative situations, a partial cooperative agreement (i.e., one that does not achieve all agents' goals) might be preferred by all agents, even though there exists a rational agreement that would achieve all their goals. Finally, we analyze cases where agents have incomplete information on the goals and worth of other agents. First we consider the case where agents' goals are private information, and we analyze what goal declaration strategies the agents might adopt to increase their utility. Then we consider the situation where the agents' goals (and therefore stand-alone costs) are common knowledge, but the worth they attach to their goals is private information. We introduce two mechanisms, one 'strict', the other 'tolerant', and analyze their effects on the stability and efficiency of negotiation outcomes.
    Comment: See http://www.jair.org/ for any accompanying file
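    The borderline case described above can be illustrated numerically. With a worth-based utility (what an outcome is worth to the agent, minus the agent's share of the execution cost), a partial agreement that reaches a cheap intermediate state can dominate a full agreement for every agent, even when the full agreement is still rational. The numbers below are hypothetical, chosen only to exhibit the effect.

```python
# Hypothetical illustration of the abstract's borderline case: worth-based
# utility makes a partial agreement preferable to a rational full agreement.

def utility(worth_of_outcome, cost):
    # Worth-based utility: value of the resulting state to the agent,
    # minus the agent's share of the execution cost.
    return worth_of_outcome - cost

# Full agreement: both agents' goals are reached, but the joint plan is costly.
full = {a: utility(worth_of_outcome=10, cost=9) for a in ("A", "B")}
# Partial agreement: a cheap intermediate state worth 6 to each agent.
partial = {a: utility(worth_of_outcome=6, cost=2) for a in ("A", "B")}

print(full)     # {'A': 1, 'B': 1}: the full deal is individually rational...
print(partial)  # {'A': 4, 'B': 4}: ...yet both agents prefer the partial one
```

    The full deal gives each agent positive utility, so it is a rational agreement; the partial deal is nonetheless strictly preferred by both, which is exactly the situation the paper identifies.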

    The Present and Future of Game Theory

    A broad nontechnical coverage of many of the developments in game theory since the 1950s is given, together with some comments on important open problems and where some of the developments may take place. The nearly 90 references given serve only as a minimal guide to the many thousands of books and articles that have been written. The purpose here is to present a broad-brush picture of the many areas of study and application that have come into being. The use of deep techniques flourishes best when it stays in touch with application. There is a vital symbiotic relationship between good theory and practice. The breakneck speed of development of game theory calls for an appreciation of both the many realities of conflict, coordination and cooperation and the abstract investigation of all of them.
    Keywords: game theory, application and theory, social sciences, law, experimental gaming, conflict, coordination and cooperation

    Evaluating Visual Conversational Agents via Cooperative Human-AI Games

    As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game, GuessWhich, to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image that is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure the performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare the performance of human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend: while the AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking AI in isolation and in the context of human-AI teams.
    Comment: HCOMP 201
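    The team metric described above reduces to averaging guess counts across games. The sketch below uses hypothetical guess counts (the version labels and numbers are illustrative, not the paper's results) to show how a version of the agent can score worse for the team even if it scores better in AI-AI evaluation.

```python
# Minimal sketch of the human-AI team metric (hypothetical data): count the
# guesses the human needs to identify the secret image after a fixed number
# of dialog rounds, then average over games.

def mean_guesses(games):
    """games: list of per-game guess counts until the secret image is found
    (lower is better)."""
    return sum(games) / len(games)

# Hypothetical results for two versions of ALICE over five games each.
alice_v1 = [3, 5, 2, 4, 6]  # e.g. a supervised-trained version
alice_v2 = [4, 5, 3, 4, 5]  # e.g. a version that wins AI-AI comparisons

print(mean_guesses(alice_v1))  # 4.0
print(mean_guesses(alice_v2))  # 4.2
```

    Comparing such team-level means against AI-AI benchmarks is what exposes the mismatch the abstract reports.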

    Collaborative action research for the governance of climate adaptation - foundations, conditions and pitfalls

    This position paper serves as an introductory guide to designing and facilitating an action research process with stakeholders in the context of climate adaptation. Specifically, it is aimed at action researchers who aim to involve stakeholders and their expert knowledge in generating knowledge about their own condition and how it can be changed. The core philosophy of our research approach can be described as a powerful combination of practice-driven collaborative action research and theoretically informed scientific research. Collaborative action research means that we take guidance from the hotspots as the primary source of questions, dilemmas and empirical data regarding the governance of adaptation, but also collaborate with them in testing insights and strategies and evaluating their usefulness. The purpose is to develop effective, legitimate and resilient governance arrangements for climate adaptation. Scientific quality will be achieved by placing this co-production of knowledge in a well-founded and innovative theoretical framework, and through the involvement of the international consortium partners. This position paper provides a methodological starting point for the research program ‘Governance of Climate Adaptation’ and aims:
    · To clarify the theoretical foundation of collaborative action research and the underlying ontological and epistemological principles;
    · To give a historical overview of the development of action research and its different forms;
    · To enhance the theoretical foundation of collaborative action research in the specific context of governance of climate adaptation;
    · To translate the philosophy of collaborative action research into practical methods;
    · To give an overview of the main conditions and pitfalls for action research in complex governance settings.
    Finally, this position paper provides three key instruments developed to support action research in the hotspots: 1) a toolbox for AR in hotspots (chapter 6); 2) the set-up of a research design and action plan for AR in hotspots (chapter 7); 3) a quality checklist or guidance for AR in hotspots (chapter 8).