Human-Agent Decision-making: Combining Theory and Practice
Extensive work has been conducted both in game theory and logic to model
strategic interaction. An important question is whether these theories can be
used to design agents that interact with people. On the one hand, they
provide a formal design specification for agent strategies. On the other hand,
people do not necessarily adhere to playing in accordance with these
strategies, and their behavior is affected by a multitude of social and
psychological factors. In this paper we will consider the question of whether
strategies implied by theories of strategic behavior can be used by automated
agents that interact proficiently with people. We will focus on automated
agents that we built that need to interact with people in two negotiation
settings: bargaining and deliberation. For bargaining we will study
game-theory-based equilibrium agents, and for deliberation we will discuss logic-based
argumentation theory. We will also consider security games and persuasion games
and will discuss the benefits of using equilibrium-based agents.
Comment: In Proceedings TARK 2015, arXiv:1606.0729
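The game-theory-based equilibrium agents for bargaining mentioned above can be illustrated with a standard textbook result (not code from the paper): in Rubinstein's alternating-offers model, the subgame-perfect equilibrium split of a unit pie follows directly from the two players' discount factors.

```python
def rubinstein_split(d1, d2):
    """Subgame-perfect equilibrium shares in Rubinstein's
    alternating-offers bargaining game over a unit pie.

    d1, d2: per-round discount factors of the first proposer
    (player 1) and the responder (player 2), each in (0, 1).
    Returns (share_1, share_2), agreed immediately in round one.
    """
    share_1 = (1 - d2) / (1 - d1 * d2)
    return share_1, 1 - share_1


# Equally patient players split almost evenly, with a small
# first-mover advantage for the proposer.
s1, s2 = rubinstein_split(0.9, 0.9)
```

An equilibrium agent built on this theory would simply propose `share_1` and accept any offer at least as good; the papers surveyed above study whether people actually accept such offers in practice.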
The Hanabi Challenge: A New Frontier for AI Research
From the early days of computing, games have been important testbeds for
studying how well machines can do sophisticated decision making. In recent
years, machine learning has made dramatic advances with artificial agents
reaching superhuman performance in challenge domains like Go, Atari, and some
variants of poker. As with their predecessors of chess, checkers, and
backgammon, these game domains have driven research by providing sophisticated
yet well-defined challenges for artificial intelligence practitioners. We
continue this tradition by proposing the game of Hanabi as a new challenge
domain with novel problems that arise from its combination of purely
cooperative gameplay with two to five players and imperfect information. In
particular, we argue that Hanabi elevates reasoning about the beliefs and
intentions of other agents to the foreground. We believe developing novel
techniques for such theory of mind reasoning will not only be crucial for
success in Hanabi, but also in broader collaborative efforts, especially those
with human partners. To facilitate future research, we introduce the
open-source Hanabi Learning Environment, propose an experimental framework for
the research community to evaluate algorithmic advances, and assess the
performance of current state-of-the-art techniques.
Comment: 32 pages, 5 figures, In Press (Artificial Intelligence)
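The experimental framework proposed above boils down to averaging end-of-game scores over many episodes. The sketch below shows that evaluation-loop shape with a deliberately toy stand-in environment; the real open-source Hanabi Learning Environment has its own API, so the class and interface here are hypothetical.

```python
import random


class StubHanabiEnv:
    """Toy stand-in for a Hanabi-style environment (hypothetical;
    the actual Hanabi Learning Environment API differs)."""

    def reset(self):
        self.turns_left = 20
        self.score = 0
        return {"legal_actions": list(range(10))}

    def step(self, action):
        self.turns_left -= 1
        if action % 3 == 0:  # pretend some card plays succeed
            self.score += 1
        done = self.turns_left == 0
        return {"legal_actions": list(range(10))}, done


def evaluate(env, policy, episodes=100):
    """Average end-of-game score over independent episodes --
    the metric typically reported for Hanabi agents
    (0-25 fireworks points in the real game)."""
    total = 0
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            obs, done = env.step(policy(obs))
        total += env.score
    return total / episodes


random.seed(0)
avg = evaluate(StubHanabiEnv(),
               lambda obs: random.choice(obs["legal_actions"]))
```

Swapping the stub for the real environment and the random policy for a learned agent gives the benchmark protocol the paper proposes.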
Agent Decision-Making in Open Mixed Networks
Computer systems increasingly carry out tasks in mixed networks, that is, in group settings in which they interact both with other computer systems and with people. Participants in these heterogeneous human-computer groups vary in their capabilities, goals, and strategies; they may cooperate, collaborate, or compete. The presence of people in mixed networks raises challenges for the design and the evaluation of decision-making strategies for computer agents. This paper describes several new decision-making models that represent, learn, and adapt to various social attributes that influence people's decision-making, and presents a novel approach to evaluating such models. It identifies a range of social attributes in an open-network setting that influence people's decision-making and thus affect the performance of computer-agent strategies, and establishes the importance of learning and adaptation to the success of such strategies. The settings vary in the capabilities, goals, and strategies that people bring into their interactions. The studies deploy a configurable system called Colored Trails (CT) that generates a family of games. CT is an abstract, conceptually simple but highly versatile game in which players negotiate and exchange resources to enable them to achieve their individual or group goals. It provides a realistic analogue to multi-agent task domains, while not requiring extensive domain modeling. It is less abstract than payoff matrices, and people exhibit less strategic and more helpful behavior in CT than in the identical payoff-matrix decision-making context. By not requiring extensive domain modeling, CT enables agent researchers to focus their attention on strategy design, and it provides an environment in which the influence of social factors can be better isolated and studied.
Engineering and Applied Science
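The core mechanic of Colored Trails described above, players needing chips that match square colors and trading chips to reach goals, can be captured in a few lines. This is a simplified sketch of that mechanic only (the real CT game adds board geometry, scoring, and negotiation protocols); the function names are illustrative.

```python
from collections import Counter


def can_reach_goal(path_colors, chips):
    """In a Colored Trails-style game, moving onto a square costs
    one chip of that square's color. Check whether a player's chips
    cover a given path of square colors to the goal."""
    need = Counter(path_colors)
    have = Counter(chips)
    return all(have[c] >= n for c, n in need.items())


def useful_trade(path_colors, chips, offer_in, offer_out):
    """Would accepting a trade (receive offer_in, give offer_out)
    let the player complete the path? Counter arithmetic drops
    non-positive counts, which suits chip bookkeeping here."""
    after = Counter(chips) + Counter(offer_in) - Counter(offer_out)
    return can_reach_goal(path_colors, list(after.elements()))
```

Negotiation in CT then reduces to agents searching for trades that are `useful_trade` for themselves, and, in cooperative settings, for their partners too.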
LLM-Based Agent Society Investigation: Collaboration and Confrontation in Avalon Gameplay
This paper aims to investigate the open research problem of uncovering the
social behaviors of LLM-based agents. To achieve this goal, we adopt Avalon, a
representative communication game, as the environment and use system prompts to
guide LLM agents to play the game. While previous studies have conducted
preliminary investigations into gameplay with LLM agents, research on their
social behaviors is lacking. In this paper, we present a novel framework designed
to seamlessly adapt to Avalon gameplay. The core of our proposed framework is a
multi-agent system that enables efficient communication and interaction among
agents. We evaluate the performance of our framework based on metrics from two
perspectives: winning the game and analyzing the social behaviors of LLM
agents. Our results demonstrate the effectiveness of our framework in
generating adaptive and intelligent agents and highlight the potential of
LLM-based agents in addressing the challenges associated with dynamic social
environment interaction. By analyzing the social behaviors of LLM agents from
the aspects of both collaboration and confrontation, we provide insights into
the research and applications of this domain.
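The setup described above, system prompts guiding LLM agents through a communication game, can be sketched as a public discussion round in which each agent speaks in turn and sees the transcript so far. The `llm(system_prompt, transcript)` callable below is a hypothetical stand-in for any chat-model call, not the paper's actual framework.

```python
def discussion_round(agents, llm, transcript):
    """One public discussion round in an Avalon-style game.

    agents:     maps player name -> role-specific system prompt
                (e.g. Merlin vs. an ordinary Servant).
    llm:        stand-in chat-model call, llm(system_prompt, transcript)
                -> utterance string (hypothetical interface).
    transcript: list of (speaker, utterance) pairs, shared by all.
    """
    for name, system_prompt in agents.items():
        # Each agent conditions on its hidden role (the system prompt)
        # plus everything said publicly so far.
        utterance = llm(system_prompt, list(transcript))
        transcript.append((name, utterance))
    return transcript
```

Collaboration and confrontation behaviors can then be studied by analyzing the resulting transcripts across roles and rounds.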
Mechanisms for Automated Negotiation in State Oriented Domains
This paper lays part of the groundwork for a domain theory of negotiation,
that is, a way of classifying interactions so that it is clear, given a domain,
which negotiation mechanisms and strategies are appropriate. We define State
Oriented Domains, a general category of interaction. Necessary and sufficient
conditions for cooperation are outlined. We use the notion of worth in an
altered definition of utility, thus enabling agreements in a wider class of
joint-goal reachable situations. An approach is offered for conflict
resolution, and it is shown that even in a conflict situation, partial
cooperative steps can be taken by interacting agents (that is, agents in
fundamental conflict might still agree to cooperate up to a certain point). A
Unified Negotiation Protocol (UNP) is developed that can be used in all types
of encounters. It is shown that in certain borderline cooperative situations, a
partial cooperative agreement (i.e., one that does not achieve all agents'
goals) might be preferred by all agents, even though there exists a rational
agreement that would achieve all their goals. Finally, we analyze cases where
agents have incomplete information on the goals and worth of other agents.
First we consider the case where agents' goals are private information, and we
analyze what goal declaration strategies the agents might adopt to increase
their utility. Then, we consider the situation where the agents' goals (and
therefore stand-alone costs) are common knowledge, but the worth they attach to
their goals is private information. We introduce two mechanisms, one 'strict',
the other 'tolerant', and analyze their effects on the stability and efficiency
of negotiation outcomes.
Comment: See http://www.jair.org/ for any accompanying file
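The worth-altered utility and the preference for partial agreements described above can be made concrete in a small sketch. The shapes of the data structures here are illustrative assumptions, not the paper's formalism: each deal records, per agent, which goals it achieves and what the agent's steps cost.

```python
def utility(worth_of_goals, goals_achieved, cost):
    """Worth-based utility (sketch): total worth of the agent's
    goals that the deal achieves, minus the cost of the steps
    the agent must take under the deal."""
    return sum(worth_of_goals[g] for g in goals_achieved) - cost


def preferred_by_all(deal_a, deal_b, agents):
    """True if every agent weakly prefers deal_a to deal_b.
    Each deal maps agent -> (goals_achieved, cost); agents maps
    agent -> its private worth table."""
    return all(
        utility(agents[a], *deal_a[a]) >= utility(agents[a], *deal_b[a])
        for a in agents
    )


worth = {"g1": 5, "g2": 3}
agents = {"A": worth, "B": worth}
# A partial agreement: only g1 achieved, but cheaply.
partial = {"A": (["g1"], 1), "B": (["g1"], 1)}
# A full agreement achieving all goals, at much higher cost.
full = {"A": (["g1", "g2"], 8), "B": (["g1", "g2"], 8)}
```

Here `preferred_by_all(partial, full, agents)` holds: both agents prefer the cheap partial deal even though the full deal achieves all their goals, which is exactly the borderline-cooperative phenomenon the paper notes.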
The Present and Future of Game Theory
A broad nontechnical coverage of many of the developments in game theory since the 1950s is given together with some comments on important open problems and where some of the developments may take place. The nearly 90 references given serve only as a minimal guide to the many thousands of books and articles that have been written. The purpose here is to present a broad brush picture of the many areas of study and application that have come into being. The use of deep techniques flourishes best when it stays in touch with application. There is a vital symbiotic relationship between good theory and practice. The breakneck speed of development of game theory calls for an appreciation of both the many realities of conflict, coordination and cooperation and the abstract investigation of all of them.
Keywords: Game theory, Application and theory, Social sciences, Law, Experimental gaming, Conflict, Coordination and cooperation
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
As AI continues to advance, human-AI teams are inevitable. However, progress
in AI is routinely measured in isolation, without a human in the loop. It is
crucial to benchmark progress in AI, not just in isolation, but also in terms
of how it translates to helping humans perform certain tasks, i.e., the
performance of human-AI teams.
In this work, we design a cooperative game - GuessWhich - to measure human-AI
team performance in the specific context of the AI being a visual
conversational agent. GuessWhich involves live interaction between the human
and the AI. The AI, which we call ALICE, is provided an image which is unseen
by the human. Following a brief description of the image, the human questions
ALICE about this secret image to identify it from a fixed pool of images.
We measure performance of the human-ALICE team by the number of guesses it
takes the human to correctly identify the secret image after a fixed number of
dialog rounds with ALICE. We compare performance of the human-ALICE teams for
two versions of ALICE. Our human studies suggest a counterintuitive trend -
that while AI literature shows that one version outperforms the other when
paired with an AI questioner bot, we find that this improvement in AI-AI
performance does not translate to improved human-AI performance. This suggests
a mismatch between benchmarking of AI in isolation and in the context of
human-AI teams.
Comment: HCOMP 201
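The team metric described above, how many guesses the human needs to identify the secret image from a fixed pool, amounts to the rank of the secret image under whatever relevance ordering the dialog induces. A minimal sketch, with a hypothetical per-image score standing in for that ordering:

```python
def guesses_needed(scores, secret):
    """Guesses to hit the secret image when pool candidates are
    tried in descending order of a (hypothetical) relevance score
    built from the dialog: the secret's rank, 1 = first guess."""
    order = sorted(scores, key=scores.get, reverse=True)
    return order.index(secret) + 1


def mean_rank(trials):
    """Average rank over (scores, secret) trials -- a team-level
    metric of this shape; lower is better."""
    return sum(guesses_needed(s, sec) for s, sec in trials) / len(trials)
```

Comparing `mean_rank` for human-ALICE teams against ALICE paired with a questioner bot is what exposes the AI-AI versus human-AI mismatch the abstract reports.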
Collaborative action research for the governance of climate adaptation - foundations, conditions and pitfalls
This position paper serves as an introductory guide to designing and facilitating an action research process with stakeholders in the context of climate adaptation. Specifically, it is aimed at action researchers who aim to involve stakeholders and their expert knowledge in generating knowledge about their own condition and how it can be changed. The core philosophy of our research approach can be described as developing a powerful combination of practice-driven collaborative action research and theoretically-informed scientific research. Collaborative action research means that we take guidance from the hotspots as the primary source of questions, dilemmas and empirical data regarding the governance of adaptation, but also collaborate with them in testing insights and strategies, and evaluating their usefulness. The purpose is to develop effective, legitimate and resilient governance arrangements for climate adaptation. Scientific quality will be achieved by placing this co-production of knowledge in a well-founded and innovative theoretical framework, and through the involvement of the international consortium partners. This position paper provides a methodological starting point for the research program ‘Governance of Climate Adaptation’ and aims:
· To clarify the theoretical foundation of collaborative action research and the underlying ontological and epistemological principles;
· To give a historical overview of the development of action research and its different forms;
· To enhance the theoretical foundation of collaborative action research in the specific context of governance of climate adaptation;
· To translate the philosophy of collaborative action research into practical methods;
· To give an overview of the main conditions and pitfalls for action research in complex governance settings.
Finally, this position paper provides three key instruments developed to support Action Research in the hotspots: 1) Toolbox for AR in hotspots (chapter 6); 2) Set-up of a research design and action plan for AR in hotspots (chapter 7); 3) Quality checklist or guidance for AR in hotspots (chapter 8).