Courtesy as a Means to Coordinate
We investigate the problem of multi-agent coordination under rationality
constraints, specifically problems such as role allocation, task assignment,
and resource allocation. Inspired by human behavior, we propose a framework (CA^3NONY)
that enables fast convergence to efficient and fair allocations based on a
simple convention of courtesy. We prove that following this convention induces
a strategy that constitutes an ε-subgame-perfect equilibrium of the
repeated allocation game with discounting. Simulation results highlight the
effectiveness of CA^3NONY as compared to state-of-the-art bandit algorithms,
as it converges more than two orders of magnitude faster and achieves higher
efficiency, fairness, and average payoff.

Comment: Accepted at AAMAS 2019 (International Conference on Autonomous Agents
and Multiagent Systems).
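The courtesy idea described above can be illustrated with a toy repeated allocation game. The rule below (an agent that has already been served yields to agents that have not) is an illustrative stand-in for the paper's convention, not its exact definition; all names and parameters are hypothetical.

```python
import random

def simulate(n_agents=4, rounds=12, seed=0):
    """Toy repeated allocation game: one indivisible resource per round.

    Courtesy convention (illustrative, not the paper's exact rule):
    agents that have already been served abstain out of courtesy, so the
    resource goes to one of the not-yet-served agents, chosen uniformly.
    """
    rng = random.Random(seed)
    served = set()      # agents served in the current cycle
    history = []        # winner of each round
    for _ in range(rounds):
        needy = [a for a in range(n_agents) if a not in served]
        if not needy:   # everyone has been served: start a new cycle
            served.clear()
            needy = list(range(n_agents))
        winner = rng.choice(needy)
        served.add(winner)
        history.append(winner)
    return history

hist = simulate()
# within every block of n_agents rounds, each agent is served exactly once,
# so the allocation is fair by construction and converges immediately
```

Under this sketch, fairness holds in every cycle rather than only in the limit, which is the intuition behind courtesy outperforming exploration-heavy bandit strategies.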
Belief and Truth in Hypothesised Behaviours
There is a long history in game theory on the topic of Bayesian or "rational"
learning, in which each player maintains beliefs over a set of alternative
behaviours, or types, for the other players. This idea has gained increasing
interest in the artificial intelligence (AI) community, where it is used as a
method to control a single agent in a system composed of multiple agents with
unknown behaviours. The idea is to hypothesise a set of types, each specifying
a possible behaviour for the other agents, and to plan our own actions with
respect to those types which we believe are most likely, given the observed
actions of the agents. The game theory literature studies this idea primarily
in the context of equilibrium attainment. In contrast, many AI applications
have a focus on task completion and payoff maximisation. With this perspective
in mind, we identify and address a spectrum of questions pertaining to belief
and truth in hypothesised types. We formulate three basic ways to incorporate
evidence into posterior beliefs and show when the resulting beliefs are
correct, and when they may fail to be correct. Moreover, we demonstrate that
prior beliefs can have a significant impact on our ability to maximise payoffs
in the long-term, and that they can be computed automatically with consistent
performance effects. Furthermore, we analyse the conditions under which we are
able to complete our task optimally, despite inaccuracies in the hypothesised
types. Finally, we show how the correctness of hypothesised types can be
ascertained during the interaction via an automated statistical analysis.

Comment: 44 pages; final manuscript published in Artificial Intelligence (AIJ).
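The core belief-update step described above is a standard Bayesian posterior over hypothesised types: the product of each type's likelihood for the observed actions, reweighted by the prior. The sketch below shows one of the simplest ways to incorporate evidence; the type names and behaviours are invented for illustration.

```python
def posterior_over_types(prior, types, observed_actions):
    """P(type | history) proportional to P(type) * prod_t P(a_t | type).

    `types` maps a hypothesised type to its action distribution; both the
    names and the distributions here are illustrative assumptions.
    """
    post = dict(prior)
    for a in observed_actions:
        for k in post:
            post[k] *= types[k].get(a, 0.0)
    z = sum(post.values())
    if z == 0.0:
        return dict(prior)  # every type ruled out; fall back to the prior
    return {k: v / z for k, v in post.items()}

types = {
    "cooperator": {"C": 0.9, "D": 0.1},
    "defector":   {"C": 0.1, "D": 0.9},
}
prior = {"cooperator": 0.5, "defector": 0.5}
post = posterior_over_types(prior, types, ["C", "C", "D"])
# posterior mass shifts toward "cooperator" after observing C, C, D
```

Note the failure mode the abstract alludes to: if no hypothesised type assigns positive probability to the observed history, the posterior is undefined, and the beliefs cannot be correct no matter how much evidence arrives.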
Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems
Much research in artificial intelligence is concerned with the development of
autonomous agents that can interact effectively with other agents. An important
aspect of such agents is the ability to reason about the behaviours of other
agents, by constructing models which make predictions about various properties
of interest (such as actions, goals, beliefs) of the modelled agents. A variety
of modelling approaches now exist which vary widely in their methodology and
underlying assumptions, catering to the needs of the different sub-communities
within which they were developed and reflecting the different practical uses
for which they are intended. The purpose of the present article is to provide a
comprehensive survey of the salient modelling methods which can be found in the
literature. The article concludes with a discussion of open problems which may
form the basis for fruitful future research.

Comment: Final manuscript (46 pages), published in Artificial Intelligence
Journal. The arXiv version also contains a table of contents after the
abstract, but is otherwise identical to the AIJ version. Keywords: autonomous
agents, multiagent systems, modelling other agents, opponent modelling.
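One of the simplest modelling approaches the survey covers is policy reconstruction from observed actions. A minimal sketch, assuming a maximum-likelihood frequency model with add-one (Laplace) smoothing; the game and action names are invented for illustration:

```python
from collections import Counter

def fitted_policy(observed_actions, action_set):
    """Frequency-count model of another agent's action distribution.

    Add-one smoothing keeps actions that were never observed possible,
    so the model never assigns zero probability to a legal action.
    """
    counts = Counter(observed_actions)
    total = len(observed_actions) + len(action_set)
    return {a: (counts[a] + 1) / total for a in action_set}

model = fitted_policy(["rock", "rock", "paper"],
                      ["rock", "paper", "scissors"])
# the modelled agent is estimated to play "rock" half the time
```

A planner can then best-respond to `model` as if it were the other agent's true policy, which is the basic pattern shared by many of the surveyed methods.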