Cooperative Epistemic Multi-Agent Planning for Implicit Coordination
Epistemic planning can be used for decision making in multi-agent situations
with distributed knowledge and capabilities. Recently, Dynamic Epistemic Logic
(DEL) has been shown to provide a very natural and expressive framework for
epistemic planning. We extend the DEL-based epistemic planning framework to
include perspective shifts, allowing us to define new notions of sequential and
conditional planning with implicit coordination. With these, it is possible to
solve planning tasks with joint goals in a decentralized manner without the
agents having to negotiate about and commit to a joint policy at plan time.
First we define the central planning notions and sketch the implementation of a
planning system built on those notions. Afterwards we provide some case studies
in order to evaluate the planner empirically and to show that the concept is
useful for multi-agent systems in practice.
Comment: In Proceedings M4M9 2017, arXiv:1703.0173
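The idea of perspective shifts can be illustrated with a toy sketch (all names and the two-box scenario are hypothetical, not the paper's planner): epistemic states are modelled as sets of possible worlds, and an agent's perspective on a world is the set of worlds it cannot distinguish from it. Checking each plan step from the viewpoint of the agent who must execute it is what lets coordination stay implicit:

```python
# Toy model: w1 = key in left box, w2 = key in right box.
# Indistinguishability relation per agent: alice has looked inside the boxes,
# bob has not, so bob cannot tell w1 and w2 apart.
INDIST = {
    "alice": {"w1": {"w1"}, "w2": {"w2"}},
    "bob":   {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}},
}

def knows(agent, world, fact_worlds):
    """An agent knows a fact iff it holds in every world it considers possible."""
    return INDIST[agent][world] <= fact_worlds

key_left = {"w1"}  # the fact "the key is in the left box"

print(knows("alice", "w1", key_left))  # True:  alice can act on this herself
print(knows("bob", "w1", key_left))    # False: a plan step by bob would fail

# A public announcement by alice matters through the perspective shift it
# causes: afterwards bob's indistinguishability collapses, and he can recognise
# and execute his part of the plan without any negotiated joint policy.
INDIST["bob"] = {"w1": {"w1"}, "w2": {"w2"}}
print(knows("bob", "w1", key_left))    # True
```

The planner described above searches over such epistemic states, shifting perspective to whichever agent acts next before testing a step's applicability.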
Mechanisms for Automated Negotiation in State Oriented Domains
This paper lays part of the groundwork for a domain theory of negotiation,
that is, a way of classifying interactions so that it is clear, given a domain,
which negotiation mechanisms and strategies are appropriate. We define State
Oriented Domains, a general category of interaction. Necessary and sufficient
conditions for cooperation are outlined. We use the notion of worth in an
altered definition of utility, thus enabling agreements in a wider class of
joint-goal reachable situations. An approach is offered for conflict
resolution, and it is shown that even in a conflict situation, partial
cooperative steps can be taken by interacting agents (that is, agents in
fundamental conflict might still agree to cooperate up to a certain point). A
Unified Negotiation Protocol (UNP) is developed that can be used in all types
of encounters. It is shown that in certain borderline cooperative situations, a
partial cooperative agreement (i.e., one that does not achieve all agents'
goals) might be preferred by all agents, even though there exists a rational
agreement that would achieve all their goals. Finally, we analyze cases where
agents have incomplete information on the goals and worth of other agents.
First we consider the case where agents' goals are private information, and we
analyze what goal declaration strategies the agents might adopt to increase
their utility. Then, we consider the situation where the agents' goals (and
therefore stand-alone costs) are common knowledge, but the worth they attach to
their goals is private information. We introduce two mechanisms, one 'strict',
the other 'tolerant', and analyze their effects on the stability and efficiency
of negotiation outcomes.
Comment: See http://www.jair.org/ for any accompanying file
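The borderline case described above, where a partial agreement beats a full one, can be sketched with hypothetical numbers under the paper's worth-based utility (utility = worth of the goals achieved minus the cost incurred; all figures below are illustrative, not from the paper):

```python
# Worth-based utility as used in the abstract: worth achieved minus cost.
def utility(worth_achieved, cost):
    return worth_achieved - cost

# Stand-alone: each agent achieves its own goal alone at high cost.
standalone = {"a": utility(10, 8), "b": utility(10, 8)}   # both get 2

# A rational full agreement achieving both goals, but expensive to execute.
full_deal = {"a": utility(10, 9), "b": utility(10, 9)}    # both get 1

# A partial cooperative agreement: neither goal is fully achieved
# (lower worth), but the joint cost is much smaller.
partial = {"a": utility(6, 3), "b": utility(6, 3)}        # both get 3

# Both agents prefer the partial deal even though a rational agreement
# achieving all their goals exists -- the borderline cooperative situation.
assert all(partial[i] > full_deal[i] and partial[i] > standalone[i]
           for i in ("a", "b"))
```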
The Hanabi Challenge: A New Frontier for AI Research
From the early days of computing, games have been important testbeds for
studying how well machines can do sophisticated decision making. In recent
years, machine learning has made dramatic advances with artificial agents
reaching superhuman performance in challenge domains like Go, Atari, and some
variants of poker. As with their predecessors chess, checkers, and
backgammon, these game domains have driven research by providing sophisticated
yet well-defined challenges for artificial intelligence practitioners. We
continue this tradition by proposing the game of Hanabi as a new challenge
domain with novel problems that arise from its combination of purely
cooperative gameplay with two to five players and imperfect information. In
particular, we argue that Hanabi elevates reasoning about the beliefs and
intentions of other agents to the foreground. We believe developing novel
techniques for such theory of mind reasoning will not only be crucial for
success in Hanabi, but also in broader collaborative efforts, especially those
with human partners. To facilitate future research, we introduce the
open-source Hanabi Learning Environment, propose an experimental framework for
the research community to evaluate algorithmic advances, and assess the
performance of current state-of-the-art techniques.
Comment: 32 pages, 5 figures, In Press (Artificial Intelligence)
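What makes Hanabi's imperfect information distinctive is that each player sees every hand except their own, so a player's belief about their own cards is a candidate set pruned by the hints others choose to give. The following is a toy sketch of that belief update (a hypothetical simplification, not the Hanabi Learning Environment API):

```python
# Simplified deck: 3 colors x 3 ranks, one copy each (real Hanabi differs).
COLORS = ["R", "G", "B"]
RANKS = [1, 2, 3]
DECK = [(c, r) for c in COLORS for r in RANKS]

# Belief about one of my own cards: initially any card is possible.
my_belief = set(DECK)

def apply_hint(belief, attribute, value, positive=True):
    """Prune candidate cards by a color/rank hint about this card slot."""
    idx = 0 if attribute == "color" else 1
    return {card for card in belief if (card[idx] == value) == positive}

my_belief = apply_hint(my_belief, "color", "R")                # "this card is red"
my_belief = apply_hint(my_belief, "rank", 1, positive=False)   # "it is not a 1"
print(sorted(my_belief))  # [('R', 2), ('R', 3)]
```

Theory-of-mind reasoning enters one level up: hints are scarce, so a strong partner also asks *why* this hint was chosen over the alternatives, which prunes the belief further than the hint's literal content.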
Multi-Agent Cooperation for Particle Accelerator Control
We present practical investigations in a real industrial controls environment
for justifying theoretical DAI (Distributed Artificial Intelligence) results,
and we discuss theoretical aspects of practical investigations for
accelerator control and operation. A generalized hypothesis is introduced,
based on a unified view of control, monitoring, diagnosis, maintenance and
repair tasks leading to a general method of cooperation for expert systems
by exchanging hypotheses. This has been tested for task and result sharing
cooperation scenarios. Generalized hypotheses also allow us to treat the
repetitive diagnosis-recovery cycle as task sharing cooperation. Problems
with such a loop or even recursive calls between the different agents are
discussed.
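The hypothesis-exchange style of cooperation sketched above can be illustrated with a minimal toy (class and fault names are hypothetical, not from the paper): one expert agent proposes hypotheses from its own evidence, another tests those it is competent for, and the confirmed hypothesis hands off the recovery task.

```python
# Minimal sketch of result-sharing cooperation between expert agents.
class Expert:
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge  # hypothesis -> whether this agent confirms it

    def propose(self):
        """Hypotheses this agent can raise from its own observations."""
        return set(self.knowledge)

    def test(self, hypothesis):
        """Whether this agent's evidence confirms the hypothesis."""
        return self.knowledge.get(hypothesis, False)

# The monitoring agent raises candidate faults it cannot itself confirm;
# the diagnosis agent tests them against its own knowledge.
monitor = Expert("monitor", {"magnet_fault": False, "rf_trip": False})
diagnoser = Expert("diagnoser", {"rf_trip": True})

confirmed = [h for h in monitor.propose() if diagnoser.test(h)]
print(confirmed)  # ['rf_trip'] -> this diagnosis triggers the recovery task
```

Iterating this exchange until a hypothesis is confirmed and its recovery action completes is one way to read the repetitive diagnosis-recovery cycle as task-sharing cooperation.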