Compact Representation of Value Function in Partially Observable Stochastic Games
Value methods for solving stochastic games with partial observability model
the uncertainty about states of the game as a probability distribution over
possible states. The dimension of this belief space equals the number of states.
For many practical problems, for example in security, there are exponentially
many possible states, which prevents these algorithms from scaling to
real-world problems. To this end, we propose an abstraction technique that
addresses this issue of the curse of dimensionality by projecting
high-dimensional beliefs to characteristic vectors of significantly lower
dimension (e.g., marginal probabilities). Our two main contributions are (1) a
novel compact representation of the uncertainty in partially observable
stochastic games and (2) a novel algorithm that operates over this compact
representation and builds on existing state-of-the-art algorithms for solving
stochastic games with partial observability. Experimental evaluation confirms
that the new algorithm over the compact representation dramatically improves
scalability compared to the state of the art.
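The projection described above can be illustrated with a minimal sketch (an assumed, simplified version, not the paper's actual algorithm): a joint belief over 2^n factored states is summarized by n marginal probabilities, one per binary state feature.

```python
# Hedged sketch: project a high-dimensional joint belief onto low-dimensional
# marginal probabilities. The factored binary-state model is an illustrative
# assumption, not the paper's exact construction.
from itertools import product

def marginals(belief, n):
    """Project a joint belief over n-bit states onto per-bit marginals.

    belief: dict mapping an n-tuple of 0/1 features to its probability.
    Returns a list of n marginal probabilities P(feature_i = 1).
    """
    m = [0.0] * n
    for state, p in belief.items():
        for i, bit in enumerate(state):
            if bit:
                m[i] += p
    return m

# Uniform belief over all 2^3 joint states: each feature is 1 with prob. 1/2,
# so a 2^3-dimensional belief compresses to a 3-dimensional characteristic vector.
n = 3
uniform = {s: 1 / 2**n for s in product((0, 1), repeat=n)}
print(marginals(uniform, n))  # [0.5, 0.5, 0.5]
```

The belief dimension drops from exponential (2^n) to linear (n) in the number of state features, at the cost of discarding correlations between features.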
Deep Reinforcement Learning from Self-Play in Imperfect-Information Games
Many real-world applications can be described as large-scale games of
imperfect information. To deal with these challenging domains, prior work has
focused on computing Nash equilibria in a handcrafted abstraction of the
domain. In this paper we introduce the first scalable end-to-end approach to
learning approximate Nash equilibria without prior domain knowledge. Our method
combines fictitious self-play with deep reinforcement learning. When applied to
Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium,
whereas common reinforcement learning methods diverged. In Limit Texas
Hold'em, a poker game of real-world scale, NFSP learnt a strategy that
approached the performance of state-of-the-art, superhuman algorithms based on
significant domain expertise.
Comment: updated version, incorporating conference feedback
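The combination of fictitious self-play with reinforcement learning hinges on mixing two policies at action time. The sketch below shows only that mixing step, with stand-in callables in place of NFSP's actual networks (in the real method, the best response is a DQN and the average policy is a supervised network; `eta` is the anticipatory parameter).

```python
# Hedged sketch of the NFSP action-selection mixture. The callables and the
# returned bookkeeping tag are illustrative assumptions; the real agent stores
# best-response actions in a supervised-learning buffer to fit its average policy.
import random

def nfsp_act(state, best_response, average_policy, eta=0.1, rng=random):
    """With probability eta act from the best-response policy (and mark the
    transition for the supervised buffer); otherwise act from the average policy."""
    if rng.random() < eta:
        return best_response(state), "store_in_sl_buffer"
    return average_policy(state), "skip_sl_buffer"

# Edge cases: eta=0 always plays the average policy, eta=1 always the best response.
a, _ = nfsp_act(None, lambda s: "raise", lambda s: "call", eta=0.0)
print(a)  # call
```

Averaging the best responses over time is what drives the joint behaviour toward a Nash equilibrium, whereas a pure best-response learner can cycle, which matches the divergence of common RL methods reported above.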
Public Information Representation for Adversarial Team Games
The peculiarity of adversarial team games resides in the asymmetric
information available to the team members during the play, which makes the
equilibrium computation problem hard even with zero-sum payoffs. The algorithms
available in the literature work with implicit representations of the strategy
space and mainly resort to Linear Programming and column generation techniques
to incrementally enlarge the strategy space. Such representations prevent the
adoption of standard tools such as abstraction generation, game solving, and
subgame solving, which have proven crucial when solving huge, real-world
two-player zero-sum games. Unlike these works, we answer the question
of whether there is any suitable game representation enabling the adoption of
those tools. In particular, our algorithms convert a sequential team game with
adversaries to a classical two-player zero-sum game. In this converted game,
the team is transformed into a single coordinator player who only knows
information common to the whole team and prescribes to the players an action
for any possible private state. Interestingly, we show that our game is more
expressive than the original extensive-form game as any state/action
abstraction of the extensive-form game can be captured by our representation,
while the reverse does not hold. Due to the NP-hard nature of the problem, the
resulting Public Team game may be exponentially larger than the original one.
To limit this explosion, we provide three algorithms, each returning an
information-lossless abstraction that dramatically reduces the size of the
tree. These abstractions can be produced without generating the original game
tree. Finally, we show the effectiveness of the proposed approach by presenting
experimental results on Kuhn and Leduc Poker games, obtained by applying
state-of-the-art algorithms for two-player zero-sum games on the converted games.
Comment: 19 pages, 7 figures, Best Paper Award in Cooperative AI Workshop at
NeurIPS 202
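The exponential blow-up mentioned in the abstract can be made concrete with a toy sketch (an illustrative assumption, not the paper's construction): the coordinator does not pick a single action but commits to a prescription, i.e. an action for every possible private state a team member might hold.

```python
# Hedged sketch: enumerate the coordinator's moves in the converted game.
# Each move is a prescription mapping every private state to an action, so the
# coordinator's action set has |actions|**|private_states| elements.
from itertools import product

def prescriptions(private_states, actions):
    """Enumerate all maps private_state -> action the coordinator can issue."""
    return [dict(zip(private_states, combo))
            for combo in product(actions, repeat=len(private_states))]

# Poker-flavoured toy example: three possible private cards, two actions.
plans = prescriptions(["card_J", "card_Q", "card_K"], ["check", "bet"])
print(len(plans))  # 2**3 = 8 prescriptions
```

This exponential growth in the coordinator's move set is exactly why the information-lossless abstractions proposed in the paper matter: they shrink the converted tree without losing strategic content.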
Multimedia search without visual analysis: the value of linguistic and contextual information
This paper addresses the focus of this special issue by analyzing the potential contribution of linguistic content and other non-image aspects to the processing of audiovisual data. It summarizes the various ways in which linguistic content analysis contributes to enhancing the semantic annotation of multimedia content and, as a consequence, to improving the effectiveness of conceptual media access tools. A number of techniques are presented, including the time-alignment of textual resources, audio and speech processing, content reduction and reasoning tools, and the exploitation of surface features.
The Hanabi Challenge: A New Frontier for AI Research
From the early days of computing, games have been important testbeds for
studying how well machines can do sophisticated decision making. In recent
years, machine learning has made dramatic advances with artificial agents
reaching superhuman performance in challenge domains like Go, Atari, and some
variants of poker. As with their predecessors, chess, checkers, and
backgammon, these game domains have driven research by providing sophisticated
yet well-defined challenges for artificial intelligence practitioners. We
continue this tradition by proposing the game of Hanabi as a new challenge
domain with novel problems that arise from its combination of purely
cooperative gameplay with two to five players and imperfect information. In
particular, we argue that Hanabi elevates reasoning about the beliefs and
intentions of other agents to the foreground. We believe developing novel
techniques for such theory of mind reasoning will not only be crucial for
success in Hanabi, but also in broader collaborative efforts, especially those
with human partners. To facilitate future research, we introduce the
open-source Hanabi Learning Environment, propose an experimental framework for
the research community to evaluate algorithmic advances, and assess the
performance of current state-of-the-art techniques.
Comment: 32 pages, 5 figures, In Press (Artificial Intelligence)