Approximating n-player behavioural strategy Nash equilibria using coevolution
Coevolutionary algorithms are plagued by a set of problems related to intransitivity that make it questionable what the end product of a coevolutionary run can achieve. The introduction of solution concepts into coevolution alleviated part of the issue; however, efficiently representing and achieving game-theoretic solution concepts is still not a trivial task. In this paper we propose a coevolutionary algorithm that approximates behavioural-strategy Nash equilibria in n-player zero-sum games by exploiting the minimax solution concept. To support our case we provide a set of experiments on games with both known and unknown equilibria. In the case of known equilibria, we confirm that our algorithm converges to the known solution, while in the case of unknown equilibria we observe steady progress towards Nash. Copyright 2011 ACM
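The abstract's coevolutionary algorithm is not reproduced here, but the minimax solution concept it exploits can be illustrated on a small two-player zero-sum game. The sketch below (an illustrative stand-in, not the paper's method) uses fictitious play, a simple iterative scheme whose empirical action frequencies approach a minimax (Nash) strategy pair:

```python
import numpy as np

# Payoff matrix for rock-paper-scissors (row player's payoff); zero-sum.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def fictitious_play(A, iterations=20000):
    """Approximate a minimax (Nash) strategy pair by repeatedly
    best-responding to the opponent's empirical action frequencies."""
    n, m = A.shape
    row_counts = np.zeros(n)
    col_counts = np.zeros(m)
    row_counts[0] = 1  # arbitrary initial actions
    col_counts[0] = 1
    for _ in range(iterations):
        # Row player best-responds to the column player's empirical mix.
        row_counts[np.argmax(A @ col_counts)] += 1
        # Column player best-responds, minimizing the row player's payoff.
        col_counts[np.argmin(row_counts @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

x, y = fictitious_play(A)
# Both empirical strategies approach the uniform equilibrium (1/3, 1/3, 1/3).
```

For rock-paper-scissors the unique Nash equilibrium is the uniform mixed strategy, so the empirical frequencies of both players drift toward (1/3, 1/3, 1/3); the same "steady progress towards Nash" is what the abstract reports for its coevolutionary runs.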
The Equivalence of Evolutionary Games and Distributed Monte Carlo Learning
This paper presents a tight relationship between evolutionary game theory and distributed intelligence models. After reviewing existing theories of replicator dynamics and distributed Monte Carlo learning, we formulate and prove the equivalence between these two models. The relationship is revealed not only from a theoretical viewpoint but also through experimental simulations of the models, taking a simple symmetric zero-sum game as an example. As a consequence, it is verified that seemingly chaotic macro dynamics generated by distributed micro-decisions can be explained with theoretical models.
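For the replicator-dynamics side of the equivalence, here is a minimal discrete-time (Euler) sketch of the replicator equation on a simple symmetric zero-sum game; rock-paper-scissors is an illustrative choice, and the paper's exact simulation setup is not reproduced:

```python
import numpy as np

# Symmetric zero-sum game: rock-paper-scissors payoffs.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - f_bar)."""
    f = A @ x        # fitness of each pure strategy against the population mix
    f_bar = x @ f    # population-average fitness (zero at the interior equilibrium)
    return x + dt * x * (f - f_bar)

x = np.array([0.5, 0.3, 0.2])
for _ in range(10000):
    x = replicator_step(x, A)
x = x / x.sum()  # renormalize accumulated floating-point drift
```

For this zero-sum game the exact dynamics orbit around the interior equilibrium rather than converging to it, which matches the abstract's point that seemingly chaotic macro dynamics can nevertheless be explained by the theoretical model.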
Fast Approximate Max-n Monte Carlo Tree Search for Ms Pac-Man
We present an application of Monte Carlo tree search (MCTS) to the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires almost real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five-player max-n tree representation of the game with limited tree-search depth. We performed a number of experiments using both the MCTS game agents (for Pac-Man and ghosts) and agents used in previous work (for ghosts). Performance-wise, our approach achieves excellent scores, outperforming previous non-MCTS approaches to the game by up to two orders of magnitude. © 2011 IEEE
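The paper's real-time max-n agent is far more involved, but the core UCT loop (selection, expansion, simulation, backpropagation) can be sketched on a toy two-player game; the subtraction game below and all names are illustrative choices, not the paper's implementation:

```python
import math, random

random.seed(0)

# Toy stand-in for a game tree: players alternately remove 1-3 stones,
# and whoever takes the last stone wins.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player   # `player` is to move here
        self.parent, self.move = parent, move
        self.children = []
        self.visits, self.wins = 0, 0.0  # wins for the player who moved INTO this node

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in range(1, min(3, self.stones) + 1) if m not in tried]

def rollout(stones, player):
    """Random playout; returns the winner (whoever takes the last stone)."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player
        player = 1 - player

def uct_search(stones, iterations=3000, c=1.4):
    root = Node(stones, player=0)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded, non-terminal nodes.
        while node.stones > 0 and not node.untried_moves():
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child if the node is non-terminal.
        if node.stones > 0:
            m = random.choice(node.untried_moves())
            child = Node(node.stones - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: a terminal node was won by the player who moved in.
        winner = 1 - node.player if node.stones == 0 else rollout(node.stones, node.player)
        # 4. Backpropagation.
        while node is not None:
            node.visits += 1
            if winner == 1 - node.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

best = uct_search(5)  # from 5 stones, taking 1 (leaving 4) is the winning move
```

The paper's setting differs in two key ways the sketch ignores: the max-n tree has five players rather than two, and the search must return a move within a real-time budget instead of a fixed iteration count.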
Evolutionary games on graphs
Game theory is one of the key paradigms behind many scientific disciplines
from biology to behavioral sciences to economics. In its evolutionary form and
especially when the interacting agents are linked in a specific social network
the underlying solution concepts and methods are very similar to those applied
in non-equilibrium statistical physics. This review gives a tutorial-type
overview of the field for physicists. The first three sections introduce the
necessary background in classical and evolutionary game theory from the basic
definitions to the most important results. The fourth section surveys the
topological complications implied by non-mean-field-type social network
structures in general. The last three sections discuss in detail the dynamic
behavior of three prominent classes of models: the Prisoner's Dilemma, the
Rock-Scissors-Paper game, and Competing Associations. The major theme of the
review is in what sense and how the graph structure of interactions can modify
and enrich the picture of long term behavioral patterns emerging in
evolutionary games.
Comment: Review, final version, 133 pages, 65 figures
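As a concrete instance of the Prisoner's Dilemma class discussed in the review, here is a minimal spatial-PD sketch in the spirit of Nowak-May lattice models; the lattice size, temptation payoff, and deterministic imitate-the-best update are illustrative choices, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(1)
L, b = 50, 1.5  # lattice size; temptation payoff T = b, with R = 1, S = P = 0
# 1 = cooperator, 0 = defector, on a periodic square lattice.
grid = rng.integers(0, 2, size=(L, L))

SHIFTS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # von Neumann neighbourhood

def step(grid):
    # Each site plays the weak PD against its four nearest neighbours.
    payoff = np.zeros(grid.shape, dtype=float)
    for dx, dy in SHIFTS:
        nb = np.roll(np.roll(grid, dx, 0), dy, 1)
        # Cooperators earn 1 per cooperating neighbour; defectors earn b.
        payoff += np.where(grid == 1, nb * 1.0, nb * b)
    # Imitation: copy the strategy of the best-scoring neighbour (or keep own).
    best_payoff = payoff.copy()
    best_strategy = grid.copy()
    for dx, dy in SHIFTS:
        nb_pay = np.roll(np.roll(payoff, dx, 0), dy, 1)
        nb_str = np.roll(np.roll(grid, dx, 0), dy, 1)
        better = nb_pay > best_payoff
        best_payoff = np.where(better, nb_pay, best_payoff)
        best_strategy = np.where(better, nb_str, best_strategy)
    return best_strategy

for _ in range(100):
    grid = step(grid)
coop_fraction = grid.mean()
```

Whether cooperation survives under this update depends on the temptation b and the neighbourhood structure, which is exactly the kind of graph-induced modification of long-term behaviour the review surveys.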
Informed Proposal Monte Carlo
Any search or sampling algorithm for solution of inverse problems needs
guidance to be efficient. Many algorithms collect and apply information about
the problem on the fly, and much improvement has been made in this way.
However, as a consequence of the No-Free-Lunch Theorem, the only way we can
ensure a significantly better performance of search and sampling algorithms is
to build in as much information about the problem as possible. In the special
case of Markov Chain Monte Carlo sampling (MCMC) we review how this is done
through the choice of proposal distribution, and we show how this way of adding
more information about the problem can be made particularly efficient when
based on an approximate physics model of the problem. A highly nonlinear
inverse scattering problem with a high-dimensional model space serves as an
illustration of the gain in efficiency through this approach.
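The central idea (encoding problem knowledge in the MCMC proposal) can be illustrated with a toy one-dimensional example. The sketch below uses a Langevin-style drift toward the mode as the "informed" proposal, together with the Metropolis-Hastings correction for its asymmetry; the target, step size, and drift are stand-ins, not the paper's scattering problem, where an approximate physics model would play the informing role:

```python
import math, random

random.seed(0)

def log_target(x):
    # Toy 1-D target: standard normal (stand-in for a posterior).
    return -0.5 * x * x

STEP = 0.5  # proposal step size (illustrative choice)

def proposal_mean(x):
    # "Informed" proposal: drift toward the mode using gradient information
    # (here the exact gradient -x of the toy target).
    return x + 0.5 * STEP * (-x)

def log_q(x_to, x_from):
    # Log-density (up to a constant) of proposing x_to from x_from.
    return -0.5 * (x_to - proposal_mean(x_from)) ** 2 / STEP

def mcmc(n=20000):
    x, samples = 0.0, []
    for _ in range(n):
        y = random.gauss(proposal_mean(x), math.sqrt(STEP))
        # Metropolis-Hastings ratio with the asymmetric-proposal correction.
        log_a = log_target(y) - log_target(x) + log_q(x, y) - log_q(y, x)
        if math.log(random.random()) < log_a:
            x = y
        samples.append(x)
    return samples

s = mcmc()
mean = sum(s) / len(s)
var = sum(v * v for v in s) / len(s) - mean ** 2
```

Dropping the `log_q` terms would silently bias the chain whenever the proposal is asymmetric, which is why the Hastings correction is essential once the proposal is made informative.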