Riemannian game dynamics
We study a class of evolutionary game dynamics defined by balancing a gain
determined by the game's payoffs against a cost of motion that captures the
difficulty with which the population moves between states. Costs of motion are
represented by a Riemannian metric, i.e., a state-dependent inner product on
the set of population states. The replicator dynamics and the (Euclidean)
projection dynamics are the archetypal examples of the class we study. Like
these representative dynamics, all Riemannian game dynamics satisfy certain
basic desiderata, including positive correlation and global convergence in
potential games. Moreover, when the underlying Riemannian metric satisfies a
Hessian integrability condition, the resulting dynamics preserve many further
properties of the replicator and projection dynamics. We examine the close
connections between Hessian game dynamics and reinforcement learning in normal
form games, extending and elucidating a well-known link between the replicator
dynamics and exponential reinforcement learning.
Comment: 47 pages, 12 figures; added figures and further simplified the
derivation of the dynamics
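The abstract names the replicator dynamics as the archetypal member of this class. A minimal sketch of those dynamics in a simple potential game may help fix ideas; the 2x2 coordination game, step size, and iteration count below are illustrative choices, not taken from the paper.

```python
import numpy as np

def replicator_step(x, payoff_matrix, dt=0.01):
    """One Euler step of the replicator dynamics: x_i' = x_i * (u_i - u_bar),
    where u_i is the payoff to strategy i and u_bar the population average."""
    u = payoff_matrix @ x          # per-strategy payoffs
    u_bar = x @ u                  # population-average payoff
    return x + dt * x * (u - u_bar)

# Illustrative 2x2 coordination game (a potential game); the population
# state should converge to one of the pure-strategy equilibria.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x = np.array([0.6, 0.4])           # initial population state on the simplex
for _ in range(2000):
    x = replicator_step(x, A)
print(x)  # mass concentrates on strategy 0
```

Note that the update preserves the simplex constraint exactly: the increments `x * (u - u_bar)` sum to zero whenever `x` sums to one.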
Deep learning for video game playing
In this article, we review recent Deep Learning advances in the context of
how they have been applied to play different types of video games such as
first-person shooters, arcade games, and real-time strategy games. We analyze
the unique requirements that different game genres pose to a deep learning
system and highlight important open challenges in the context of applying these
machine learning methods to video games, such as general game playing, dealing
with extremely large decision spaces, and sparse rewards.
Deep Q-Learning for Nash Equilibria: Nash-DQN
Model-free learning for multi-agent stochastic games is an active area of
research. Existing reinforcement learning algorithms, however, are often
restricted to zero-sum games, and are applicable only in small state-action
spaces or other simplified settings. Here, we develop a new data-efficient
Deep Q-learning methodology for model-free learning of Nash equilibria in
general-sum stochastic games. The algorithm uses a local linear-quadratic
expansion of the stochastic game, which leads to analytically solvable optimal
actions. The expansion is parametrized by deep neural networks to give it
sufficient flexibility to learn the environment without the need to experience
all state-action pairs. We study symmetry properties of the algorithm stemming
from label-invariant stochastic games and as a proof of concept, apply our
algorithm to learning optimal trading strategies in competitive electronic
markets.
Comment: 16 pages, 4 figures
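The key point of the abstract is that a local linear-quadratic expansion of each player's Q-function makes the equilibrium actions analytically solvable. A hedged sketch of that closed-form step for a hypothetical two-player, scalar-action case follows; the matrices P_i and vectors q_i stand in for quantities that the paper's neural networks would produce at a given state, and the specific numbers here are made up for illustration.

```python
import numpy as np

# Hypothetical quadratic Q-functions for two players with joint action
# a = (a_1, a_2):  Q_i(a) = -0.5 * a^T P_i a + q_i^T a + c_i.
# In the Nash-DQN setting, P_i and q_i would be state-dependent network
# outputs; here they are fixed constants.
P1 = np.array([[2.0, 0.5],
               [0.5, 1.0]])
q1 = np.array([1.0, 0.0])
P2 = np.array([[1.0, 0.3],
               [0.3, 2.0]])
q2 = np.array([0.0, 1.0])

# Nash condition: each player's own action is a best response, i.e. the
# first-order condition dQ_i/da_i = 0 holds for each i.  Stacking the two
# conditions gives a linear system M a = b with a closed-form solution.
M = np.array([[P1[0, 0], P1[0, 1]],   # player 1's FOC with respect to a_1
              [P2[1, 0], P2[1, 1]]])  # player 2's FOC with respect to a_2
b = np.array([q1[0], q2[1]])
a_star = np.linalg.solve(M, b)
print(a_star)  # the (unique) Nash equilibrium joint action
```

This is the sense in which the quadratic expansion yields "analytically solvable optimal actions": no inner optimization loop is needed, only a small linear solve per state.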