Higher Order Game Dynamics
Continuous-time game dynamics are typically first order systems where payoffs
determine the growth rate of the players' strategy shares. In this paper, we
investigate what happens beyond first order by viewing payoffs as higher order
forces of change, specifying e.g. the acceleration of the players' evolution
instead of its velocity (a viewpoint which emerges naturally when it comes to
aggregating empirical data of past instances of play). To that end, we derive a
wide class of higher order game dynamics, generalizing first order imitative
dynamics, and, in particular, the replicator dynamics. We show that strictly
dominated strategies become extinct in n-th order payoff-monotonic dynamics n
orders as fast as in the corresponding first order dynamics; furthermore, in
stark contrast to first order, weakly dominated strategies also become extinct
for n>1. All in all, higher order payoff-monotonic dynamics lead to the
elimination of weakly dominated strategies, followed by the iterated deletion
of strictly dominated strategies, thus providing a dynamic justification of the
well-known epistemic rationalizability process of Dekel and Fudenberg (1990).
Finally, we also establish a higher order analogue of the folk theorem of
evolutionary game theory, and we show that convergence to strict equilibria
in n-th order dynamics is n orders as fast as in first order.
Comment: 32 pages, 6 figures; to appear in the Journal of Economic Theory. Updated material on the microfoundations of the dynamics.
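The score-based viewpoint above (payoffs entering as the n-th derivative of an aggregate performance score, with strategies obtained through a logistic choice map) can be sketched numerically. The integrator, the payoff matrix, and the step sizes below are illustrative choices, not taken from the paper:

```python
import numpy as np

def simulate(order, A, dt=0.01, steps=20_000):
    """Integrate d^n y / dt^n = u(x) with x = softmax(y), where u = A @ x
    is the payoff vector (explicit Euler, for illustration only)."""
    k = A.shape[0]
    # state: the score y and its first (order - 1) time derivatives
    state = [np.zeros(k) for _ in range(order)]
    for _ in range(steps):
        z = state[0] - state[0].max()
        x = np.exp(z) / np.exp(z).sum()    # logistic choice map
        u = A @ x
        for i in range(order - 1):         # chain of integrators
            state[i] = state[i] + dt * state[i + 1]
        state[-1] = state[-1] + dt * u     # payoffs drive the top derivative
    z = state[0] - state[0].max()
    return np.exp(z) / np.exp(z).sum()

# strategy 0 strictly dominates strategy 1
A = np.array([[2.0, 2.0], [1.0, 1.0]])
x1 = simulate(order=1, A=A)   # replicator-like first order dynamics
x2 = simulate(order=2, A=A)   # second order: payoffs set the acceleration
print(x1[1], x2[1])  # the dominated share is far smaller at second order
```

At first order this score scheme yields exponential-weights behavior (hence replicator-like trajectories); at second order the score gap grows quadratically in time, so the dominated strategy's share vanishes much faster, in line with the abstract's claim.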
Inertial game dynamics and applications to constrained optimization
Aiming to provide a new class of game dynamics with good long-term
rationality properties, we derive a second-order inertial system that builds on
the widely studied "heavy ball with friction" optimization method. By
exploiting a well-known link between the replicator dynamics and the
Shahshahani geometry on the space of mixed strategies, the dynamics are stated
in a Riemannian geometric framework where trajectories are accelerated by the
players' unilateral payoff gradients and slow down near Nash equilibria.
Surprisingly (and in stark contrast to another second-order variant of the
replicator dynamics), the inertial replicator dynamics are not well-posed; on
the other hand, it is possible to obtain a well-posed system by endowing the
mixed strategy space with a different Hessian-Riemannian (HR) metric structure,
and we characterize those HR geometries that do so. In the single-agent version
of the dynamics (corresponding to constrained optimization over simplex-like
objects), we show that regular maximum points of smooth functions attract all
nearby solution orbits with low initial speed. More generally, we establish an
inertial variant of the so-called "folk theorem" of evolutionary game theory
and we show that strict equilibria are attracting in asymmetric
(multi-population) games - provided of course that the dynamics are well-posed.
A similar asymptotic stability result is obtained for evolutionarily stable
strategies in symmetric (single-population) games.
Comment: 30 pages, 4 figures; significantly revised paper structure and added new material on Euclidean embeddings and evolutionarily stable strategies.
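For reference, the "heavy ball with friction" prototype that the inertial dynamics build on can be written down in its plain Euclidean form; the paper's actual system is its Riemannian (Shahshahani/Hessian-Riemannian) analogue on the simplex. The objective, step size, and friction coefficient below are hypothetical:

```python
import numpy as np

def heavy_ball_max(grad, x0, lr=0.01, friction=0.9, steps=500):
    """Discrete heavy ball with friction, maximizing a smooth function:
    the velocity accumulates the gradient 'force' and is damped each step."""
    x = np.array(x0, float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = friction * v + lr * grad(x)   # inertia plus gradient force
        x = x + v
    return x

# concave toy objective f(x, y) = -(x - 1)^2 - (y + 2)^2, maximizer (1, -2)
grad = lambda z: np.array([-2.0 * (z[0] - 1.0), -2.0 * (z[1] + 2.0)])
x_star = heavy_ball_max(grad, [0.0, 0.0])
print(x_star)  # ≈ [1, -2]
```

The damped velocity is what makes trajectories "slow down" near critical points: at a regular maximum the gradient force vanishes and friction dissipates the remaining kinetic energy.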
Penalty-regulated dynamics and robust learning procedures in games
Starting from a heuristic learning scheme for N-person games, we derive a new
class of continuous-time learning dynamics consisting of a replicator-like
drift adjusted by a penalty term that renders the boundary of the game's
strategy space repelling. These penalty-regulated dynamics are equivalent to
players keeping an exponentially discounted aggregate of their on-going payoffs
and then using a smooth best response to pick an action based on these
performance scores. Owing to this inherent duality, the proposed dynamics
satisfy a variant of the folk theorem of evolutionary game theory and they
converge to (arbitrarily precise) approximations of Nash equilibria in
potential games. Motivated by applications to traffic engineering, we exploit
this duality further to design a discrete-time, payoff-based learning algorithm
which retains these convergence properties and only requires players to observe
their in-game payoffs; moreover, the algorithm remains robust in the presence
of stochastic perturbations and observation errors, and it does not require any
synchronization between players.
Comment: 33 pages, 3 figures.
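The duality described above can be sketched as follows: each player keeps an exponentially discounted aggregate of payoffs and plays a logit map (a smooth best response) to those scores. The game, discount rate `lam`, and temperature `eta` are hypothetical choices, not the paper's:

```python
import numpy as np

def logit(scores, eta):
    """Smooth best response: softmax of scores / eta."""
    z = np.asarray(scores) / eta
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def discounted_score_learning(A, B, T=5000, dt=0.1, lam=0.01, eta=0.1):
    """Both players aggregate payoffs with exponential discounting
    (y' = u - lam * y) and play the logit of their scores."""
    yA = np.zeros(A.shape[0])
    yB = np.zeros(B.shape[1])
    for _ in range(T):
        x, y = logit(yA, eta), logit(yB, eta)
        yA = yA + dt * (A @ y - lam * yA)      # player 1's scores
        yB = yB + dt * (B.T @ x - lam * yB)    # player 2's scores
    return logit(yA, eta), logit(yB, eta)

# 2x2 coordination game (a potential game)
A = np.array([[2.0, 0.0], [0.0, 1.0]])
x, y = discounted_score_learning(A, A)
print(x, y)
```

In this potential game the scheme settles on an approximation of the payoff-dominant pure equilibrium; the approximation sharpens as the temperature `eta` shrinks, matching the "arbitrarily precise approximations of Nash equilibria" claim.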
Distributed strategy-updating rules for aggregative games of multi-integrator systems with coupled constraints
In this paper, we explore aggregative games over networks of multi-integrator
agents with coupled constraints. To reach the generalized Nash equilibrium of an
aggregative game, a distributed strategy-updating rule is proposed by a
combination of the coordination of Lagrange multipliers and the estimation of
the aggregator. Each player has access only to partial-decision information and
communicates with his neighbors in a weight-balanced digraph which
characterizes players' preferences as to the values of information received
from neighbors. We first consider networks of double-integrator agents and then
focus on multi-integrator agents. The effectiveness of the proposed
strategy-updating rules is demonstrated by analyzing the convergence of
corresponding dynamical systems via Lyapunov stability theory, singular
perturbation theory, and passivity theory. Numerical examples are given to
illustrate our results.
Comment: 9 pages, 4 figures.
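As a minimal illustration of gradient play with double-integrator agents, consider a Cournot game, a standard aggregative game whose Nash equilibrium is known in closed form. This is a centralized toy sketch (every agent reads the true aggregate), not the paper's distributed rule with Lagrange multipliers and aggregator estimation; all parameters are hypothetical:

```python
import numpy as np

def cournot_gradient_play(N=3, a=1.0, b=10.0, c=1.0,
                          damping=2.0, dt=0.01, steps=20_000):
    """Double-integrator gradient play for a Cournot game: each agent's
    acceleration follows its own payoff gradient, with velocity damping.
    Payoff of agent i: x_i * (b - a * sum(x) - c)."""
    x = np.zeros(N)          # production quantities
    v = np.zeros(N)          # their velocities (double integrator)
    for _ in range(steps):
        price = b - a * x.sum()       # aggregate-dependent price
        grad = price - a * x - c      # d/dx_i of x_i * (price - c)
        v = v + dt * (grad - damping * v)
        x = x + dt * v
    return x

x = cournot_gradient_play()
print(x)  # each quantity ≈ (b - c) / (a * (N + 1)) = 2.25
```

The damping term plays the role of the velocity feedback needed to stabilize second-order (double-integrator) agents around the equilibrium.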
Imitation Dynamics with Payoff Shocks
We investigate the impact of payoff shocks on the evolution of large
populations of myopic players that employ simple strategy revision protocols
such as the "imitation of success". In the noiseless case, this process is
governed by the standard (deterministic) replicator dynamics; in the presence
of noise however, the induced stochastic dynamics are different from previous
versions of the stochastic replicator dynamics (such as the aggregate-shocks
model of Fudenberg and Harris, 1992). In this context, we show that strict
equilibria are always stochastically asymptotically stable, irrespective of the
magnitude of the shocks; on the other hand, in the high-noise regime,
non-equilibrium states may also become stochastically asymptotically stable and
dominated strategies may survive in perpetuity (they become extinct if the
noise is low). Such behavior is eliminated if players are less myopic and
revise their strategies based on their cumulative payoffs. In this case, we
obtain a second order stochastic dynamical system whose attracting states
coincide with the game's strict equilibria and where dominated strategies
become extinct (a.s.), no matter the noise level.
Comment: 25 pages.
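A crude Euler-Maruyama sketch of replicator dynamics with payoff shocks illustrates a strict equilibrium attracting trajectories despite the noise. The noise structure below is a common simplex-preserving choice, not necessarily the paper's exact stochastic model, and the game and noise level are hypothetical:

```python
import numpy as np

def stochastic_replicator(A, x0, sigma, dt=1e-3, steps=50_000, seed=0):
    """Euler-Maruyama discretization of replicator dynamics with
    payoff shocks: dx_i = x_i (u_i - <x, u>) dt + sigma x_i (dW_i - <x, dW>).
    The shock term sums to zero, so the simplex is preserved."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    for _ in range(steps):
        u = A @ x
        dW = np.sqrt(dt) * rng.standard_normal(len(x))
        drift = x * (u - x @ u) * dt
        shock = sigma * x * (dW - x @ dW)
        x = np.clip(x + drift + shock, 1e-12, 1.0)
        x = x / x.sum()               # numerical guard only
    return x

# strategy 0 is strictly dominant, so e_0 is a strict equilibrium
A = np.array([[2.0, 2.0], [1.0, 1.0]])
x = stochastic_replicator(A, [0.5, 0.5], sigma=0.5)
print(x)  # mass concentrates on the strict equilibrium
```

Here the effective noise on a strategy's share scales with x_i(1 - x_i), so it vanishes at the vertices of the simplex; this is one intuition for why strict equilibria remain stochastically stable regardless of the shock magnitude.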