Flows and Decompositions of Games: Harmonic and Potential Games
In this paper we introduce a novel flow representation for finite games in
strategic form. This representation allows us to develop a canonical direct sum
decomposition of an arbitrary game into three components, which we refer to as
the potential, harmonic and nonstrategic components. We analyze natural classes
of games that are induced by this decomposition, and in particular, focus on
games with no harmonic component and games with no potential component. We show
that the first class corresponds to the well-known potential games. We refer to
the second class of games as harmonic games, and study the structural and
equilibrium properties of this new class of games. Intuitively, the potential
component of a game captures interactions that can equivalently be represented
as a common interest game, while the harmonic part represents the conflicts
between the interests of the players. We make this intuition precise by
studying the properties of these two classes, and show that indeed they have
quite distinct and remarkable characteristics. For instance, while finite
potential games always have pure Nash equilibria, harmonic games generically
never do. Moreover, we show that the nonstrategic component does not affect the
equilibria of a game, but plays a fundamental role in their efficiency
properties, thus decoupling the location of equilibria and their payoff-related
properties. Exploiting the properties of the decomposition framework, we obtain
explicit expressions for the projections of games onto the subspaces of
potential and harmonic games. This enables an extension of the properties of
potential and harmonic games to "nearby" games. We exemplify this point by
showing that the set of approximate equilibria of an arbitrary game can be
characterized through the equilibria of its projection onto the set of
potential games.
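The first class in the decomposition can be recognized directly: by Monderer and Shapley's characterization, a finite game admits an exact potential if and only if deviation gains sum to zero around every unilateral four-cycle. A minimal sketch for two-player games (function name and tolerance are ours):

```python
import itertools
import numpy as np

def is_exact_potential(u1, u2, tol=1e-9):
    """Check Monderer-Shapley's four-cycle condition for a two-player
    game with payoff arrays u1, u2 of shape (m, n): the game admits an
    exact potential iff the unilateral deviation gains around every
    four-cycle (a,b) -> (a2,b) -> (a2,b2) -> (a,b2) -> (a,b) sum to 0."""
    m, n = u1.shape
    for a, a2 in itertools.product(range(m), repeat=2):
        for b, b2 in itertools.product(range(n), repeat=2):
            cycle = (u1[a2, b] - u1[a, b]
                     + u2[a2, b2] - u2[a2, b]
                     + u1[a, b2] - u1[a2, b2]
                     + u2[a, b] - u2[a, b2])
            if abs(cycle) > tol:
                return False
    return True

# Common-interest game: trivially a potential game.
coord = np.array([[2.0, 0.0], [0.0, 1.0]])
print(is_exact_potential(coord, coord))   # True

# Matching pennies: pure conflict, no potential component.
mp = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(is_exact_potential(mp, -mp))        # False
```

Matching pennies is the textbook example of a game whose strategic content is entirely harmonic, which is why the check fails for it.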
Finding Any Nontrivial Coarse Correlated Equilibrium Is Hard
One of the most appealing aspects of the (coarse) correlated equilibrium
concept is that natural dynamics quickly arrive at approximations of such
equilibria, even in games with many players. In addition, there exist
polynomial-time algorithms that compute exact (coarse) correlated equilibria.
In light of these results, a natural question is how good are the (coarse)
correlated equilibria that can arise from any efficient algorithm or dynamics.
In this paper we address this question, and establish strong negative
results. In particular, we show that in multiplayer games that have a succinct
representation, it is NP-hard to compute any coarse correlated equilibrium (or
approximate coarse correlated equilibrium) with welfare strictly better than
the worst possible. The focus on succinct games ensures that the underlying
complexity question is interesting; many multiplayer games of interest are in
fact succinct. Our results imply that, while one can efficiently compute a
coarse correlated equilibrium, one cannot provide any nontrivial welfare
guarantee for the resulting equilibrium, unless P=NP. We show that analogous
hardness results hold for correlated equilibria, and persist under the
egalitarian objective or Pareto optimality.
To complement the hardness results, we develop an algorithmic framework that
identifies settings in which we can efficiently compute an approximate
correlated equilibrium with near-optimal welfare. We use this framework to
develop an efficient algorithm for computing an approximate correlated
equilibrium with near-optimal welfare in aggregative games.
Comment: 21 pages
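The object whose welfare cannot be nontrivially guaranteed is easy to verify pointwise: a joint distribution is a coarse correlated equilibrium when no player gains by committing to a fixed action before the joint draw. A small sketch for the two-player bimatrix case (function names are ours):

```python
import numpy as np

def is_cce(mu, u1, u2, tol=1e-9):
    """Check whether the joint distribution mu (shape (m, n)) is a
    coarse correlated equilibrium of the bimatrix game (u1, u2):
    each player's expected payoff under mu must weakly beat the best
    fixed unilateral deviation against the opponent's marginal."""
    v1 = np.sum(mu * u1)        # player 1's expected payoff under mu
    v2 = np.sum(mu * u2)
    p1 = mu.sum(axis=1)         # marginal over the row player
    p2 = mu.sum(axis=0)         # marginal over the column player
    dev1 = max(u1[a, :] @ p2 for a in range(u1.shape[0]))
    dev2 = max(p1 @ u2[:, b] for b in range(u2.shape[1]))
    return v1 >= dev1 - tol and v2 >= dev2 - tol

# Matching pennies: uniform play is a CCE, and its welfare (zero) is
# also the worst possible -- the kind of trivial guarantee discussed above.
u1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
mu = np.full((2, 2), 0.25)
print(is_cce(mu, u1, -u1))   # True
```

Verification is cheap; the hardness in the abstract concerns steering toward a CCE with good welfare, not recognizing one.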
Attractive evolutionary equilibria
We present attractiveness, a refinement criterion for evolutionary equilibria. Equilibria surviving this criterion are robust to small perturbations of the underlying payoff system or the dynamics at hand. Furthermore, certain attractive equilibria are equivalent to others for certain evolutionary dynamics. For instance, each attractive evolutionarily stable strategy is an attractive evolutionarily stable equilibrium for certain barycentric ray-projection dynamics, and vice versa.
Riemannian game dynamics
We study a class of evolutionary game dynamics defined by balancing a gain
determined by the game's payoffs against a cost of motion that captures the
difficulty with which the population moves between states. Costs of motion are
represented by a Riemannian metric, i.e., a state-dependent inner product on
the set of population states. The replicator dynamics and the (Euclidean)
projection dynamics are the archetypal examples of the class we study. Like
these representative dynamics, all Riemannian game dynamics satisfy certain
basic desiderata, including positive correlation and global convergence in
potential games. Moreover, when the underlying Riemannian metric satisfies a
Hessian integrability condition, the resulting dynamics preserve many further
properties of the replicator and projection dynamics. We examine the close
connections between Hessian game dynamics and reinforcement learning in normal
form games, extending and elucidating a well-known link between the replicator
dynamics and exponential reinforcement learning.
Comment: 47 pages, 12 figures; added figures and further simplified the derivation of the dynamics
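The replicator dynamics named above as the archetypal member of the class can be sketched in a few lines; in a common-interest (potential) game the mean payoff ascends along trajectories, illustrating both positive correlation and convergence. A minimal Euler discretization (step size and horizon are illustrative, not from the paper):

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the single-population replicator dynamics
    x_dot_i = x_i * ((Ax)_i - x'Ax), the Riemannian game dynamic
    induced by the Shahshahani metric."""
    u = A @ x
    return x + dt * x * (u - x @ u)

# Common-interest game: trajectories ascend the potential x'Ax.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
x = np.array([0.4, 0.6])
for _ in range(2000):
    x = replicator_step(x, A)
print(np.round(x, 3))   # [1. 0.]  (the payoff-dominant pure equilibrium)
```

Starting above the mixed rest point x1 = 1/3, the population share of the first strategy grows monotonically toward the pure state, as the global-convergence property for potential games predicts.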
The target projection dynamic
This paper studies the target projection dynamic, a model of myopic adjustment for population games. We place it in the standard microeconomic framework of utility maximization with control costs, and show that it is well-behaved: it satisfies Nash stationarity, positive correlation, and existence, uniqueness, and continuity of solutions. As with other well-behaved dynamics, a general result on the elimination of strictly dominated strategies cannot be established; instead, we rule out survival of strictly dominated strategies in certain classes of games. We relate the dynamic to the projection dynamic by showing that the two coincide on a subset of the strategy space. We show that strict equilibria, and evolutionarily stable strategies in games, are asymptotically stable under the target projection dynamic. Finally, we show that the stability results that hold under the projection dynamic for stable games also hold under the target projection dynamic for interior Nash equilibria.
Keywords: target projection dynamic; noncooperative games; adjustment
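A rough sketch of such a dynamic, assuming the form x_dot = P(x + Ax) - x with P the Euclidean projection onto the simplex (our reading of the model, with the control-cost scaling simplified away):

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto the probability simplex,
    via the standard sort-based algorithm."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, y.size + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(y - theta, 0.0)

def target_projection_step(x, A, dt=0.05):
    """Euler step of the target projection dynamic
    x_dot = P(x + Ax) - x (our simplified, unit-cost form)."""
    return x + dt * (project_simplex(x + A @ x) - x)

# A strict equilibrium of a coordination game attracts nearby states:
A = np.array([[2.0, 0.0], [0.0, 1.0]])
x = np.array([0.9, 0.1])
for _ in range(200):
    x = target_projection_step(x, A)
print(np.round(x, 3))   # [1. 0.]
```

Near the interior of the simplex the projected target x + Ax need not hit a vertex, which is where this dynamic and the projection dynamic can coincide; near the boundary they generally differ.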
Faster SDP hierarchy solvers for local rounding algorithms
Convex relaxations based on different hierarchies of linear/semi-definite
programs have been used recently to devise approximation algorithms for various
optimization problems. The approximation guarantee of these algorithms improves
with the number of {\em rounds} in the hierarchy, though the complexity of
solving (or even writing down the solution for) the $r$'th level program grows
as $n^{\Omega(r)}$ where $n$ is the input size.
In this work, we observe that many of these algorithms are based on {\em
local} rounding procedures that only use a small part of the SDP solution (of
size $2^{O(r)} n^{O(1)}$ instead of $n^{\Omega(r)}$). We give an algorithm to
find the requisite portion in time polynomial in its size. The challenge in
achieving this is that the required portion of the solution is not fixed a
priori but depends on other parts of the solution, sometimes in a complicated
iterative manner.
Our solver leads to $2^{O(r)} n^{O(1)}$-time algorithms to obtain the same
guarantees in many cases as the earlier $n^{\Omega(r)}$-time algorithms based on
$r$ rounds of the Lasserre hierarchy. In particular, guarantees based on $O(\log n)$ rounds can be realized in polynomial time.
We develop and describe our algorithm in a fairly general abstract framework.
The main technical tool in our work, which might be of independent interest in
convex optimization, is an efficient ellipsoid algorithm based separation
oracle for convex programs that can output a {\em certificate of infeasibility
with restricted support}. This is used in a recursive manner to find a sequence
of consistent points in nested convex bodies that "fools" local rounding
algorithms.
Comment: 30 pages, 8 figures
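Assuming the standard bounds for this line of work (roughly $n^{\Omega(r)}$ for the full $r$-round solution versus $2^{O(r)} \cdot \mathrm{poly}(n)$ for the local portion, which is our reading), a quick numeric check shows why $r = O(\log n)$ rounds becomes polynomial:

```python
import math

n = 1 << 10              # input size n = 1024
r = int(math.log2(n))    # r = log2(n) = 10 rounds

full = n ** r            # ~ n^r variables: quasi-polynomial in n
local = (2 ** r) * n**3  # ~ 2^r * poly(n): with r = log2(n), exactly n^4

print(local == n ** 4)   # True
```

The gap between the two quantities is a factor of $n^{r - 4}$ in this toy instance, which is what the local solver avoids paying.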
Position-Based Multi-Agent Dynamics for Real-Time Crowd Simulation (MiG paper)
Exploiting the efficiency and stability of Position-Based Dynamics (PBD), we
introduce a novel crowd simulation method that runs at interactive rates for
hundreds of thousands of agents. Our method enables the detailed modeling of
per-agent behavior in a Lagrangian formulation. We model short-range and
long-range collision avoidance to simulate both sparse and dense crowds. On the
particles representing agents, we formulate a set of positional constraints
that can be readily integrated into a standard PBD solver. We augment the
tentative particle motions with planning velocities to determine the preferred
velocities of agents, and project the positions onto the constraint manifold to
eliminate colliding configurations. The local short-range interaction is
represented with collision and frictional contact between agents, as in the
discrete simulation of granular materials. We incorporate a cohesion model for
modeling collective behaviors and propose a new constraint for dealing with
potential future collisions. Our new method is suitable for use in interactive
games.
Comment: 9 pages
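The positional constraints described above follow the usual PBD pattern: evaluate a constraint on tentative positions and project the violation out with inverse-mass weighting. A minimal single-constraint sketch for short-range collision between two disc agents (a simplification; names and parameters are ours, not the paper's full solver):

```python
import numpy as np

def project_collision(x_i, x_j, d, w_i=0.5, w_j=0.5):
    """Project one inter-agent collision constraint
    C(x_i, x_j) = |x_i - x_j| - d >= 0 (discs with combined radius d)
    in the style of a standard PBD solver, splitting the correction
    between the two particles by their inverse-mass weights."""
    delta = x_i - x_j
    dist = np.linalg.norm(delta)
    C = dist - d
    if C >= 0 or dist < 1e-12:
        return x_i, x_j              # already separated (or degenerate)
    n = delta / dist                 # contact normal
    s = C / (w_i + w_j)              # scaled violation
    return x_i - w_i * s * n, x_j + w_j * s * n

a = np.array([0.0, 0.0])
b = np.array([0.6, 0.0])
a2, b2 = project_collision(a, b, d=1.0)
print(np.linalg.norm(a2 - b2))   # 1.0: pushed to contact distance
```

In a full solver this projection would run inside the PBD iteration loop over all active constraints, alongside the long-range avoidance and cohesion constraints the abstract mentions.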
Inertial game dynamics and applications to constrained optimization
Aiming to provide a new class of game dynamics with good long-term
rationality properties, we derive a second-order inertial system that builds on
the widely studied "heavy ball with friction" optimization method. By
exploiting a well-known link between the replicator dynamics and the
Shahshahani geometry on the space of mixed strategies, the dynamics are stated
in a Riemannian geometric framework where trajectories are accelerated by the
players' unilateral payoff gradients and they slow down near Nash equilibria.
Surprisingly (and in stark contrast to another second-order variant of the
replicator dynamics), the inertial replicator dynamics are not well-posed; on
the other hand, it is possible to obtain a well-posed system by endowing the
mixed strategy space with a different Hessian-Riemannian (HR) metric structure,
and we characterize those HR geometries that do so. In the single-agent version
of the dynamics (corresponding to constrained optimization over simplex-like
objects), we show that regular maximum points of smooth functions attract all
nearby solution orbits with low initial speed. More generally, we establish an
inertial variant of the so-called "folk theorem" of evolutionary game theory
and we show that strict equilibria are attracting in asymmetric
(multi-population) games - provided of course that the dynamics are well-posed.
A similar asymptotic stability result is obtained for evolutionarily stable
strategies in symmetric (single-population) games.
Comment: 30 pages, 4 figures; significantly revised paper structure and added new material on Euclidean embeddings and evolutionarily stable strategies
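The "heavy ball with friction" method the dynamics build on can be sketched in its plain Euclidean form; the paper's contribution is the Riemannian (Shahshahani/Hessian) version on the simplex, which this toy ascent does not capture:

```python
import numpy as np

def heavy_ball_max(grad, x0, step=0.1, friction=0.5, iters=500):
    """Euler discretization of the heavy-ball-with-friction ascent
    x'' = grad f(x) - friction * x': velocity accumulates the payoff
    gradient and is damped, so orbits slow down near critical points."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        v = v + step * (grad(x) - friction * v)
        x = x + step * v
    return x

# A regular maximum of f(x) = -(x - 1)^2 attracts nearby low-speed orbits.
x_star = heavy_ball_max(lambda x: -2.0 * (x - 1.0), np.array([3.0]))
print(np.round(x_star, 4))   # [1.]
```

The damped oscillation this scheme produces around the maximizer is the inertial behavior the abstract describes; whether the full game dynamic is well-posed depends on the metric, as the abstract notes.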