Cycles in adversarial regularized learning
Regularized learning is a fundamental technique in online optimization,
machine learning and many other fields of computer science. A natural question
that arises in these settings is how regularized learning algorithms behave
when matched against each other. We study a natural formulation of this problem
by coupling regularized learning dynamics in zero-sum games. We show that the
system's behavior is Poincaré recurrent, implying that almost every
trajectory revisits any (arbitrarily small) neighborhood of its starting point
infinitely often. This cycling behavior is robust to the agents' choice of
regularization mechanism (each agent could be using a different regularizer),
to positive-affine transformations of the agents' utilities, and it also
persists in the case of networked competition, i.e., for zero-sum polymatrix
games.
Comment: 22 pages, 4 figures
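The cycling behavior described in this abstract can be illustrated numerically. Below is a minimal sketch, assuming Matching Pennies as the zero-sum game and multiplicative weights (a regularized learning dynamic with entropic regularizer, the discrete-time analogue of replicator) for both agents; the payoffs, step size, and starting strategies are illustrative choices, not from the paper:

```python
import numpy as np

# Matching Pennies: a 2x2 zero-sum game (row player's payoff matrix).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def multiplicative_weights(steps=20000, eta=0.01):
    """Both agents run multiplicative weights against each other."""
    x = np.array([0.7, 0.3])   # row player's mixed strategy
    y = np.array([0.4, 0.6])   # column player's mixed strategy
    avg_x = np.zeros(2)
    for _ in range(steps):
        avg_x += x
        ux = A @ y             # row player's payoff vector
        uy = -A.T @ x          # column player's payoff vector (zero-sum)
        x = x * np.exp(eta * ux)
        x /= x.sum()
        y = y * np.exp(eta * uy)
        y /= y.sum()
    return x, y, avg_x / steps

last_x, last_y, avg_x = multiplicative_weights()
# The joint strategies cycle around the interior equilibrium (1/2, 1/2)
# rather than converging to it; the *time average*, by contrast,
# approaches the Nash equilibrium of the zero-sum game.
```

Plotting the trajectory of `(x[0], y[0])` shows the closed-orbit-like cycling around (1/2, 1/2) that the recurrence result formalizes.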
A General Framework for Computing Optimal Correlated Equilibria in Compact Games
We analyze the problem of computing a correlated equilibrium that optimizes
some objective (e.g., social welfare). Papadimitriou and Roughgarden [2008]
gave a sufficient condition for the tractability of this problem; however, this
condition only applies to a subset of existing representations. We propose a
different algorithmic approach for the optimal CE problem that applies to all
compact representations, and give a sufficient condition that generalizes that
of Papadimitriou and Roughgarden. In particular, we reduce the optimal CE
problem to the deviation-adjusted social welfare problem, a combinatorial
optimization problem closely related to the optimal social welfare problem.
This framework allows us to identify new classes of games for which the optimal
CE problem is tractable; we show that graphical polymatrix games on tree graphs
are one example. We also study the problem of computing the optimal coarse
correlated equilibrium, a solution concept closely related to CE. Using a
similar approach we derive a sufficient condition for this problem, and use it
to prove that the problem is tractable for singleton congestion games.
Comment: 14 pages. Short version to appear in WINE 201
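To make the object being optimized concrete, here is a small sketch of the incentive constraints defining a correlated equilibrium (the feasible set of the optimal CE problem), checked on the game of Chicken; the payoffs, the candidate distributions, and the tolerance are illustrative choices, not from the paper:

```python
# Chicken, a 2x2 game. Actions: 0 = Dare, 1 = Chicken.
# u[i][a1][a2] is player i's payoff at action profile (a1, a2).
u = [
    [[0, 7], [2, 6]],   # player 1
    [[0, 2], [7, 6]],   # player 2
]

def is_correlated_eq(p, u, tol=1e-9):
    """p[a1][a2]: joint distribution over action profiles.
    CE: no player gains by deviating from a recommended action."""
    for i in range(2):
        for rec in range(2):           # recommended action for player i
            for dev in range(2):       # possible deviation
                gain = 0.0
                for other in range(2):
                    prof = (rec, other) if i == 0 else (other, rec)
                    dprof = (dev, other) if i == 0 else (other, dev)
                    gain += p[prof[0]][prof[1]] * (
                        u[i][dprof[0]][dprof[1]] - u[i][prof[0]][prof[1]])
                if gain > tol:
                    return False
    return True

def welfare(p, u):
    return sum(p[a1][a2] * (u[0][a1][a2] + u[1][a1][a2])
               for a1 in range(2) for a2 in range(2))

# With these payoffs, the welfare-optimal CE mixes over three profiles:
opt = [[0.0, 0.25], [0.25, 0.5]]    # p(D,C)=p(C,D)=1/4, p(C,C)=1/2
pure_cc = [[0.0, 0.0], [0.0, 1.0]]  # point mass on (C,C): not a CE
```

The optimal CE problem maximizes `welfare` over all distributions satisfying exactly these linear deviation constraints; the abstract's contribution is making that optimization tractable for compactly represented games, where enumerating profiles as above is infeasible.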
Tree Polymatrix Games Are PPAD-Hard
We prove that it is PPAD-hard to compute a Nash equilibrium in a tree polymatrix game with twenty actions per player. This is the first PPAD-hardness result for a game with a constant number of actions per player where the interaction graph is acyclic. Along the way we show PPAD-hardness for finding an ε-fixed point of a 2D LinearFIXP instance, when ε is any constant less than (√2 − 1)/2 ≈ 0.2071. This lifts the hardness regime from polynomially small approximations in k-dimensions to constant approximations in two dimensions, and our constant is substantial when compared to the trivial upper bound of 0.5.
Finding Any Nontrivial Coarse Correlated Equilibrium Is Hard
One of the most appealing aspects of the (coarse) correlated equilibrium
concept is that natural dynamics quickly arrive at approximations of such
equilibria, even in games with many players. In addition, there exist
polynomial-time algorithms that compute exact (coarse) correlated equilibria.
In light of these results, a natural question is how good the (coarse)
correlated equilibria arising from any efficient algorithm or dynamics can be.
In this paper we address this question, and establish strong negative
results. In particular, we show that in multiplayer games that have a succinct
representation, it is NP-hard to compute any coarse correlated equilibrium (or
approximate coarse correlated equilibrium) with welfare strictly better than
the worst possible. The focus on succinct games ensures that the underlying
complexity question is interesting; many multiplayer games of interest are in
fact succinct. Our results imply that, while one can efficiently compute a
coarse correlated equilibrium, one cannot provide any nontrivial welfare
guarantee for the resulting equilibrium, unless P=NP. We show that analogous
hardness results hold for correlated equilibria, and persist under the
egalitarian objective or Pareto optimality.
To complement the hardness results, we develop an algorithmic framework that
identifies settings in which we can efficiently compute an approximate
correlated equilibrium with near-optimal welfare. We use this framework to
develop an efficient algorithm for computing an approximate correlated
equilibrium with near-optimal welfare in aggregative games.
Comment: 21 pages
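The solution concept whose welfare is being bounded here can be stated compactly. Below is a small sketch of the coarse-correlated-equilibrium condition, checked on Matching Pennies; the game and the two candidate distributions are illustrative choices, not from the paper:

```python
import numpy as np

# Matching Pennies payoffs: u1 = A, u2 = -A (zero-sum).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
B = -A

def is_cce(P, tol=1e-9):
    """P[a1, a2]: joint distribution over action profiles.
    CCE: no player gains by committing to a fixed action *before*
    seeing any recommendation (deviations face the opponent marginal)."""
    row_marg, col_marg = P.sum(axis=1), P.sum(axis=0)
    e1 = float((P * A).sum())    # player 1's expected payoff
    e2 = float((P * B).sum())    # player 2's expected payoff
    dev1 = A @ col_marg          # payoff of each fixed row deviation
    dev2 = row_marg @ B          # payoff of each fixed column deviation
    return e1 >= dev1.max() - tol and e2 >= dev2.max() - tol

uniform = np.full((2, 2), 0.25)  # product of uniform strategies (the Nash)
pure_hh = np.array([[1.0, 0.0],
                    [0.0, 0.0]])  # point mass on one profile
```

The hardness result says that, in succinct games, finding a distribution satisfying these constraints with welfare any better than the worst such distribution is already NP-hard, even though finding *some* such distribution is easy.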
Evolutionary Game Theory Squared: Evolving Agents in Endogenously Evolving Zero-Sum Games
The predominant paradigm in evolutionary game theory, and more generally in
online learning in games, is based on a clear distinction between a population
of dynamic agents and the fixed, static game in which they interact. In this paper, we
move away from the artificial divide between dynamic agents and static games,
to introduce and analyze a large class of competitive settings where both the
agents and the games they play evolve strategically over time. We focus on
arguably the most archetypal game-theoretic setting -- zero-sum games (as well
as network generalizations) -- and the most studied evolutionary learning
dynamic -- replicator, the continuous-time analogue of multiplicative weights.
Populations of agents compete against each other in a zero-sum competition that
itself evolves adversarially to the current population mixture. Remarkably,
despite the chaotic coevolution of agents and games, we prove that the system
exhibits a number of regularities. First, the system has conservation laws of
an information-theoretic flavor that couple the behavior of all agents and
games. Secondly, the system is Poincaré recurrent, with effectively all
possible initializations of agents and games lying on recurrent orbits that
come arbitrarily close to their initial conditions infinitely often. Thirdly,
the time-average agent behavior and utility converge to the Nash equilibrium
values of the time-average game. Finally, we provide a polynomial time
algorithm to efficiently predict this time-average behavior for any such
coevolving network game.
Comment: To appear in AAAI 202
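The conservation claim can be checked numerically in the fixed-game special case. Below is a minimal sketch, assuming Matching Pennies and plain replicator dynamics integrated with a small Euler step; the tracked quantity is the sum of KL divergences from the interior equilibrium, which continuous-time replicator conserves in zero-sum games. The game, step size, and starting populations are illustrative choices, not from the paper:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # Matching Pennies, row player's payoffs
eq = np.array([0.5, 0.5])     # interior Nash equilibrium of both players

def kl(p, q):
    """Kullback-Leibler divergence between distributions p and q."""
    return float(np.sum(p * np.log(p / q)))

def simulate(steps=10000, h=1e-3):
    x = np.array([0.7, 0.3])  # row population mixture
    y = np.array([0.4, 0.6])  # column population mixture
    k0 = kl(eq, x) + kl(eq, y)
    for _ in range(steps):
        ux, uy = A @ y, -A.T @ x
        # replicator: growth rate = payoff relative to population average
        dx = x * (ux - x @ ux)
        dy = y * (uy - y @ uy)
        x = x + h * dx
        y = y + h * dy
    return k0, kl(eq, x) + kl(eq, y)

k_start, k_end = simulate()
# In exact continuous time the KL sum is conserved; the Euler
# discretization introduces only a small drift over this horizon.
```

The paper's first result generalizes this kind of invariant to the setting where the payoff matrix itself evolves adversarially with the populations.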
Approximating Nash Equilibria in Normal-Form Games via Stochastic Optimization
We propose the first, to our knowledge, loss function for approximate Nash
equilibria of normal-form games that is amenable to unbiased Monte Carlo
estimation. This construction allows us to deploy standard non-convex
stochastic optimization techniques for approximating Nash equilibria, resulting
in novel algorithms with provable guarantees. We complement our theoretical
analysis with experiments demonstrating that stochastic gradient descent can
outperform previous state-of-the-art approaches.
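The idea of a loss whose global minima are exactly the Nash equilibria can be illustrated with the classical exploitability gap. Note this is *not* the paper's loss function: naive sample averages of exploitability are biased, which is precisely the gap the paper's construction addresses. The game and the strategies below are illustrative choices:

```python
import numpy as np

# Matching Pennies: u1 = A, u2 = -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def exploitability(x, y):
    """Sum over players of (best-response payoff - current payoff).
    Nonnegative everywhere; zero exactly at Nash equilibria."""
    gap1 = float((A @ y).max() - x @ A @ y)       # row player's regret
    gap2 = float((-(x @ A)).max() - x @ (-A) @ y) # column player's regret
    return gap1 + gap2

uniform = np.array([0.5, 0.5])  # the unique Nash of Matching Pennies
pure = np.array([1.0, 0.0])     # a pure, non-equilibrium strategy

loss_at_nash = exploitability(uniform, uniform)
loss_at_pure = exploitability(pure, pure)
```

Minimizing such a loss with stochastic gradient methods requires unbiased gradient estimates from sampled payoffs, which is the technical contribution the abstract claims.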