Node-Max-Cut and the Complexity of Equilibrium in Linear Weighted Congestion Games
In this work, we seek a more refined understanding of the complexity of local optimum computation for Max-Cut and pure Nash equilibrium (PNE) computation for congestion games with weighted players and linear latency functions. We show that computing a PNE of linear weighted congestion games is PLS-complete either for very restricted strategy spaces, namely when player strategies are paths on a series-parallel network with a single origin and destination, or for very restricted latency functions, namely when the latency on each resource is equal to the congestion. Our results reveal a remarkable gap regarding the complexity of PNE in congestion games with weighted and unweighted players, since in the case of unweighted players, a PNE can be easily computed by either a simple greedy algorithm (for series-parallel networks) or any better response dynamics (when the latency is equal to the congestion). For the latter of the results above, we first need to show that computing a local optimum of a natural restriction of Max-Cut, which we call Node-Max-Cut, is PLS-complete. In Node-Max-Cut, the input graph is vertex-weighted and the weight of each edge is equal to the product of the weights of its endpoints. Due to the very restricted nature of Node-Max-Cut, the reduction requires a careful combination of new gadgets with ideas and techniques from previous work. We also show how to efficiently compute a (1+ε)-approximate equilibrium for Node-Max-Cut, if the number of different vertex weights is constant.
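The Node-Max-Cut objective and its FLIP neighborhood can be made concrete with a short sketch (illustrative only: the function names and the greedy pivot rule are ours, not the paper's, and the reduction itself is far more involved than this local-search loop):

```python
def node_max_cut_local_search(node_weights, edges, side=None):
    """Greedy FLIP local search for Node-Max-Cut (illustrative sketch).

    node_weights: dict vertex -> positive weight.
    edges: iterable of (u, v) pairs; per the Node-Max-Cut definition, the
    weight of edge (u, v) is node_weights[u] * node_weights[v].
    Returns a locally optimal 0/1 side assignment: no single-vertex flip
    can increase the total weight of cut edges.
    """
    side = {v: 0 for v in node_weights} if side is None else dict(side)
    adj = {v: [] for v in node_weights}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def gain(v):
        # Change in cut weight if v flips sides: edges to the same side
        # become cut, edges already cut become uncut.
        g = 0
        for u in adj[v]:
            w = node_weights[u] * node_weights[v]
            g += w if side[u] == side[v] else -w
        return g

    improved = True
    while improved:
        improved = False
        for v in node_weights:
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    return side

def cut_value(node_weights, edges, side):
    """Total weight of edges crossing the cut."""
    return sum(node_weights[u] * node_weights[v]
               for u, v in edges if side[u] != side[v])
```

The PLS-hardness result says precisely that, in the worst case, this FLIP dynamics (and any local-search algorithm for the problem) cannot be guaranteed to terminate in polynomial time.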
Smoothed Efficient Algorithms and Reductions for Network Coordination Games
Worst-case hardness results for most equilibrium computation problems have
raised the need for beyond-worst-case analysis. To this end, we study the
smoothed complexity of finding pure Nash equilibria in Network Coordination
Games, a PLS-complete problem in the worst case. This is a potential game where
the sequential-better-response algorithm is known to converge to a pure NE,
albeit in exponential time. First, we prove polynomial (resp. quasi-polynomial)
smoothed complexity when the underlying game graph is a complete (resp.
arbitrary) graph, and every player has constantly many strategies. We note that
the complete graph case is reminiscent of perturbing all parameters, a common
assumption in most known smoothed analysis results.
Second, we define a notion of smoothness-preserving reduction among search
problems, and obtain reductions from 2-strategy network coordination games to
local-max-cut, and from k-strategy games (with arbitrary k) to local-max-cut
up to two flips. The former, together with the recent result of [BCC18], gives
an alternate polynomial-time smoothed algorithm for the 2-strategy case. This
notion of reduction allows for the extension of smoothed efficient algorithms
from one problem to another.
For the first set of results, we develop techniques to bound the probability
that an (adversarial) better-response sequence makes slow improvements on the
potential. Our approach combines and generalizes the local-max-cut approaches
of [ER14,ABPW17] to handle the multi-strategy case: it requires a careful
definition of the matrix which captures the increase in potential, a tighter
union bound on adversarial sequences, and balancing it with good enough rank
bounds. We believe that the approach and notions developed herein could be of
interest in addressing the smoothed complexity of other potential and/or
congestion games.
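The sequential-better-response algorithm whose smoothed behavior the paper analyzes can be sketched as follows (a minimal illustration, assuming a dense dictionary representation of the edge payoff matrices; the smoothed analysis itself perturbs these payoffs, which is not shown here):

```python
import random

def better_response_dynamics(n, k, edge_payoff, seed=0):
    """Sequential better-response for a network coordination game (sketch).

    n players, k strategies each. edge_payoff[(u, v)][a][b] is the common
    payoff both u and v receive from edge (u, v) when u plays a and v
    plays b (a hypothetical representation chosen for this sketch).
    The sum of edge payoffs is an exact potential, so every better
    response strictly increases it and the loop terminates at a pure NE,
    though possibly only after exponentially many steps in the worst case.
    """
    rng = random.Random(seed)
    s = [rng.randrange(k) for _ in range(n)]
    neighbors = {u: [] for u in range(n)}
    for (u, v) in edge_payoff:
        neighbors[u].append(v)
        neighbors[v].append(u)

    def payoff(u, a):
        total = 0
        for v in neighbors[u]:
            if (u, v) in edge_payoff:
                total += edge_payoff[(u, v)][a][s[v]]
            else:
                total += edge_payoff[(v, u)][s[v]][a]
        return total

    while True:
        improved = False
        for u in range(n):
            best = max(range(k), key=lambda a: payoff(u, a))
            if payoff(u, best) > payoff(u, s[u]):
                s[u] = best
                improved = True
        if not improved:
            return s
```

The paper's contribution is showing that under random perturbations of the payoff entries, sequences of such better responses cannot make slow potential progress for too long, with high probability.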
Equilibria, Fixed Points, and Complexity Classes
Many models from a variety of areas involve the computation of an equilibrium
or fixed point of some kind. Examples include Nash equilibria in games; market
equilibria; computing optimal strategies and the values of competitive games
(stochastic and other games); stable configurations of neural networks;
analysing basic stochastic models for evolution like branching processes and
for language like stochastic context-free grammars; and models that incorporate
the basic primitives of probability and recursion like recursive Markov chains.
It is not known whether these problems can be solved in polynomial time. There
are certain common computational principles underlying different types of
equilibria, which are captured by the complexity classes PLS, PPAD, and FIXP.
Representative complete problems for these classes are, respectively: pure Nash
equilibria in games where they are guaranteed to exist, (mixed) Nash equilibria
in 2-player normal form games, and (mixed) Nash equilibria in normal form games
with 3 (or more) players. This paper reviews the underlying computational
principles and the corresponding classes.
The complexity of pure Nash equilibria in max-congestion games
We study Network Max-Congestion Games (NMC games, for short), a
class of network games where each player tries to minimize the most congested
edge along the path he uses as his strategy. We focus on the complexity of
computing a pure Nash equilibrium in this kind of game. We show that, for
single-commodity games with non-decreasing delay functions, this problem
is in P when either all the paths from the source to the target node are
disjoint or all the delay functions are equal. For the general case, we prove
that the computation of a PNE belongs to the complexity class PLS through a
new technique based on generalized ordinal potential functions and a slightly
modified definition of the usual local search neighborhood. We further apply
this technique to a different class of games (which we call Pareto-efficient)
with restricted cost functions. Finally, we also prove some PLS-hardness
results, showing that computing a PNE for Pareto-efficient NMC games is
indeed a PLS-complete problem.
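For intuition on the polynomial-time cases, the disjoint-paths setting reduces to a singleton congestion game, where a simple greedy insertion yields a pure NE (an illustration of that special case only; the function names and input encoding are ours, not the paper's):

```python
def greedy_bottleneck_assignment(num_players, path_delays):
    """Greedy sketch for the disjoint-paths case of a network
    max-congestion game (illustrative, not the paper's exact algorithm).

    path_delays[p] is a non-decreasing list: path_delays[p][x-1] is the
    delay of path p's bottleneck edge when x players use it. Since the
    paths are disjoint, a player's cost depends only on her own path's
    load. Each player in turn joins a path minimizing her resulting
    bottleneck cost; with disjoint paths this greedy process ends in a
    pure Nash equilibrium.
    """
    loads = [0] * len(path_delays)

    def cost_if_join(p):
        # Delay on path p after one more player joins it.
        return path_delays[p][loads[p]]

    assignment = []
    for _ in range(num_players):
        p = min(range(len(path_delays)), key=cost_if_join)
        loads[p] += 1
        assignment.append(p)

    # Sanity-check the NE condition: no player gains by switching paths.
    for p in assignment:
        here = path_delays[p][loads[p] - 1]
        for q in range(len(path_delays)):
            if q != p:
                assert here <= path_delays[q][loads[q]], "not a NE"
    return assignment, loads
```

The general case treated in the paper is harder precisely because paths share edges, so a deviation changes the bottleneck costs of other players as well.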
Efficient Local Search in Coordination Games on Graphs
We study strategic games on weighted directed graphs, where the payoff of a
player is defined as the sum of the weights on the edges from players who chose
the same strategy augmented by a fixed non-negative bonus for picking a given
strategy. These games capture the idea of coordination in the absence of
globally common strategies. Prior work shows that the problem of determining
the existence of a pure Nash equilibrium for these games is NP-complete already
for graphs with all weights equal to one and no bonuses. However, for several
classes of graphs (e.g. DAGs and cliques) pure Nash equilibria or even strong
equilibria always exist and can be found by simply following a particular
improvement or coalition-improvement path, respectively. In this paper we
identify several natural classes of graphs for which a finite improvement or
coalition-improvement path of polynomial length always exists, and, as a
consequence, a Nash equilibrium or strong equilibrium in them can be found in
polynomial time. We also argue that these results are optimal in the sense that
in natural generalisations of these classes of graphs, a pure Nash equilibrium
may not even exist. (Comment: Extended version of a paper accepted to IJCAI.)
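The DAG case mentioned above admits a particularly clean improvement path, which can be sketched as follows (an illustration under our own input encoding, not the paper's exact construction): since each player's payoff depends only on in-neighbors, one pass of best responses in topological order settles every player permanently.

```python
from graphlib import TopologicalSorter

def dag_coordination_ne(weights, bonuses):
    """One-pass best response on a DAG coordination game (sketch).

    weights[(u, v)] > 0 is the weight of directed edge u -> v: player v
    gains weights[(u, v)] if v picks the same strategy as u.
    bonuses[v] maps each strategy available to v to a fixed non-negative
    bonus. A player's payoff depends only on its in-neighbors, so
    best-responding once in topological order yields a pure Nash
    equilibrium in a linear number of improvement steps.
    """
    preds = {v: [] for v in bonuses}
    for (u, v) in weights:
        preds[v].append(u)
    order = TopologicalSorter({v: preds[v] for v in bonuses}).static_order()

    choice = {}
    for v in order:
        def payoff(s):
            return bonuses[v].get(s, 0) + sum(
                weights[(u, v)] for u in preds[v] if choice[u] == s)
        choice[v] = max(bonuses[v], key=payoff)
    return choice
```

The paper's contribution is identifying broader graph classes where polynomial-length improvement or coalition-improvement paths of this flavor still exist.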
Approximate Pure Nash Equilibria in Weighted Congestion Games: Existence, Efficient Computation, and Structure
We consider structural and algorithmic questions related to the Nash dynamics
of weighted congestion games. In weighted congestion games with linear latency
functions, the existence of (pure Nash) equilibria is guaranteed by potential
function arguments. Unfortunately, this proof of existence is inefficient and
computing equilibria in such games is a PLS-hard problem. The situation
gets worse when superlinear latency functions come into play; in this case, the
Nash dynamics of the game may contain cycles and equilibria may not even exist.
Given these obstacles, we consider approximate equilibria as alternative
solution concepts. Do such equilibria exist? And if so, can we compute them
efficiently?
We provide positive answers to both questions for weighted congestion games
with polynomial latency functions by exploiting an "approximation" of such
games by a new class of potential games that we call Ψ-games. This allows us
to show that these games have d!-approximate equilibria, where d is the
maximum degree of the latency functions. Our main technical contribution is an
efficient algorithm for computing O(1)-approximate equilibria when d is a
constant. For games with linear latency functions, the approximation guarantee
is 3+ε for arbitrarily small ε > 0; for latency functions with maximum degree
d, it is d^{2d+o(d)}. The running time is polynomial in the number of bits in
the representation of the game and 1/ε. As a byproduct of our techniques, we
also show the following structural statement for weighted congestion games
with polynomial latency functions of maximum degree d: polynomially long
sequences of best-response moves from any initial state to a d!-approximate
equilibrium exist and can be efficiently identified in such games as long as
d is constant. (Comment: 31 pages.)
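The basic primitive behind such algorithms is (1+ε)-better-response dynamics: a player moves only when the move cuts her cost by more than a (1+ε) factor, which in a potential game bounds the number of moves. The following is a generic sketch of that primitive only (the names and interface are ours; the paper's algorithm orders and restricts these moves much more carefully):

```python
def eps_better_response(costs, strategies, current, eps=0.1,
                        max_rounds=10_000):
    """(1+eps)-better-response dynamics (generic illustrative sketch).

    costs(player, profile) returns the player's cost under a strategy
    profile; strategies[player] lists her options. A player deviates only
    when the deviation cuts her cost by a factor larger than 1 + eps;
    once no player can, the profile is a (1+eps)-approximate pure Nash
    equilibrium by definition.
    """
    profile = list(current)
    for _ in range(max_rounds):
        moved = False
        for i, opts in enumerate(strategies):
            now = costs(i, profile)
            for s in opts:
                trial = profile[:i] + [s] + profile[i + 1:]
                # Strict multiplicative improvement test.
                if costs(i, trial) * (1 + eps) < now:
                    profile = trial
                    moved = True
                    break
        if not moved:
            return profile
    raise RuntimeError("did not converge within max_rounds")
```

In a potential game with potential bounded between Φ_min and Φ_max, each such move shrinks the potential multiplicatively, so the number of moves is roughly O((1/ε) · log(Φ_max/Φ_min)); the difficulty the paper overcomes is that weighted congestion games with superlinear latencies have no exact potential at all.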
Computing better approximate pure Nash equilibria in cut games via semidefinite programming
Cut games are among the most fundamental strategic games in algorithmic game
theory. It is well-known that computing an exact pure Nash equilibrium in these
games is PLS-hard, so research has focused on computing approximate equilibria.
We present a polynomial-time algorithm that computes 2.7371-approximate pure
Nash equilibria in cut games. This is the first improvement to the previously
best-known bound of 3+ε, due to the work of Bhalgat, Chakraborty, and Khanna
from EC 2010. Our algorithm is based on a general recipe proposed by
Caragiannis, Fanelli, Gravin, and Skopalik from FOCS 2011 and applied on
several potential games since then. The first novelty of our work is the
introduction of a phase that can identify subsets of players who can
simultaneously improve their utilities considerably. This is done via
semidefinite programming and randomized rounding. In particular, a negative
objective value to the semidefinite program guarantees that no such
considerable improvement is possible for a given set of players. Otherwise,
randomized rounding of the SDP solution is used to identify a set of players
who can simultaneously improve their strategies considerably and allows the
algorithm to make progress. The way rounding is performed is another important
novelty of our work. Here, we exploit an idea that dates back to a paper by
Feige and Goemans from 1995, but we take it to an extreme that has not been
analyzed before.
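The rounding step builds on random-hyperplane rounding in the Goemans-Williamson tradition: given the unit vectors produced by an SDP solver (the solve itself is omitted here), each vertex is assigned a side by the sign of its inner product with a random Gaussian direction. A minimal sketch of that step only, with our own function names:

```python
import numpy as np

def hyperplane_round(vectors, rng=None):
    """Random-hyperplane rounding of SDP vectors (illustrative sketch).

    vectors: an (n, d) array of unit vectors, one per player/vertex,
    as would be extracted from an SDP solution. Each vertex is placed
    on side 0 or 1 according to the sign of its inner product with a
    random Gaussian direction.
    """
    rng = np.random.default_rng(rng)
    vectors = np.asarray(vectors, dtype=float)
    g = rng.standard_normal(vectors.shape[1])
    return (vectors @ g >= 0).astype(int)

def cut_weight(adj, sides):
    """Total weight of edges crossing the cut (adj: symmetric matrix)."""
    n = len(sides)
    return sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
               if sides[i] != sides[j])
```

The novelty described in the abstract lies elsewhere: the SDP is set up so that a negative objective certifies that no subset of players can jointly improve much, and the rounding is pushed (in the Feige-Goemans spirit) to extract such an improving subset when the certificate fails.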