
    Approximate Well-supported Nash Equilibria below Two-thirds

    In an epsilon-Nash equilibrium, a player can gain at most epsilon by changing his behaviour. Recent work has addressed the question of how best to compute epsilon-Nash equilibria, and for what values of epsilon a polynomial-time algorithm exists. An epsilon-well-supported Nash equilibrium (epsilon-WSNE) has the additional requirement that any strategy that is used with non-zero probability by a player must have payoff at most epsilon less than the best response. A recent algorithm of Kontogiannis and Spirakis shows how to compute a 2/3-WSNE in polynomial time for bimatrix games. Here we introduce a new technique that leads to an improvement to the worst-case approximation guarantee.
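
    To make the distinction between the two notions concrete, the following sketch computes both approximation qualities for a given mixed-strategy profile. The payoff matrices, the profile and the numpy-based helper are illustrative assumptions, not material from the paper.

        import numpy as np

        def approximation_qualities(R, C, x, y, tol=1e-9):
            # Return (eps_ne, eps_wsne) for the profile (x, y) in the
            # bimatrix game (R, C), payoffs assumed to lie in [0, 1].
            row_payoffs = R @ y      # payoff of each pure row strategy against y
            col_payoffs = C.T @ x    # payoff of each pure column strategy against x
            # epsilon-NE: largest gain either player gets by deviating from (x, y)
            eps_ne = max(row_payoffs.max() - x @ row_payoffs,
                         col_payoffs.max() - y @ col_payoffs)
            # epsilon-WSNE: every pure strategy in the support must be within
            # epsilon of the best-response payoff
            eps_wsne = max((row_payoffs.max() - row_payoffs[x > tol]).max(),
                           (col_payoffs.max() - col_payoffs[y > tol]).max())
            return eps_ne, eps_wsne

        # Matching pennies with a perturbed row strategy: the profile below is a
        # 0.1-NE but only a 0.2-WSNE, so the well-supported notion is strictly stronger.
        R = np.array([[1.0, 0.0], [0.0, 1.0]])
        C = 1.0 - R
        print(approximation_qualities(R, C, np.array([0.6, 0.4]), np.array([0.5, 0.5])))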

    Polylogarithmic Supports are required for Approximate Well-Supported Nash Equilibria below 2/3

    In an epsilon-approximate Nash equilibrium, a player can gain at most epsilon in expectation by unilateral deviation. An epsilon-well-supported approximate Nash equilibrium has the stronger requirement that every pure strategy used with positive probability must have payoff within epsilon of the best-response payoff. Daskalakis, Mehta and Papadimitriou conjectured that every win-lose bimatrix game has a 2/3-well-supported Nash equilibrium that uses supports of cardinality at most three. Indeed, they showed that such an equilibrium exists subject to the correctness of a graph-theoretic conjecture. Regardless of the correctness of this conjecture, we show that the barrier of a 2/3 payoff guarantee cannot be broken with constant-size supports; we construct win-lose games that require supports of cardinality at least Omega((log n)^(1/3)) in any epsilon-well-supported equilibrium with epsilon < 2/3. The key tool in showing the validity of the construction is a proof of a bipartite digraph variant of the well-known Caccetta-Haggkvist conjecture. A probabilistic argument shows that there exist epsilon-well-supported equilibria with supports of cardinality O(log n / epsilon^2), for any epsilon > 0; thus, the polylogarithmic cardinality bound presented cannot be greatly improved. We also show that for any delta > 0, there exist win-lose games for which no pair of strategies with support sizes at most two is a (1 - delta)-well-supported Nash equilibrium. In contrast, every bimatrix game with payoffs in [0,1] has a 1/2-approximate Nash equilibrium where the supports of the players have cardinality at most two.
    Comment: Added details on related work (footnote 7 expanded).

    Approximate well-supported Nash equilibria in symmetric bimatrix games

    The ε-well-supported Nash equilibrium is a strong notion of approximation of a Nash equilibrium, where no player has an incentive greater than ε to deviate from any of the pure strategies that she uses in her mixed strategy. The smallest constant ε currently known for which there is a polynomial-time algorithm that computes an ε-well-supported Nash equilibrium in bimatrix games is slightly below 2/3. In this paper we study this problem for symmetric bimatrix games and we provide a polynomial-time algorithm that gives a (1/2 + δ)-well-supported Nash equilibrium, for an arbitrarily small positive constant δ.

    Large Supports are required for Well-Supported Nash Equilibria

    We prove that for any constant k and any ε < 1, there exist bimatrix win-lose games for which every ε-WSNE requires supports of cardinality greater than k. To do this, we provide a graph-theoretic characterization of win-lose games that possess ε-WSNE with constant-cardinality supports. We then apply a result in additive number theory of Haight to construct win-lose games that do not satisfy the requirements of the characterization. These constructions disprove graph-theoretic conjectures of Daskalakis, Mehta and Papadimitriou, and of Myers.

    An Empirical Study of Finding Approximate Equilibria in Bimatrix Games

    While there have been a number of studies of the efficacy of methods for finding exact Nash equilibria in bimatrix games, there has been little empirical work on finding approximate Nash equilibria. Here we provide such a study, comparing a number of approximation methods and exact methods. In particular, we explore the trade-off between the quality of an approximate equilibrium and the running time required to find one. We found that the existing library GAMUT, which has been the de facto standard used to test exact methods, is insufficient as a test bed for approximation methods, since many of its games have pure equilibria or other easy-to-find good approximate equilibria. We extend the breadth and depth of our study by including new interesting families of bimatrix games, and by studying bimatrix games of size up to 2000 × 2000. Finally, we provide new close-to-worst-case examples for the best-performing algorithms for finding approximate Nash equilibria.
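
    A minimal sketch of the kind of measurement such a study relies on: for a uniformly random game with payoffs in [0, 1], find the best approximation guarantee achievable by a pure strategy pair. The random game and the numpy-based check are illustrative assumptions and do not reproduce GAMUT's generators or the paper's experiments.

        import numpy as np

        def best_pure_epsilon(R, C):
            # Smallest eps such that some pure strategy pair (i, j) is an
            # eps-Nash equilibrium of the bimatrix game (R, C).
            best_row = R.max(axis=0)   # best-response payoff against each pure column
            best_col = C.max(axis=1)   # best-response payoff against each pure row
            regret = np.maximum(best_row[None, :] - R, best_col[:, None] - C)
            return regret.min()

        # Uniformly random payoff matrices tend to admit very good pure-strategy
        # approximations, which is one reason random test beds can be too easy.
        rng = np.random.default_rng(0)
        R = rng.random((200, 200))
        C = rng.random((200, 200))
        print(best_pure_epsilon(R, C))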

    Distributed Methods for Computing Approximate Equilibria

    We present a new, distributed method to compute approximate Nash equilibria in bimatrix games. In contrast to previous approaches that analyze the two payoff matrices at the same time (for example, by solving a single LP that combines the two players' payoffs), our algorithm first solves two independent LPs, each of which is derived from one of the two payoff matrices, and then computes approximate Nash equilibria using only limited communication between the players. Our method has several applications to improved bounds for the efficient computation of approximate Nash equilibria in bimatrix games. First, it yields the best polynomial-time algorithm for computing approximate well-supported Nash equilibria (WSNE), which is guaranteed to find a 0.6528-WSNE in polynomial time. Furthermore, since our algorithm solves the two LPs separately, it can be used to improve upon the best known algorithms in the limited-communication setting: it can be implemented to obtain a randomized expected-polynomial-time algorithm that uses poly-logarithmic communication and finds a 0.6528-WSNE. The algorithm can also be adapted to beat the best known bound in the query-complexity setting, requiring O(n log n) payoff queries to compute a 0.6528-WSNE. Finally, our approach also yields the best known communication-efficient algorithm for computing approximate Nash equilibria: it uses poly-logarithmic communication to find a 0.382-approximate Nash equilibrium.
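
    The first stage described above, solving one LP per payoff matrix, can be pictured as each player independently computing a maxmin strategy for the zero-sum game induced by a single matrix; the sketch below does that with scipy's linprog. The zero-sum formulation, the example matrices and the omission of the communication phase are assumptions made for illustration, not a reproduction of the paper's algorithm.

        import numpy as np
        from scipy.optimize import linprog

        def maxmin_strategy(A):
            # Maxmin mixed strategy and value for the row (maximizing) player
            # of the zero-sum game with payoff matrix A.
            n, m = A.shape
            c = np.zeros(n + 1)
            c[-1] = -1.0                               # maximize v  <=>  minimize -v
            A_ub = np.hstack([-A.T, np.ones((m, 1))])  # v <= (A^T x)_j for every column j
            b_ub = np.zeros(m)
            A_eq = np.zeros((1, n + 1))
            A_eq[0, :n] = 1.0                          # probabilities sum to one
            b_eq = np.array([1.0])
            bounds = [(0, None)] * n + [(None, None)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            return res.x[:n], res.x[-1]

        # Each player solves an LP derived from one payoff matrix only; the paper's
        # limited-communication phase that combines the strategies is not shown here.
        R = np.array([[0.0, 1.0], [0.9, 0.2]])
        C = np.array([[0.8, 0.1], [0.0, 1.0]])
        x_star, _ = maxmin_strategy(R)
        y_star, _ = maxmin_strategy(C.T)
        print(x_star, y_star)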

    Computing Approximate Nash Equilibria in Polymatrix Games

    In an ε-Nash equilibrium, a player can gain at most ε by unilaterally changing his behaviour. For two-player (bimatrix) games with payoffs in [0,1], the best known ε achievable in polynomial time is 0.3393. In general, for n-player games an ε-Nash equilibrium can be computed in polynomial time for an ε that is an increasing function of n but does not depend on the number of strategies of the players. For three-player and four-player games the corresponding values of ε are 0.6022 and 0.7153, respectively. Polymatrix games are a restriction of general n-player games where a player's payoff is the sum of payoffs from a number of bimatrix games. There exists a very small but constant ε such that computing an ε-Nash equilibrium of a polymatrix game is PPAD-hard. Our main result is that a (0.5 + δ)-Nash equilibrium of an n-player polymatrix game can be computed in time polynomial in the input size and 1/δ. Inspired by the algorithm of Tsaknakis and Spirakis, our algorithm uses gradient descent on the maximum regret of the players. We also show that this algorithm can be applied to efficiently find a (0.5 + δ)-Nash equilibrium in a two-player Bayesian game.
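
    The quantity the descent works on, the maximum regret of a strategy profile, is easy to state explicitly; the sketch below computes it for the two-player case. The matrices and profile are illustrative, and the descent step and the polymatrix generalisation are not reproduced.

        import numpy as np

        def max_regret(R, C, x, y):
            # Maximum regret of the profile (x, y) in the bimatrix game (R, C):
            # the larger of the two amounts the players could gain by switching
            # to a best response. Gradient descent drives this quantity down.
            f_row = (R @ y).max() - x @ R @ y      # row player's regret
            f_col = (C.T @ x).max() - x @ C @ y    # column player's regret
            return max(f_row, f_col)

        R = np.array([[1.0, 0.0], [0.0, 1.0]])
        C = np.array([[0.0, 1.0], [1.0, 0.0]])
        print(max_regret(R, C, np.array([0.5, 0.5]), np.array([0.7, 0.3])))  # ~0.2 for this profile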

    Approximate Equilibrium and Incentivizing Social Coordination

    We study techniques to incentivize self-interested agents to form socially desirable solutions in scenarios where they benefit from mutual coordination. Towards this end, we consider coordination games where agents have different intrinsic preferences but stand to gain if others choose the same strategy as them. For non-trivial versions of our game, stable solutions like Nash Equilibrium may not exist, or may be socially inefficient even when they do exist. This motivates us to focus on designing efficient algorithms to compute (almost) stable solutions like Approximate Equilibrium that can be realized if agents are provided some additional incentives. Our results apply in many settings, such as adoption of new products, project selection, and group formation, where a central authority can direct agents towards a strategy but agents may defect if they have better alternatives. We show that for any given instance we can either compute a high-quality approximate equilibrium or a near-optimal solution that can be stabilized by providing small payments to some players. We then generalize our model to encompass situations where player relationships may exhibit complementarities and present an algorithm to compute an Approximate Equilibrium whose stability factor is linear in the degree of complementarity. Our results imply that a little influence is necessary in order to ensure that selfish players coordinate and form socially efficient solutions.
    Comment: A preliminary version of this work will appear in AAAI-14: Twenty-Eighth Conference on Artificial Intelligence.