    Efficient cooperation by exchanging favors

    We study chip-strategy equilibria in two-player repeated games. Intuitively, in these equilibria players exchange favors by taking individually suboptimal actions if these actions create a "gain" for the opponent larger than the player's "loss" from taking them. In exchange, the player who provides a favor implicitly obtains from the opponent a chip that entitles the player to receive a favor at some future date. Players are initially endowed with a number of chips, and a player who runs out of chips is no longer entitled to receive any favors until she provides a favor to the opponent, in which case she receives one chip back. We show that such simple chip strategies approximate efficient outcomes, as discounting vanishes, in a class of repeated symmetric games with incomplete information in which each player has two possible types. This class includes many important applications, studied in numerous previous papers, such as the favor-exchange model of Mobius (2001), repeated auctions, and the repeated version of the Spulber duopolies of Athey and Bagwell (2001), among others. We also show the limitations of chip strategies. For example, if players have more than two types, then such simple chip strategies may not approximate efficient outcomes even in symmetric games.
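The chip mechanism described above can be illustrated with a short simulation. The sketch below is a stylized rendition, not the paper's model: it assumes that each period one uniformly chosen player needs a favor worth `gain` to her and costing `loss < gain` to the opponent, and that favors are granted only while the needy player still holds a chip. All function and parameter names are illustrative.

```python
import random

def simulate_chip_strategy(periods=10_000, initial_chips=2, gain=2.0, loss=1.0, seed=0):
    """Simulate a stylized two-player favor exchange under a chip strategy.

    Each period one randomly chosen player needs a favor. The opponent grants it
    (paying `loss`, creating `gain` for the needy player) only if the needy player
    still holds at least one chip; the chip then passes to the favor provider.
    This is an illustrative assumption-laden sketch, not the paper's exact model.
    """
    rng = random.Random(seed)
    chips = [initial_chips, initial_chips]   # chips held by player 0 and player 1
    payoff = [0.0, 0.0]
    favors_granted = 0

    for _ in range(periods):
        needy = rng.randrange(2)             # player who would benefit from a favor
        provider = 1 - needy
        if chips[needy] > 0:                 # entitled to a favor only while holding chips
            chips[needy] -= 1
            chips[provider] += 1
            payoff[needy] += gain
            payoff[provider] -= loss
            favors_granted += 1

    efficiency = favors_granted / periods    # fraction of favor opportunities realized
    return payoff, chips, efficiency

if __name__ == "__main__":
    periods = 10_000
    payoff, chips, efficiency = simulate_chip_strategy(periods=periods)
    print("average payoff per period:", [p / periods for p in payoff])
    print(f"final chips: {chips}, favors granted in {efficiency:.1%} of periods")
```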

    Computing Approximate Nash Equilibria in Polymatrix Games

    In an ε-Nash equilibrium, a player can gain at most ε by unilaterally changing his behaviour. For two-player (bimatrix) games with payoffs in [0,1], the best known ε achievable in polynomial time is 0.3393. In general, for n-player games an ε-Nash equilibrium can be computed in polynomial time for an ε that is an increasing function of n but does not depend on the number of strategies of the players. For three-player and four-player games the corresponding values of ε are 0.6022 and 0.7153, respectively. Polymatrix games are a restriction of general n-player games where a player's payoff is the sum of payoffs from a number of bimatrix games. There exists a very small but constant ε such that computing an ε-Nash equilibrium of a polymatrix game is PPAD-hard. Our main result is that a (0.5 + δ)-Nash equilibrium of an n-player polymatrix game can be computed in time polynomial in the input size and 1/δ. Inspired by the algorithm of Tsaknakis and Spirakis, our algorithm uses gradient descent on the maximum regret of the players. We also show that this algorithm can be applied to efficiently find a (0.5 + δ)-Nash equilibrium in a two-player Bayesian game.
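The objective the algorithm descends on, the maximum regret of a mixed-strategy profile, is easy to state concretely. The sketch below evaluates it for a polymatrix game given as per-edge payoff matrices; the data layout and names are assumptions made for illustration, and this shows only the objective, not the descent procedure itself.

```python
import numpy as np

def max_regret(payoff_matrices, profile):
    """Maximum regret of a mixed-strategy profile in a polymatrix game.

    `payoff_matrices[(i, j)]` is the matrix giving player i's payoff in the
    bimatrix game on edge (i, j); player i's total payoff is the sum of
    x_i^T A_ij x_j over those edges. A profile is an eps-Nash equilibrium
    exactly when its maximum regret is at most eps, so this is the quantity
    a descent method would drive down. (Data layout is an assumption.)
    """
    n = len(profile)
    regrets = []
    for i in range(n):
        # Expected payoff of each pure strategy of player i against the others.
        pure_payoffs = sum(
            payoff_matrices[(i, j)] @ profile[j]
            for j in range(n) if (i, j) in payoff_matrices
        )
        current = profile[i] @ pure_payoffs      # payoff of the mixed strategy itself
        regrets.append(pure_payoffs.max() - current)
    return max(regrets)

# Tiny example: a 3-player cycle of 2x2 bimatrix games.
A = {
    (0, 1): np.array([[1.0, 0.0], [0.0, 1.0]]),
    (1, 2): np.array([[1.0, 0.0], [0.0, 1.0]]),
    (2, 0): np.array([[0.0, 1.0], [1.0, 0.0]]),
}
uniform = [np.array([0.5, 0.5]) for _ in range(3)]
print(max_regret(A, uniform))   # 0.0: no player has a profitable deviation here
```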

    Equilibria, Fixed Points, and Complexity Classes

    Many models from a variety of areas involve the computation of an equilibrium or fixed point of some kind. Examples include Nash equilibria in games; market equilibria; computing optimal strategies and the values of competitive games (stochastic and other games); stable configurations of neural networks; analysing basic stochastic models for evolution, like branching processes, and for language, like stochastic context-free grammars; and models that incorporate the basic primitives of probability and recursion, like recursive Markov chains. It is not known whether these problems can be solved in polynomial time. There are certain common computational principles underlying different types of equilibria, which are captured by the complexity classes PLS, PPAD, and FIXP. Representative complete problems for these classes are, respectively, pure Nash equilibria in games where they are guaranteed to exist, (mixed) Nash equilibria in 2-player normal form games, and (mixed) Nash equilibria in normal form games with 3 (or more) players. This paper reviews the underlying computational principles and the corresponding classes.
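As a concrete taste of the fixed-point computations mentioned above, the extinction probability of a branching process is the least fixed point of its offspring probability generating function and can be approximated by iterating that function from 0. The sketch below is illustrative only; the function name and the example offspring distribution are made up.

```python
def extinction_probability(offspring_dist, iterations=1000):
    """Least fixed point q = f(q) of a branching process, by value iteration.

    `offspring_dist[k]` is the probability an individual has k offspring; the
    extinction probability is the smallest solution of q = sum_k p_k * q^k in
    [0, 1]. Iterating f from 0 converges monotonically to this least fixed
    point, a tiny instance of the fixed-point problems behind classes like
    FIXP. (Names are illustrative.)
    """
    q = 0.0
    for _ in range(iterations):
        q = sum(p * q**k for k, p in enumerate(offspring_dist))
    return q

# 0, 1 or 2 offspring with probabilities 0.25, 0.25, 0.5 (mean 1.25, supercritical):
# the extinction probability solves q = 0.25 + 0.25 q + 0.5 q^2, i.e. q = 0.5.
print(extinction_probability([0.25, 0.25, 0.5]))   # ~0.5
```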

    Algorithms for generalized potential games with mixed-integer variables

    We consider generalized potential games, which constitute a fundamental subclass of generalized Nash equilibrium problems. We propose different methods to compute solutions of generalized potential games with mixed-integer variables, i.e., games in which some variables are continuous while the others are discrete. We investigate which types of equilibria of the game can be computed by minimizing a potential function over the common feasible set. In particular, for a wide class of generalized potential games, we characterize those equilibria that can be computed by minimizing potential functions as Pareto solutions of a particular multi-objective problem, and we show how different potential functions can be used to select equilibria. We propose a new Gauss–Southwell algorithm to compute approximate equilibria of any generalized potential game with mixed-integer variables. We show that this method converges in a finite number of steps and we give an upper bound on this number of steps. Moreover, we provide a thorough analysis of the behaviour of approximate equilibria with respect to exact ones. Finally, we present extensive numerical experiments showing the viability of the proposed approaches.
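To make the Gauss–Southwell idea concrete, the sketch below performs block minimization of a potential function over one continuous and one integer variable: at each step the player whose best unilateral move decreases the potential the most actually moves, and the loop stops once no player can improve by more than the tolerance delta, yielding an approximate equilibrium of the associated potential game. This is a schematic illustration under simplifying assumptions, not the paper's algorithm; all names and the toy potential are invented for the example.

```python
def gauss_southwell_potential(potential, best_responses, x0, delta=1e-6, max_iter=1000):
    """Gauss-Southwell-type block minimization of a potential function.

    Each player proposes the best unilateral update of its own (continuous or
    integer) variable; only the player whose update decreases the potential the
    most actually moves. The loop stops when no player can improve the potential
    by more than `delta`. Schematic sketch; all names are illustrative.
    """
    x = list(x0)
    for _ in range(max_iter):
        best_gain, best_player, best_value = 0.0, None, None
        for i, br in enumerate(best_responses):
            candidate = br(x)                       # player i's best unilateral move
            trial = list(x)
            trial[i] = candidate
            gain = potential(x) - potential(trial)  # potential decrease it would achieve
            if gain > best_gain:
                best_gain, best_player, best_value = gain, i, candidate
        if best_gain <= delta:                      # no one improves by more than delta
            break
        x[best_player] = best_value
    return x

# Toy instance: player 0 controls x in [0, 10] (continuous), player 1 controls
# y in {0, ..., 10} (integer); shared potential (x - y)^2 + 0.1 x^2 + 0.5 (y - 3)^2.
P = lambda z: (z[0] - z[1])**2 + 0.1 * z[0]**2 + 0.5 * (z[1] - 3)**2
br_x = lambda z: min(10.0, max(0.0, z[1] / 1.1))              # exact minimizer in x
br_y = lambda z: min(range(11), key=lambda y: P([z[0], y]))   # enumerate the integers
print(gauss_southwell_potential(P, [br_x, br_y], x0=[0.0, 10]))
```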

    Complexity Theory, Game Theory, and Economics: The Barbados Lectures

    This document collects the lecture notes from my mini-course "Complexity Theory, Game Theory, and Economics," taught at the Bellairs Research Institute of McGill University, Holetown, Barbados, February 19-23, 2017, as the 29th McGill Invitational Workshop on Computational Complexity. The goal of this mini-course is twofold: (i) to explain how complexity theory has helped illuminate several barriers in economics and game theory; and (ii) to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several recent breakthroughs. It consists of two five-lecture sequences: the Solar Lectures, focusing on the communication and computational complexity of computing equilibria; and the Lunar Lectures, focusing on applications of complexity theory in game theory and economics. No background in game theory is assumed.