    Equilibria, Fixed Points, and Complexity Classes

    Many models from a variety of areas involve the computation of an equilibrium or fixed point of some kind. Examples include Nash equilibria in games; market equilibria; computing optimal strategies and the values of competitive games (stochastic and other games); stable configurations of neural networks; analysing basic stochastic models for evolution, like branching processes, and for language, like stochastic context-free grammars; and models that incorporate the basic primitives of probability and recursion, like recursive Markov chains. It is not known whether these problems can be solved in polynomial time. There are certain common computational principles underlying different types of equilibria, which are captured by the complexity classes PLS, PPAD, and FIXP. Representative complete problems for these classes are, respectively: pure Nash equilibria in games where they are guaranteed to exist, (mixed) Nash equilibria in 2-player normal form games, and (mixed) Nash equilibria in normal form games with 3 (or more) players. This paper reviews the underlying computational principles and the corresponding classes.
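
    As a hedged illustration of the PPAD-complete problem cited above (a mixed Nash equilibrium of a 2-player normal form game), the sketch below enumerates candidate supports and solves the resulting indifference conditions. The function names are my own, a nondegenerate game is assumed so equilibria have equal-size supports, and the procedure is exponential in the worst case, consistent with the fact that no polynomial-time algorithm is known.

```python
import itertools
import numpy as np

def _indifferent_mix(P, eq_idx, support, size, tol=1e-9):
    """Mixture z of length `size`, supported on `support`, making (P z) equal
    across the indices in `eq_idx` (requires len(eq_idx) == len(support))."""
    k = len(support)
    M = np.zeros((k, k))
    b = np.zeros(k)
    # k - 1 indifference equations plus the normalization sum(z) = 1.
    for row, e in enumerate(eq_idx[1:]):
        M[row] = P[eq_idx[0], list(support)] - P[e, list(support)]
    M[k - 1] = 1.0
    b[k - 1] = 1.0
    try:
        z_sup = np.linalg.solve(M, b)
    except np.linalg.LinAlgError:
        return None
    if np.any(z_sup < -tol):
        return None
    z = np.zeros(size)
    z[list(support)] = np.clip(z_sup, 0.0, None)
    return z

def support_equilibria(A, B, tol=1e-9):
    """Enumerate equal-size supports of the bimatrix game (A, B) and return
    the mixed Nash equilibria (x, y) found (assumes a nondegenerate game)."""
    m, n = A.shape
    found = []
    for k in range(1, min(m, n) + 1):
        for R in itertools.combinations(range(m), k):
            for C in itertools.combinations(range(n), k):
                x = _indifferent_mix(B.T, C, R, m, tol)  # column player indifferent on C
                y = _indifferent_mix(A, R, C, n, tol)    # row player indifferent on R
                if x is None or y is None:
                    continue
                u_row, u_col = A @ y, x @ B
                # Neither player may gain by deviating outside the support.
                if np.all(u_row <= u_row[R[0]] + tol) and np.all(u_col <= u_col[C[0]] + tol):
                    found.append((x, y))
    return found

if __name__ == "__main__":
    # Matching pennies: the unique equilibrium mixes 50/50 for both players.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(support_equilibria(A, -A))
```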

    A Framework for Applied Dynamic Analysis in I.O.

    This paper outlines a framework which computes and analyzes the equilibria from a class of dynamic games. The framework dates to Ericson and Pakes (1995), and allows for a finite number of heterogeneous firms, sequential investments with stochastic outcomes, and entry and exit. The equilibrium analyzed is a Markov Perfect equilibrium in the sense of Maskin and Tirole (1988). The simplest version of the framework is supported by a publicly accessible computer program which computes equilibrium policies for user-specified primitives and then analyzes the evolution of the industry from user-specified initial conditions. We begin by outlining the publicly accessible framework. It allows for three types of competition in the spot market for current output (specified up to a set of parameter values set by the user), and has modules which allow the user to compare the industry structures generated by the Markov Perfect equilibrium to those that would be generated by a social planner and to those that would be generated by perfect collusion. Next we review extensions that have been made to the simple framework. These were largely made by other authors who needed to enrich the framework so that it could be used to provide a realistic analysis of particular applied problems. The third section provides a simple way of evaluating the computational burden of the algorithm for a given set of primitives, and then shows that computational constraints are still binding in many applied situations. The last section reviews two computational algorithms designed to alleviate this computational constraint: one is based on functional form approximations and the other on learning techniques similar to those used in the artificial intelligence literature.
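
    As a back-of-the-envelope illustration of why the computational burden matters (this is not the paper's own calculation; the function name and parameters are illustrative), the sketch below counts the distinct industry structures an equilibrium-computation algorithm of this kind must store when firms are treated as exchangeable, i.e. the number of multisets of individual firm states. The count grows combinatorially in the number of firms and states per firm, which is one way to see why computational constraints remain binding in applied work.

```python
# Counts industry states under exchangeability: a state is a multiset of
# individual firm states, so with n firms and K states per firm there are
# C(n + K - 1, n) structures; we sum over firm counts up to max_firms.
from math import comb

def industry_states(max_firms: int, firm_states: int) -> int:
    """Number of distinct industry structures with up to `max_firms` active
    firms, each occupying one of `firm_states` individual states."""
    return sum(comb(n + firm_states - 1, n) for n in range(max_firms + 1))

if __name__ == "__main__":
    for N in (3, 6, 10, 15):
        for K in (10, 20):
            print(f"N={N:2d} firms, K={K:2d} states/firm -> "
                  f"{industry_states(N, K):,} industry states")
```

    Even this crude count shows the state space reaching the billions for moderately sized industries, which motivates the approximation and learning-based algorithms reviewed in the paper's last section.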

    Mean Field Equilibrium in Dynamic Games with Complementarities

    We study a class of stochastic dynamic games that exhibit strategic complementarities between players; formally, in the games we consider, the payoff of a player has increasing differences between her own state and the empirical distribution of the states of other players. Such games can be used to model a diverse set of applications, including network security models, recommender systems, and dynamic search in markets. Stochastic games are generally difficult to analyze, and these difficulties are only exacerbated when the number of players is large (as might be the case in the preceding examples). We consider an approximation methodology called mean field equilibrium to study these games. In such an equilibrium, each player reacts to only the long-run average state of other players. We find necessary conditions for the existence of a mean field equilibrium in such games. Furthermore, as a simple consequence of this existence theorem, we obtain several natural monotonicity properties. We show that there exist a "largest" and a "smallest" equilibrium among all those where the equilibrium strategy used by a player is nondecreasing, and we also show that players converge to each of these equilibria via natural myopic learning dynamics; as we argue, these dynamics are more reasonable than the standard best response dynamics. We also provide sensitivity results, where we quantify how the equilibria of such games move in response to changes in parameters of the game (e.g., the introduction of incentives to players). (56 pages, 5 figures)
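
    To make the fixed-point logic concrete, the toy sketch below (my own construction, not the authors' model) iterates a best response against the long-run average state of the population and then recomputes that average until it stops moving. Payoffs increase in the population mean, giving a rough analogue of the complementarities discussed above, and starting the iteration from the lowest and highest initial means illustrates the "smallest" and "largest" equilibria and the myopic learning dynamics the abstract describes.

```python
# Toy mean field model (assumed for illustration): each agent has states
# 0..S-1, chooses to invest (a=1) or not (a=0), and earns a flow payoff that
# increases in the long-run population mean state m (a complementarity).
import numpy as np

S, BETA = 6, 0.9                      # individual states, discount factor
P_UP, P_DOWN, COST = 0.7, 0.3, 0.8    # transition probabilities, investment cost

def best_policy(m):
    """Value-iterate a single agent's MDP given population mean state m."""
    V = np.zeros(S)
    for _ in range(500):
        Q = np.empty((S, 2))
        for s in range(S):
            reward = s * m / (S - 1)  # marginal value of own state rises with m
            up, down = min(s + 1, S - 1), max(s - 1, 0)
            # a=0: no investment, state decays with probability P_DOWN
            Q[s, 0] = reward + BETA * (P_DOWN * V[down] + (1 - P_DOWN) * V[s])
            # a=1: pay COST, state improves with probability P_UP
            Q[s, 1] = reward - COST + BETA * (P_UP * V[up] + (1 - P_UP) * V[s])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new
    return Q.argmax(axis=1)

def stationary_mean(policy):
    """Long-run mean state of a population in which everyone uses `policy`."""
    T = np.zeros((S, S))
    for s in range(S):
        up, down = min(s + 1, S - 1), max(s - 1, 0)
        if policy[s] == 1:
            T[s, up] += P_UP
            T[s, s] += 1 - P_UP
        else:
            T[s, down] += P_DOWN
            T[s, s] += 1 - P_DOWN
    dist = np.full(S, 1.0 / S)
    for _ in range(5000):                # power iteration to the stationary law
        dist = dist @ T
    return float(dist @ np.arange(S))

def mfe_from(m0):
    """Myopic iteration: best-respond to m, then update m; stop at a fixed point."""
    m = m0
    for _ in range(100):
        m_next = stationary_mean(best_policy(m))
        if abs(m_next - m) < 1e-8:
            return m_next
        m = m_next
    return m

if __name__ == "__main__":
    # Starting from the extremes converges to distinct equilibria in this toy
    # model, echoing the "smallest" and "largest" equilibria in the abstract.
    print("from below:", mfe_from(0.0))
    print("from above:", mfe_from(float(S - 1)))
```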