13,345 research outputs found

    Game-theoretical control with continuous action sets

    Motivated by the recent applications of game-theoretical learning techniques to the design of distributed control systems, we study a class of control problems that can be formulated as potential games with continuous action sets, and we propose an actor-critic reinforcement learning algorithm that provably converges to equilibrium in this class of problems. The method employed is to analyse the learning process under study through a mean-field dynamical system that evolves in an infinite-dimensional function space (the space of probability distributions over the players' continuous controls). To do so, we extend the theory of finite-dimensional two-timescale stochastic approximation to an infinite-dimensional, Banach space setting, and we prove that the continuous dynamics of the process converge to equilibrium in the case of potential games. These results combine to give a provably-convergent learning algorithm in which players do not need to keep track of the controls selected by the other agents.
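
    Below is a minimal two-timescale actor-critic sketch in Python, intended only to illustrate the separation of timescales the abstract describes: the critic's payoff estimates move on a faster step size than the actor's policy scores. The action grid, the common-interest payoff function, and the step-size choices are illustrative assumptions, not the paper's algorithm, which works directly with distributions over a continuous action set.

        import numpy as np

        # Minimal two-timescale actor-critic sketch on a discretised action grid.
        # The common-interest payoff below is a stand-in potential function, not
        # the control problems studied in the paper.
        rng = np.random.default_rng(0)
        grid = np.linspace(-1.0, 1.0, 21)    # discretisation of the continuous action set

        def payoff(own, other):              # common potential: coordinate near zero
            return -(own - other) ** 2 - 0.1 * (own ** 2 + other ** 2)

        def softmax(score):
            p = np.exp(score - score.max())
            return p / p.sum()

        scores = [np.zeros(len(grid)) for _ in range(2)]   # actor: one score per action
        q_vals = [np.zeros(len(grid)) for _ in range(2)]   # critic: estimated payoff per action

        for t in range(1, 20001):
            fast, slow = t ** -0.6, 1.0 / t                # critic adapts faster than the actor
            picks = [rng.choice(len(grid), p=softmax(s)) for s in scores]
            a = [grid[k] for k in picks]
            for i in range(2):
                r = payoff(a[i], a[1 - i])
                q_vals[i][picks[i]] += fast * (r - q_vals[i][picks[i]])  # critic update
                scores[i] += slow * (q_vals[i] - scores[i])              # actor drifts toward critic
        print([grid[np.argmax(s)] for s in scores])        # modal actions end up near zero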

    Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics

    Evolutionary game dynamics is one of the most fruitful frameworks for studying evolution in different disciplines, from Biology to Economics. Within this context, the approach of choice for many researchers is the so-called replicator equation, which describes mathematically the idea that individuals performing better have more offspring and thus grow in frequency in the population. While many interesting results have been obtained with this equation in the three decades since it was first proposed, it is important to realize the limits of its applicability. One particularly relevant issue in this respect is that of non-mean-field effects, which may arise from temporal fluctuations or from spatial correlations, both neglected in the replicator equation. This review discusses these temporal and spatial effects, focusing on the non-trivial modifications they induce compared to the outcome of replicator dynamics. Alongside this question, the hypothesis of linearity and its relation to the choice of the strategy-update rule are also analyzed. The discussion is presented in terms of the emergence of cooperation, one of the current key problems in Biology and other disciplines.
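
    As a concrete reference point for the review, the replicator equation for a population state x and payoff matrix A reads x_i' = x_i [ (Ax)_i - x·Ax ]. The short Python sketch below integrates it with an explicit Euler step for a Hawk-Dove game; the payoff values are chosen for illustration and are not taken from the review.

        import numpy as np

        # Euler integration of the replicator equation x_i' = x_i * ((A x)_i - x.A.x)
        # for an illustrative 2x2 Hawk-Dove game.
        A = np.array([[0.0, 3.0],    # Hawk vs Hawk, Hawk vs Dove
                      [1.0, 2.0]])   # Dove vs Hawk, Dove vs Dove
        x = np.array([0.1, 0.9])     # initial shares of Hawks and Doves
        dt = 0.01
        for _ in range(20000):
            fitness = A @ x                     # expected payoff of each strategy
            avg = x @ fitness                   # population-average payoff
            x = x + dt * x * (fitness - avg)    # replicator update
            x = np.clip(x, 0.0, None); x /= x.sum()
        print(x)   # approaches the mixed equilibrium (1/2, 1/2) for these payoffs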

    Best-response Dynamics in Zero-sum Stochastic Games

    We define and analyse three learning dynamics for two-player zero-sum discounted-payoff stochastic games. A continuous-time best-response dynamic in mixed strategies is proved to converge to the set of Nash equilibrium stationary strategies. Extending this, we introduce a fictitious-play-like process in a continuous-time embedding of a stochastic zero-sum game, which is again shown to converge to the set of Nash equilibrium strategies. Finally, we present a modified δ-converging best-response dynamic, in which the discount rate converges to 1 and the learned value converges to the asymptotic value of the zero-sum stochastic game. The critical feature of all three dynamic processes is a separation of adaptation rates: beliefs about the value of states adapt more slowly than the strategies do, and in the case of the δ-converging dynamic the discount rate adapts more slowly than everything else.
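
    The sketch below discretises the simplest of these objects, a continuous-time best-response dynamic, for a single-state zero-sum game (matching pennies); the full construction in the paper additionally layers slowly adapting state values and, for the δ-converging dynamic, a slowly changing discount rate on top of this. The payoff matrix and step size here are illustrative assumptions.

        import numpy as np

        # Discretised best-response dynamics  x' = BR(y) - x,  y' = BR(x) - y
        # for the zero-sum matrix game of matching pennies.
        A = np.array([[ 1.0, -1.0],
                      [-1.0,  1.0]])     # row player's payoff; the column player gets -A

        def best_response(payoffs):
            b = np.zeros(len(payoffs)); b[np.argmax(payoffs)] = 1.0
            return b

        x = np.array([0.9, 0.1])         # row player's mixed strategy
        y = np.array([0.2, 0.8])         # column player's mixed strategy
        dt = 0.01
        for _ in range(5000):
            x = x + dt * (best_response(A @ y) - x)      # row responds to the current y
            y = y + dt * (best_response(-A.T @ x) - y)   # column responds to the current x
        print(x, y)   # both strategies approach the unique equilibrium (1/2, 1/2)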

    Inclusive Cognitive Hierarchy

    Cognitive hierarchy theory, a collection of structural models of non-equilibrium thinking in which players' best responses rely on heterogeneous beliefs about others' strategies, including naive behavior, has proved powerful in explaining observations from a wide range of games. We introduce an inclusive cognitive hierarchy model, in which players do not rule out the possibility of facing opponents at their own thinking level. Our theoretical results show that inclusiveness is crucial for the asymptotic properties of deviations from equilibrium behavior in expansive games. We show that the limiting behaviors fall into three distinct types: naive, Savage-rational with inconsistent beliefs, and sophisticated. We test the model in a laboratory experiment on collective decision-making. The data suggest that inclusiveness is indispensable for the explanatory power of models of hierarchical thinking.
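
    The following Python sketch contrasts a standard cognitive hierarchy with an inclusive variant in a p-beauty contest (target = p times the mean guess): under inclusiveness, a level-k player also puts weight on opponents of its own level, which turns each level's best response into a small fixed-point problem. The Poisson parameter, the value of p, and the level-0 rule are illustrative assumptions, not the paper's specification or estimates.

        import numpy as np
        from math import exp, factorial

        # Level-k guesses in a p-beauty contest under standard vs. inclusive
        # cognitive hierarchy with Poisson(tau) level frequencies.
        p, tau, K = 2.0 / 3.0, 1.5, 6
        freq = np.array([exp(-tau) * tau ** k / factorial(k) for k in range(K + 1)])

        def ch_guesses(inclusive):
            g = np.zeros(K + 1)
            g[0] = 50.0                           # level 0 guesses the midpoint of [0, 100]
            for k in range(1, K + 1):
                top = k if inclusive else k - 1   # highest level the player thinks it may face
                w = freq[: top + 1] / freq[: top + 1].sum()
                if inclusive:
                    # g_k = p * (sum_{j<k} w_j g_j + w_k g_k), solved for g_k
                    g[k] = p * (w[:k] @ g[:k]) / (1.0 - p * w[k])
                else:
                    g[k] = p * (w @ g[:k])
            return g

        print("standard :", np.round(ch_guesses(False), 2))
        print("inclusive:", np.round(ch_guesses(True), 2))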

    A formula for the value of a stochastic game

    In 1953, Lloyd Shapley defined the model of stochastic games, which was the first general model of dynamic games, and proved that competitive stochastic games have a discounted value. In 1982, Jean-François Mertens and Abraham Neyman proved that competitive stochastic games admit a robust solution concept, the value, which is equal to the limit of the discounted values as the discount rate goes to 0. Both contributions were published in PNAS. In the present paper, we provide a tractable formula for the value of competitive stochastic games.
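
    The paper's contribution is a closed-form formula; as background, the sketch below computes the discounted value in the classical way, by iterating Shapley's operator v(s) = val[ r(s,·,·) + λ Σ_{s'} P(s'|s,·,·) v(s') ], with the one-shot matrix-game value obtained by linear programming. The two-state game data and the discount factor are invented for illustration and are not taken from the paper.

        import numpy as np
        from scipy.optimize import linprog

        def matrix_game_value(M):
            """Value of the zero-sum matrix game M (row player maximises), via LP."""
            m, n = M.shape
            c = np.zeros(m + 1); c[-1] = -1.0                # variables: row strategy x and value v; maximise v
            A_ub = np.hstack([-M.T, np.ones((n, 1))])        # v <= x . M[:, j] for every column j
            A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0   # probabilities sum to one
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * m + [(None, None)])
            return res.x[-1]

        # two states, two actions per player: r[s] is the stage payoff matrix,
        # P[s, a, b] the next-state distribution (all values invented for the example)
        r = np.array([[[1.0, 0.0], [0.0, 2.0]],
                      [[-1.0, 1.0], [2.0, 0.0]]])
        P = np.array([[[[0.9, 0.1], [0.5, 0.5]], [[0.5, 0.5], [0.2, 0.8]]],
                      [[[0.3, 0.7], [0.6, 0.4]], [[0.8, 0.2], [0.1, 0.9]]]])
        lam = 0.9                                            # discount factor

        v = np.zeros(2)
        for _ in range(200):                                 # Shapley's operator is a lam-contraction
            v = np.array([matrix_game_value(r[s] + lam * P[s] @ v) for s in range(2)])
        print(v)                                             # discounted values of the two states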