A note on adjusted replicator dynamics in iterated games

Abstract

We establish how a rich collection of evolutionary games can arise as asymptotically exact descriptions of player strategies in iterated games. We consider arbitrary normal-form games that are played iteratively by players who observe their own payoffs after each round. Each player's strategy is assumed to depend only on that player's own past actions and past payoffs. We study a class of autonomous reinforcement-learning rules for such players and show that variants of the adjusted replicator dynamics are asymptotically exact approximations of player strategies as the step-size parameter used in learning becomes small. We also obtain a convergence result that identifies when a stable equilibrium of the limit dynamics characterizes the equilibrium behavior of player strategies.

Keywords: adjusted replicator dynamics, reinforcement learning, stochastic approximations
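As a point of reference, the (Maynard Smith) adjusted replicator dynamics for a single population with positive payoff matrix A takes the form dx_i/dt = x_i ((Ax)_i − xᵀAx) / (xᵀAx), i.e. the standard replicator flow rescaled by the mean payoff. The sketch below is a minimal Euler-integration illustration of that ODE, not the paper's specific learning rule; the 2x2 coordination game, initial point, and step size are illustrative assumptions.

```python
import numpy as np

def adjusted_replicator_step(x, A, dt):
    """One Euler step of the adjusted replicator dynamics
    dx_i/dt = x_i * ((A x)_i - x.A x) / (x.A x).
    Entries of A are assumed positive so the mean payoff x.A x > 0.
    The step preserves sum(x) == 1 exactly."""
    payoffs = A @ x                 # (A x)_i: payoff to each pure strategy
    mean = x @ payoffs              # x.A x: population mean payoff
    return x + dt * x * (payoffs - mean) / mean

# Illustrative 2x2 coordination game with positive payoffs.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
x = np.array([0.4, 0.6])            # start above the mixed equilibrium x1 = 1/3
for _ in range(2000):
    x = adjusted_replicator_step(x, A, dt=0.01)
# The flow converges toward the pure equilibrium (1, 0).
```

Because the drift terms sum to zero, the iterate stays on the simplex; starting on the basin side of the mixed equilibrium, the trajectory converges to the nearby stable pure equilibrium, illustrating the kind of stable rest point the paper's convergence result concerns.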

    Last time updated on 06/07/2012