
    Payoff Information and Learning in Signaling Games

    We add the assumption that players know their opponents' payoff functions and rationality to a model of non-equilibrium learning in signaling games. Agents are born into player roles and play against random opponents every period. Inexperienced agents are uncertain about the prevailing distribution of opponents' play, but believe that opponents never choose conditionally dominated strategies. Agents engage in active learning and update beliefs based on personal observations. Payoff information can refine or expand learning predictions, since patient young senders' experimentation incentives depend on which receiver responses they deem plausible. We show that with payoff knowledge, the limiting set of long-run learning outcomes is bounded above by rationality-compatible equilibria (RCE) and bounded below by uniform RCE. RCE refine the Intuitive Criterion (Cho and Kreps, 1987) and include all divine equilibria (Banks and Sobel, 1987). Uniform RCE exist in some games but not in others, and they imply universally divine equilibrium. Comment: This material was previously part of a larger paper titled "Type-Compatible Equilibria in Signalling Games," which was split into two smaller papers: "Learning and Type Compatibility in Signaling Games" and "Payoff Information and Learning in Signaling Games."
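    The restriction inexperienced senders impose is that receivers never play conditionally dominated responses. Below is a minimal Python sketch, with made-up payoff values and a pure-strategy version of the dominance check rather than the paper's model, of how one might list the receiver responses a sender would deem plausible after each message:

        # Minimal sketch with hypothetical payoffs: pure-strategy conditional
        # dominance for receiver responses in a toy 2-type, 2-message,
        # 2-action signaling game (all values are illustrative).
        types = ["strong", "weak"]
        messages = ["m1", "m2"]
        actions = ["a1", "a2"]

        # hypothetical receiver payoffs u_R[(type, message, action)]
        u_R = {
            ("strong", "m1", "a1"): 1, ("strong", "m1", "a2"): 0,
            ("weak",   "m1", "a1"): 0, ("weak",   "m1", "a2"): 1,
            ("strong", "m2", "a1"): 1, ("strong", "m2", "a2"): 0,
            ("weak",   "m2", "a1"): 2, ("weak",   "m2", "a2"): 1,
        }

        def conditionally_dominated(m, a):
            """a is (pure-strategy) conditionally dominated at message m if some
            other action does strictly better against every type sending m."""
            return any(
                all(u_R[(t, m, b)] > u_R[(t, m, a)] for t in types)
                for b in actions if b != a
            )

        for m in messages:
            plausible = [a for a in actions if not conditionally_dominated(m, a)]
            print(m, "-> responses a rational receiver might choose:", plausible)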

    On the robustness of learning in games with stochastically perturbed payoff observations

    Motivated by the scarcity of accurate payoff feedback in practical applications of game theory, we examine a class of learning dynamics where players adjust their choices based on past payoff observations that are subject to noise and random disturbances. First, in the single-player case (corresponding to an agent trying to adapt to an arbitrarily changing environment), we show that the stochastic dynamics under study lead to no regret almost surely, irrespective of the noise level in the player's observations. In the multi-player case, we find that dominated strategies become extinct, and we show that strict Nash equilibria are stochastically stable and attracting; conversely, if a state is stable or attracting with positive probability, then it is a Nash equilibrium. Finally, we provide an averaging principle for 2-player games, and we show that in zero-sum games with an interior equilibrium, time averages converge to Nash equilibrium for any noise level. Comment: 36 pages, 4 figures.
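    For intuition about why dominated strategies die out even under noisy payoff feedback, here is a rough sketch using exponential-weights updating as a stand-in for the paper's dynamics; the payoff matrix, step size, and noise level are all made up:

        # Sketch (not the paper's exact dynamics): exponential-weights learning
        # with noisy payoff observations in a symmetric 3-strategy game played
        # in self-play, where strategy 2 is strictly dominated by strategy 0.
        import numpy as np

        rng = np.random.default_rng(0)
        A = np.array([[3.0, 1.0, 4.0],
                      [2.0, 2.0, 3.0],
                      [1.0, 0.0, 2.0]])   # row 2 strictly dominated by row 0

        x = np.ones(3) / 3                # current mixed strategy (population state)
        scores = np.zeros(3)
        eta, sigma = 0.1, 1.0             # step size and observation-noise level

        for t in range(5000):
            noisy_payoffs = A @ x + sigma * rng.normal(size=3)  # perturbed feedback
            scores += eta * noisy_payoffs
            x = np.exp(scores - scores.max())
            x /= x.sum()

        print("long-run mixed strategy:", np.round(x, 3))  # weight on strategy 2 ~ 0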

    Statistical mechanics of spatial evolutionary games

    We discuss the long-run behavior of stochastic dynamics of many interacting players in spatial evolutionary games. In particular, we investigate the effect of the number of players and the noise level on the stochastic stability of Nash equilibria. We discuss similarities and differences between systems of interacting players maximizing their individual payoffs and particles minimizing their interaction energy. We use concepts and techniques of statistical mechanics to study game-theoretic models. In order to obtain results in the case of so-called potential games, we analyze the thermodynamic limit of the appropriate models of interacting particles. Comment: 19 pages, to appear in J. Phys.
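    The statistical-mechanics connection is easiest to see for log-linear (logit) choice, which on a potential game is a Gibbs sampler at temperature 1/beta. A sketch on a ring of players follows; the payoffs, ring size, and noise level are illustrative and not taken from the paper:

        # Assumed illustration: logit dynamics for a 2-strategy coordination
        # game on a ring, each player's payoff summed over her two neighbours.
        import numpy as np

        rng = np.random.default_rng(1)
        n, beta, steps = 60, 2.0, 200_000          # beta = inverse noise level
        payoff = np.array([[4.0, 0.0],             # own strategy 0 vs neighbour 0 / 1
                           [3.0, 2.0]])            # strategy 1 is risk dominant
        state = rng.integers(0, 2, size=n)         # random initial profile on the ring

        for _ in range(steps):
            i = rng.integers(n)
            left, right = state[(i - 1) % n], state[(i + 1) % n]
            u = payoff[:, left] + payoff[:, right]  # payoff of each candidate strategy
            p = np.exp(beta * (u - u.max()))
            state[i] = rng.choice(2, p=p / p.sum())

        # at low noise (large beta) the profile should spend most of its time
        # near the configuration where everyone plays the risk-dominant strategy
        print("share playing the risk-dominant strategy:", state.mean())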

    Long-run equilibria, dominated strategies, and local interactions

    The present note revisits a result by Kim and Wong (2010) showing that any strict Nash equilibrium of a coordination game can be supported as a long-run equilibrium by properly adding dominated strategies. We show that in the circular-city model of local interactions, the selection of 1/2-dominant strategies survives the addition of strictly dominated strategies if interaction is "decentral". Conversely, if the local interaction structure is "central", any equilibrium strategy of the original game can be supported as a long-run equilibrium by adding properly suited dominated strategies. JEL Classification: C72, D83
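    As a concrete illustration of the objects involved (the payoffs below are made up and are not the construction in Kim and Wong, 2010): a strategy is 1/2-dominant if it is the unique best reply whenever opponents put probability at least one half on it, and appending a suitably chosen strictly dominated strategy can destroy this property:

        import numpy as np

        G = np.array([[5.0, 0.0],
                      [4.0, 2.0]])      # strategy 1 is risk dominant, hence 1/2-dominant

        def is_half_dominant(M, s):
            """s is 1/2-dominant if it is the unique best reply whenever the opponent
            puts weight >= 1/2 on s; by linearity the extreme points suffice."""
            n = M.shape[0]
            for w in (0.5, 1.0):
                for other in range(n):
                    x = np.zeros(n)
                    x[s] += w
                    x[other] += 1.0 - w
                    u = M @ x
                    if any(u[r] >= u[s] for r in range(n) if r != s):
                        return False
            return True

        print([is_half_dominant(G, s) for s in (0, 1)])       # [False, True]

        # append a strictly dominated third strategy against which strategy 0
        # does very well; no strategy of the extended game is 1/2-dominant
        G3 = np.array([[5.0, 0.0, 8.0],
                       [4.0, 2.0, 0.0],
                       [-1.0, -1.0, -1.0]])
        print([is_half_dominant(G3, s) for s in (0, 1, 2)])   # [False, False, False]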

    Equilibrium Selection Through Incomplete Information in Coordination Games: An Experimental Study

    We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment, each subject receives a noisy signal about the true payoffs. This game (inspired by the “global” games of Carlsson and van Damme, Econometrica, 61, 989–1018, 1993) has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (and thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. In the baseline game, the behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that this behavior can be explained by learning. To test this hypothesis, we use a different game with incomplete information, related to a complete-information game in which learning and prior experiments suggest different behavior. Indeed, in the second treatment, behavior did not converge to equilibrium within 50 periods in some of the sessions. We also run both games under complete information. The results are sufficiently similar between complete and incomplete information to suggest that risk dominance is also an important part of the explanation.
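    The risk-dominance benchmark is easy to compute for a symmetric 2x2 coordination game: the risk-dominant equilibrium is the one with the larger product of deviation losses, which is the outcome the global-games selection (Carlsson and van Damme, 1993) picks as the signal noise vanishes. A small sketch with illustrative payoffs:

        import numpy as np

        # row player's payoffs in a symmetric 2x2 coordination game (illustrative)
        G = np.array([[9.0, 0.0],
                      [8.0, 7.0]])          # (A,A) pays 9, (B,B) pays 7

        # by symmetry the product of deviation losses is a squared difference
        loss_AA = (G[0, 0] - G[1, 0]) ** 2  # loss from deviating at (A,A)
        loss_BB = (G[1, 1] - G[0, 1]) ** 2  # loss from deviating at (B,B)

        print("payoff-dominant equilibrium: (A,A)")
        print("risk-dominant equilibrium:", "(A,A)" if loss_AA > loss_BB else "(B,B)")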

    Survival of dominated strategies under evolutionary dynamics

    We show that any evolutionary dynamic satisfying three mild requirements (continuity, positive correlation, and innovation) fails to eliminate strictly dominated strategies in some games. Likewise, we demonstrate that existing elimination results for evolutionary dynamics are not robust to small changes in the specifications of the dynamics.
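    A construction used for counterexamples of this kind pairs "bad" rock-paper-scissors with a "feeble twin" of one strategy: an exact copy whose payoffs are uniformly lowered by a small penalty, so the twin is strictly dominated. Below is a rough numerical sketch of the Brown-von Neumann-Nash dynamic on such a game; the parameter values are made up, the survival result itself is established analytically, and this simulation only illustrates the setup:

        import numpy as np

        eps = 0.1                                   # dominance penalty for the twin
        A = np.array([[ 0.0, -2.0,  1.0,  1.0],
                      [ 1.0,  0.0, -2.0, -2.0],
                      [-2.0,  1.0,  0.0,  0.0],
                      [-2.0 - eps, 1.0 - eps, -eps, -eps]])   # dominated twin of Scissors

        x = np.array([0.4, 0.3, 0.2, 0.1])          # initial population shares
        dt, T = 0.02, 200_000
        min_twin = 1.0

        for t in range(T):
            payoffs = A @ x
            excess = np.maximum(payoffs - x @ payoffs, 0.0)   # positive excess payoffs
            x = x + dt * (excess - x * excess.sum())          # BNN vector field
            x = np.maximum(x, 0.0); x /= x.sum()              # numerical safeguard
            if t > T // 2:
                min_twin = min(min_twin, x[3])

        print("smallest share of the dominated twin over the late run:", round(min_twin, 4))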