    Learning Nash Equilibria

    In this paper, we re-investigate the long-run behavior of an adaptive learning process driven by the stochastic replicator dynamics of Fudenberg and Harris (1992). We demonstrate that a Nash equilibrium is the robust limit of the adaptive learning process as long as the learning dynamics reach it in almost surely finite time. Doob's martingale theory and the Girsanov theorem play central roles in establishing this result.
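The setting can be illustrated with a minimal Euler-Maruyama sketch of replicator dynamics with Brownian payoff shocks, in the spirit of Fudenberg and Harris (1992). The 2x2 payoff matrix, noise level, and step size below are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric 2x2 game: strategy 1 strictly dominates
# strategy 0, so the unique Nash equilibrium is the pure state (0, 1).
A = np.array([[1.0, 0.0],
              [2.0, 3.0]])

def stochastic_replicator(x0, T=50.0, dt=0.01, sigma=0.1):
    """Euler-Maruyama sketch of replicator dynamics with Brownian
    payoff shocks (Fudenberg-Harris style perturbation)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        # payoff increment: deterministic part plus a Brownian shock
        du = A @ x * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
        x = x + x * (du - (x @ du))   # replicator update
        x = np.clip(x, 1e-12, None)
        x = x / x.sum()               # keep x on the simplex numerically
    return x

x = stochastic_replicator([0.5, 0.5])
print(x)  # mass concentrates on the dominant strategy
```

With a small noise level the drift toward the equilibrium dominates, matching the paper's theme that the Nash equilibrium is the robust limit of the noisy learning process.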

    Testing the TASP: An Experimental Investigation of Learning in Games with Unstable Equilibria

    We report experiments designed to test between Nash equilibria that are stable and unstable under learning. The “TASP” (Time Average of the Shapley Polygon) gives a precise prediction about what happens when play diverges from equilibrium under fictitious-play-like learning processes. We use two 4 × 4 games, each with a unique mixed Nash equilibrium; one is stable and one is unstable under learning. Both games are versions of Rock-Paper-Scissors with the addition of a fourth strategy, Dumb. The Nash equilibrium places a weight of 1/2 on Dumb in both games, but the TASP places no weight on Dumb when the equilibrium is unstable. We also vary the level of monetary payoffs, with higher payoffs predicted to increase instability. We find that the high-payoff unstable treatment differs from the others: the frequency of Dumb is lower and play is further from Nash than in the other treatments. That is, we find support for the comparative statics prediction of learning theory, although the frequency of Dumb is substantially greater than zero in the unstable treatments.

    Keywords: games, experiments, TASP, learning, unstable, mixed equilibrium, fictitious play.
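For intuition, here is a minimal sketch of discrete-time fictitious play on plain 3 × 3 Rock-Paper-Scissors, used as an illustrative stand-in since the paper's 4 × 4 payoff matrices are not reproduced here. Each player best-responds to the opponent's empirical frequencies; the time average of play is the object that constructs like the TASP describe.

```python
import numpy as np

# Standard zero-sum Rock-Paper-Scissors payoffs for the row player
# (an illustrative stand-in for the paper's 4x4 variants with "Dumb").
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

def fictitious_play(steps=5000):
    """Both players best-respond to the opponent's empirical mixture."""
    c1 = np.ones(3)   # row player's action counts (uniform prior)
    c2 = np.ones(3)   # column player's action counts
    for _ in range(steps):
        a1 = np.argmax(A @ (c2 / c2.sum()))     # row best response
        a2 = np.argmax(-A.T @ (c1 / c1.sum()))  # column best response
        c1[a1] += 1
        c2[a2] += 1
    return c1 / c1.sum(), c2 / c2.sum()

p, q = fictitious_play()
print(p, q)
```

Because this stand-in game is zero-sum, the empirical frequencies approach the mixed equilibrium (1/3, 1/3, 1/3) even though actual play keeps cycling; the experiments concern games where even the time average departs from Nash.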

    Learning coordination through new actions

    We provide a novel approach to achieving a desired outcome in a coordination game: the original 2x2 game is embedded in a 2x3 game where one of the players may use a third action. For a large set of payoff values, only one of the Nash equilibria of the original 2x2 game is stable under replicator dynamics. We show that this Nash equilibrium is the ω-limit of all initial conditions in the interior of the state space for the modified 2x3 game. Thus the existence of a third action for one of the players, although that action goes unused, allows both players to coordinate on one Nash equilibrium. This Nash equilibrium is the one preferred by, at least, the player with access to the new action. This approach addresses both coordination failure (players choose the payoff-dominant Nash equilibrium, if one exists) and miscoordination (players do not use mixed strategies).
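The mechanism can be sketched with two-population replicator dynamics on a hypothetical 2x3 embedding (the payoffs below are illustrative, not the paper's): player 2's third action earns slightly more than coordinating on the inferior equilibrium, which destabilizes that equilibrium, yet the third action itself dies out and play settles on the preferred equilibrium.

```python
import numpy as np

# Hypothetical 2x3 embedding of a 2x2 coordination game (illustrative
# payoffs, not the paper's). Rows: player 1's two actions; columns:
# player 2's three actions, the third being the added action.
A = np.array([[2.0, 0.0, 0.5],   # player 1's payoffs
              [0.0, 1.0, 0.0]])
B = np.array([[2.0, 0.0, 0.0],   # player 2's payoffs: the new third
              [0.0, 1.0, 1.2]])  # action beats coordinating on (a2, b2)

def replicator_2pop(x0, y0, T=1000.0, dt=0.05):
    """Euler sketch of two-population replicator dynamics."""
    x, y = np.array(x0), np.array(y0)
    for _ in range(int(T / dt)):
        u = A @ y          # player 1's payoff to each action
        v = x @ B          # player 2's payoff to each action
        x = x + dt * x * (u - x @ u)
        y = y + dt * y * (v - y @ v)
        x = np.clip(x, 1e-12, None); x = x / x.sum()
        y = np.clip(y, 1e-12, None); y = y / y.sum()
    return x, y

# Start near the inferior equilibrium (a2, b2): play still escapes to
# (a1, b1), and the third action is unused in the limit.
x, y = replicator_2pop([0.1, 0.9], [0.05, 0.9, 0.05])
print(x, y)
```

The third column destabilizes (a2, b2) without being part of the limit outcome, which is the qualitative point of the construction.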

    Unified Convergence Proofs of Continuous-Time Fictitious Play

    Learning in Perturbed Asymmetric Games

    We investigate the stability of mixed-strategy equilibria in two-person (bimatrix) games under perturbed best response dynamics. A mixed equilibrium is asymptotically stable under all such dynamics if and only if the game is linearly equivalent to a zero-sum game; in this case, the mixed equilibrium is also globally asymptotically stable. Global convergence to the set of perturbed equilibria is also shown for (rescaled) partnership games, also known as potential games. Lastly, mixed equilibria of partnership games are shown to be always unstable under all dynamics of this class.
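As a concrete instance, here is a sketch of one perturbed best response dynamic, the logit dynamic, on matching pennies, a zero-sum game whose unique mixed equilibrium (1/2, 1/2) should then be globally asymptotically stable. The noise level eta, the time horizon, and the starting strategies are arbitrary choices for illustration.

```python
import numpy as np

# Matching pennies: a zero-sum 2x2 game whose unique equilibrium is
# the mixed strategy (1/2, 1/2) for both players.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])   # row player's payoffs

def logit(u, eta):
    """Perturbed (logit) best response with noise level eta."""
    z = np.exp(u / eta)
    return z / z.sum()

def logit_dynamic(T=200.0, dt=0.01, eta=0.2):
    """Euler sketch of the logit best response dynamic."""
    x = np.array([0.9, 0.1])   # row player's mixed strategy
    y = np.array([0.2, 0.8])   # column player's mixed strategy
    for _ in range(int(T / dt)):
        x = x + dt * (logit(A @ y, eta) - x)
        y = y + dt * (logit(-A.T @ x, eta) - y)
    return x, y

x, y = logit_dynamic()
print(x, y)
```

Trajectories spiral into the (logit-perturbed) equilibrium, which for this symmetric game coincides with (1/2, 1/2); in a game not equivalent to a zero-sum game, the same dynamic can instead spiral away from a mixed equilibrium.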

    A Simple Test of Learning Theory?

    We report experiments designed to test the theoretical possibility, first discovered by Shapley (1964), that in some games learning fails to converge to any equilibrium, either in terms of marginal frequencies or of average play. Subjects played repeatedly, in fixed pairings, one of two 3 × 3 games, each having a unique Nash equilibrium in mixed strategies. The equilibrium of one game is predicted to be stable under learning, the other unstable, provided payoffs are sufficiently high. We ran each game in high- and low-payoff treatments. We find that, in all treatments, average play is close to equilibrium even though there are strong cycles present in the data.
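Shapley's (1964) non-convergence example can be reproduced in a few lines: under discrete fictitious play on his 3 × 3 bimatrix game, best responses cycle in ever-longer runs, so play never settles on the unique mixed equilibrium. (The two 3 × 3 games used in the experiment are not reproduced in the abstract; this is the classic example they build on.)

```python
import numpy as np

# Shapley's (1964) 3x3 bimatrix game: fictitious play cycles through
# best responses in ever-longer runs and never converges to the
# unique mixed Nash equilibrium (1/3, 1/3, 1/3).
A = np.array([[0, 1, 0],     # row player's payoffs
              [0, 0, 1],
              [1, 0, 0]])
B = np.array([[0, 0, 1],     # column player's payoffs
              [1, 0, 0],
              [0, 1, 0]])

def fictitious_play(steps=5000):
    """Track empirical frequencies and the row player's action path."""
    c1, c2 = np.ones(3), np.ones(3)   # action counts (uniform priors)
    actions = []
    for _ in range(steps):
        a1 = np.argmax(A @ (c2 / c2.sum()))    # row best response
        a2 = np.argmax(B.T @ (c1 / c1.sum()))  # column best response
        c1[a1] += 1
        c2[a2] += 1
        actions.append(int(a1))
    return c1 / c1.sum(), c2 / c2.sum(), actions

p, q, actions = fictitious_play()
print(p, q)
```

Inspecting `actions` shows runs of each strategy whose lengths grow over time, which is the cycling behavior the experiment looks for in human play.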