    Testing the TASP: An Experimental Investigation of Learning in Games with Unstable Equilibria

    We report experiments designed to test between Nash equilibria that are stable and unstable under learning. The “TASP” (Time Average of the Shapley Polygon) gives a precise prediction about what happens when play diverges from equilibrium under fictitious-play-like learning processes. We use two 4×4 games, each with a unique mixed Nash equilibrium; one is stable and one is unstable under learning. Both games are versions of Rock-Paper-Scissors with the addition of a fourth strategy, Dumb. Nash equilibrium places a weight of 1/2 on Dumb in both games, but the TASP places no weight on Dumb when the equilibrium is unstable. We also vary the level of monetary payoffs, with higher payoffs predicted to increase instability. We find that the high-payoff unstable treatment differs from the others: the frequency of Dumb is lower and play is further from Nash than in the other treatments. That is, we find support for the comparative statics prediction of learning theory, although the frequency of Dumb is substantially greater than zero in the unstable treatments.
    Keywords: games, experiments, TASP, learning, unstable, mixed equilibrium, fictitious play.
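    A minimal sketch of the kind of learning process the TASP describes, under assumptions: the payoff matrix below is a hypothetical RPS-plus-Dumb game (the experimental payoffs are not given in the abstract). Under fictitious play, each player best-responds to the empirical frequency of the opponent's past actions, and the TASP concerns the time-average of the resulting play.

```python
import numpy as np

# Hypothetical 4x4 payoff matrix: Rock, Paper, Scissors, plus a fourth
# strategy, Dumb (indices 0-3 = R, P, S, D). The experimental payoffs
# are not reported in the abstract; these numbers are illustrative only.
A = np.array([[ 0.0, -1.0,  1.0,  0.5],
              [ 1.0,  0.0, -1.0,  0.5],
              [-1.0,  1.0,  0.0,  0.5],
              [-0.5, -0.5, -0.5,  0.0]])

def fictitious_play(A, T=50_000):
    """Two-player fictitious play on a symmetric game: each player
    best-responds to the empirical frequency of the opponent's past
    actions. Returns the time-average of the row player's play."""
    n = A.shape[0]
    row_belief = np.ones(n)   # row's counts of column's past actions
    col_belief = np.ones(n)   # column's counts of row's past actions
    for _ in range(T):
        a_row = np.argmax(A @ (row_belief / row_belief.sum()))
        # Symmetric game: the column player's payoff matrix is A^T, so
        # its expected payoffs are A @ (belief about the row player).
        a_col = np.argmax(A @ (col_belief / col_belief.sum()))
        row_belief[a_col] += 1
        col_belief[a_row] += 1
    return col_belief / col_belief.sum()

print(fictitious_play(A))   # time-average weights on R, P, S, Dumb
```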

    The effect of candidate quality on electoral equilibrium: An experimental study

    When two candidates of different quality compete in a one-dimensional policy space, the equilibrium outcomes are asymmetric and do not correspond to the median. There are three main effects. First, the better candidate adopts more centrist policies than the worse candidate. Second, the equilibrium is statistical, in the sense that it predicts a probability distribution of outcomes rather than a single degenerate outcome. Third, the equilibrium varies systematically with the level of uncertainty about the location of the median voter. We test these three predictions using laboratory experiments and find strong support for all three. We also observe some biases and show that they can be explained by quantal response equilibrium.
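    The quantal response equilibrium (QRE) invoked here replaces exact best responses with noisy (softmax) responses. A generic sketch of a logit QRE fixed-point iteration follows; the 2×2 game and the precision parameter lam are placeholders, not the authors' estimated game or parameters.

```python
import numpy as np

def logit_qre(A, B, lam=2.0, iters=5000, tol=1e-12):
    """Fixed-point iteration for a logit quantal response equilibrium of
    a bimatrix game (A, B): each player softmax-responds, with precision
    lam, to the other's mixed strategy. Damped updates aid convergence."""
    p = np.ones(A.shape[0]) / A.shape[0]
    q = np.ones(A.shape[1]) / A.shape[1]
    for _ in range(iters):
        up, uq = A @ q, B.T @ p               # expected payoffs
        p_new = np.exp(lam * (up - up.max())); p_new /= p_new.sum()
        q_new = np.exp(lam * (uq - uq.max())); q_new /= q_new.sum()
        p, q = 0.5 * (p + p_new), 0.5 * (q + q_new)
        if abs(p - p_new).max() + abs(q - q_new).max() < tol:
            break
    return p, q

# Toy asymmetric 2x2 game (illustrative payoffs, not the paper's game).
A = np.array([[3., 0.], [1., 2.]])
B = np.array([[2., 1.], [0., 3.]])
print(logit_qre(A, B))
```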

    Social conformity and equilibrium in pure strategies in games with many players

    We introduce a framework of noncooperative pregames, in which players are characterized by their attributes, and demonstrate that all games with sufficiently many players have approximate (ε-)Nash equilibria in pure strategies. In fact, every mixed-strategy equilibrium can be used to construct an ε-equilibrium in pure strategies, an ‘ε-purification’ result. Our main result is a social conformity theorem. Interpret a set of players, all with attributes in some convex subset of attribute space and all playing the same strategy, as a society. Observe that the number of societies may be as large as the number of players. Our social conformity result dictates that, given ε > 0, there is an integer L, depending on ε but not on the number of players, such that any sufficiently large game has an ε-equilibrium in pure strategies that induces a partition of the player set into fewer than L societies.
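    The ε-purification idea can be illustrated numerically, under assumptions: the anonymous, congestion-style payoff below is a hypothetical stand-in for the pregame payoff functions, not the paper's construction. When every player independently draws a pure action from a mixed equilibrium, the law of large numbers drives each player's regret toward zero as the number of players grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def payoff(action, dist):
    """Anonymous, congestion-style payoff: a player prefers actions
    chosen by fewer of the other players (illustrative assumption)."""
    return -dist[action]

def purification_gap(n_players, mix, trials=100):
    """Monte Carlo estimate of the largest regret when every player
    independently draws a pure action from the mixed profile `mix`."""
    k, worst = len(mix), 0.0
    for _ in range(trials):
        actions = rng.choice(k, size=n_players, p=mix)
        for i in range(min(n_players, 10)):   # spot-check a few players
            dist = np.bincount(np.delete(actions, i), minlength=k) / (n_players - 1)
            best = max(payoff(a, dist) for a in range(k))
            worst = max(worst, best - payoff(actions[i], dist))
    return worst

mix = np.ones(3) / 3   # symmetric mixed equilibrium of this toy game
for n in (10, 100, 1000):
    print(n, round(purification_gap(n, mix), 3))   # regret shrinks with n
```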

    A Foundation for Markov Equilibria in Infinite Horizon Perfect Information Games

    We study perfect information games with an infinite horizon played by an arbitrary number of players. This class of games includes infinitely repeated perfect information games, repeated games with asynchronous moves, games with long- and short-run players, games with overlapping generations of players, and canonical non-cooperative models of bargaining. We consider two restrictions on equilibria. An equilibrium is purifiable if close-by behavior is consistent with equilibrium when agents’ payoffs at each node are perturbed additively and independently. An equilibrium has bounded recall if there exists K such that at most one player’s strategy depends on what happened more than K periods earlier. We show that only Markov equilibria have bounded recall and are purifiable. Thus, if a game has at most one long-run player, all purifiable equilibria are Markov.
    Keywords: Markov, bounded recall, purification.

    Learning Nash Equilibria

    In this paper, we re-investigate the long-run behavior of an adaptive learning process driven by the stochastic replicator dynamics of Fudenberg and Harris (1992). We demonstrate that the Nash equilibrium is the robust limit of the adaptive learning process as long as it is reachable by the learning dynamics in almost surely finite time. Doob’s martingale theory and the Girsanov theorem play central roles in establishing this result.
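    A simplified two-strategy simulation of stochastic replicator dynamics, in the spirit of Fudenberg and Harris (1992), can be sketched as follows. The payoff matrix and noise scale sigma are illustrative assumptions, and the equation used is a simplified form, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 coordination game (not taken from the paper).
A = np.array([[2., 0.],
              [0., 1.]])

def stochastic_replicator(x0=0.3, T=50.0, dt=1e-3, sigma=0.3):
    """Euler-Maruyama simulation of a simplified stochastic replicator
    equation with Brownian payoff shocks:
        dx = x(1-x) (u1(x) - u2(x)) dt + sigma x(1-x) dW,
    where x is the population share of strategy 1."""
    x = x0
    for _ in range(int(T / dt)):
        u = A @ np.array([x, 1.0 - x])        # expected payoffs u1, u2
        drift = x * (1.0 - x) * (u[0] - u[1])
        diffusion = sigma * x * (1.0 - x) * rng.normal() * np.sqrt(dt)
        x = min(max(x + drift * dt + diffusion, 0.0), 1.0)
    return x

print(stochastic_replicator())   # long-run share of strategy 1
```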

    Learning in Perturbed Asymmetric Games

    We investigate the stability of mixed-strategy equilibria in two-person (bimatrix) games under perturbed best response dynamics. A mixed equilibrium is asymptotically stable under all such dynamics if and only if the game is linearly equivalent to a zero-sum game; in this case, the mixed equilibrium is also globally asymptotically stable. Global convergence to the set of perturbed equilibria is also shown for (rescaled) partnership games, also known as potential games. Lastly, mixed equilibria of partnership games are shown to always be unstable under all dynamics in this class.
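    The zero-sum case can be illustrated with a short sketch of logit (perturbed best response) dynamics on matching pennies, whose unique mixed equilibrium is (1/2, 1/2). The discretization and the precision parameter lam are illustrative choices, not the paper's formulation.

```python
import numpy as np

def softmax(u, lam):
    z = np.exp(lam * (u - u.max()))
    return z / z.sum()

def logit_dynamics(A, lam=5.0, dt=0.01, T=200.0):
    """Discretized logit (perturbed best response) dynamics for a
    zero-sum bimatrix game (column player's payoffs are -A):
        p' = softmax(lam * A q) - p,   q' = softmax(-lam * A^T p) - q."""
    n, m = A.shape
    p, q = np.ones(n) / n, np.ones(m) / m
    for _ in range(int(T / dt)):
        p = p + dt * (softmax(A @ q, lam) - p)
        q = q + dt * (softmax(-A.T @ p, lam) - q)
    return p, q

# Matching pennies: zero-sum, so the perturbed equilibrium is globally
# asymptotically stable by the result described in the abstract.
A = np.array([[ 1., -1.],
              [-1.,  1.]])
print(logit_dynamics(A))   # both strategies close to [0.5, 0.5]
```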

    Efficient Online Quantum Generative Adversarial Learning Algorithms with Applications

    The exploration of quantum algorithms that possess quantum advantages is a central topic in quantum computation and quantum information processing. One potential candidate in this area is quantum generative adversarial learning (QuGAL), which conceptually has exponential advantages over classical adversarial networks. However, the corresponding learning algorithm has remained elusive. In this paper, we propose the first quantum generative adversarial learning algorithm, the quantum multiplicative matrix weight algorithm (QMMW), which enables the efficient processing of fundamental tasks. The computational complexity of QMMW scales polynomially with the number of training rounds and logarithmically with the input size. The core concept of the proposed algorithm combines QuGAL with online learning. We exploit the implementation of QuGAL with parameterized quantum circuits, and numerical experiments on the task of entanglement testing for pure states are provided to support our claims.
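    The abstract does not spell out the QMMW update, but its name suggests the standard classical matrix multiplicative weights (MMW) method as its online-learning backbone. The sketch below shows that classical MMW update as an assumed analogue only; the maintained density matrices are the classical stand-ins for quantum states.

```python
import numpy as np
from scipy.linalg import expm

def matrix_multiplicative_weights(losses, eta=0.1):
    """Classical matrix multiplicative weights update (assumed classical
    counterpart of QMMW): rho_t is the matrix exponential of the
    negated, scaled sum of past loss matrices, normalized to trace 1,
    so each rho_t is a density matrix (PSD, unit trace)."""
    d = losses[0].shape[0]
    cum = np.zeros((d, d))
    states = []
    for L in losses:
        rho = expm(-eta * cum)
        rho /= np.trace(rho)
        states.append(rho)
        cum += L                  # observe this round's loss, then update
    return states

# Toy usage with random symmetric loss matrices (illustrative only).
rng = np.random.default_rng(0)
losses = [(M + M.T) / 2 for M in rng.normal(size=(5, 3, 3))]
print(matrix_multiplicative_weights(losses)[-1])
```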