
    Feature-weighted categorized play across symmetric games

    Experimental game theory studies the behavior of agents who face a stream of one-shot games, treating play as a form of learning. Most of the literature focuses on a single, identically recurring game. This paper embeds single-game learning in a broader perspective in which learning can take place across similar games. We posit that agents categorize games into a few classes and tend to play the same action within a class. An agent's categories are generated by combining game features (payoffs) with individual motives. An individual categorization is experience-based and may change over time. We demonstrate our approach by testing a robust (parameter-free) model on a large body of independent experimental evidence on 2 × 2 symmetric games. The model provides a very good fit across games, performing markedly better than standard learning models.
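The payoff-based classes the abstract alludes to can be illustrated with the standard taxonomy of symmetric 2 × 2 games. This is a rough sketch only: it uses the usual payoff-order taxonomy, not the paper's own feature construction.

```python
def classify_symmetric_2x2(a, b, c, d):
    """Classify a symmetric 2x2 game by payoff order, where the row player
    gets a for (C,C), b for (C,D), c for (D,C), and d for (D,D).
    This is the textbook taxonomy, not the paper's experience-based categories."""
    if c > a and d > b:
        return "prisoners-dilemma-like"  # D strictly dominates C
    if a > c and b > d:
        return "harmony"                 # C strictly dominates D
    if a > c and d > b:
        return "coordination"            # both (C,C) and (D,D) are pure equilibria
    if c > a and b > d:
        return "hawk-dove"               # only anti-coordinated pure equilibria
    return "degenerate"                  # ties in the payoff ordering

# Standard prisoner's dilemma payoffs (3, 0, 5, 1) fall into the first class.
label = classify_symmetric_2x2(3, 0, 5, 1)
```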

    The minority game: An economics perspective

    This paper gives a critical account of the minority-game literature. The minority game is a simple congestion game: players choose between two options, and those who selected the option chosen by the minority win. The learning model proposed in this literature seems to differ markedly from the learning models commonly used in economics. We relate the learning model from the minority-game literature to standard game-theoretic learning models and show that it in fact shares many features with them. However, its predictions differ considerably from those of most other learning models. We discuss the main predictions of the learning model proposed in the minority-game literature and compare them to experimental findings on congestion games.
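The game described above is easy to simulate. The following sketch implements the classic Challet–Zhang setup, in which each agent holds a few fixed lookup-table strategies over recent outcomes; all parameter values are arbitrary illustrations, not taken from the paper.

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2, rounds=200, seed=0):
    """Classic minority game: each agent holds `n_strategies` fixed lookup
    tables mapping the last `memory` outcomes to an action in {0, 1}, plays
    the table with the best virtual score, and the minority side wins."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # each strategy maps every possible history index to an action
    strategies = [[[rng.randint(0, 1) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_hist)  # bit-encoded recent winning sides
    attendance = []                  # how many agents chose option 1 each round
    for _ in range(rounds):
        actions = [strategies[i][max(range(n_strategies),
                                     key=lambda s: scores[i][s])][history]
                   for i in range(n_agents)]
        ones = sum(actions)
        minority = 1 if ones < n_agents - ones else 0
        for i in range(n_agents):    # virtual scoring: credit every strategy
            for s in range(n_strategies):
                if strategies[i][s][history] == minority:
                    scores[i][s] += 1
        history = ((history << 1) | minority) % n_hist
        attendance.append(ones)
    return attendance

att = minority_game()  # attendance series; its variance is the usual object of study
```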

    Unbeatable Imitation

    We show that for many classes of symmetric two-player games, the simple decision rule "imitate-the-best" can hardly be beaten by any other decision rule. We provide necessary and sufficient conditions for imitation to be unbeatable and show that it can be beaten by much only in games of the rock-scissors-paper variety. Thus, in many interesting examples, such as 2 × 2 games, Cournot duopoly, price competition, rent seeking, public goods games, common-pool resource games, minimum-effort coordination games, arms races, search, and bargaining, imitation cannot be beaten by much, even by a very clever opponent.
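As a concrete illustration of the rule (a sketch under simple assumptions, not the paper's formal analysis), consider imitate-the-best with small experimentation in a linear Cournot duopoly. Such imitation dynamics are known to drift toward the competitive (Walrasian) output, here (a - c)/2 = 45 per firm; the demand and cost parameters below are arbitrary choices.

```python
import random

def cournot_profit(q_own, q_other, a=100.0, c=10.0):
    # linear inverse demand p = a - total quantity, constant marginal cost c
    price = max(a - (q_own + q_other), 0.0)
    return (price - c) * q_own

def imitate_the_best(q1=20.0, q2=60.0, rounds=500, noise=1.0, seed=0):
    """Each round, both players adopt whichever last-period quantity earned
    the higher profit, then experiment with a small random perturbation."""
    rng = random.Random(seed)
    for _ in range(rounds):
        p1 = cournot_profit(q1, q2)
        p2 = cournot_profit(q2, q1)
        best = q1 if p1 >= p2 else q2          # imitate the more profitable quantity
        q1 = max(best + rng.uniform(-noise, noise), 0.0)
        q2 = max(best + rng.uniform(-noise, noise), 0.0)
    return q1, q2

q1, q2 = imitate_the_best()  # both quantities end up near the Walrasian output
```

The intuition: when total output is below the competitive level, the higher of two nearby quantities earns the higher profit and is imitated, so output drifts up; above it, the lower quantity wins, so output drifts down.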

    Query Complexity of Approximate Equilibria in Anonymous Games

    We study the computation of equilibria of anonymous games via algorithms that proceed through a sequence of adaptive queries to the game's payoff function, which is assumed to be unknown initially. The general topic we consider is query complexity, that is, how many queries are necessary or sufficient to compute an exact or approximate Nash equilibrium. We show that exact equilibria cannot be found by query-efficient algorithms. We also give an example of a 2-strategy, 3-player anonymous game that has no exact Nash equilibrium in rational numbers. However, more positive query-complexity bounds are attainable if either further symmetries of the utility functions are assumed or we focus on approximate equilibria. We investigate four subclasses of anonymous games previously considered in the literature [bfh09, dp14]. Our main result is a new randomized query-efficient algorithm that finds an O(n^{-1/4})-approximate Nash equilibrium by querying Õ(n^{3/2}) payoffs and runs in time Õ(n^{3/2}). This improves on the running time of pre-existing algorithms for approximate equilibria of anonymous games and is the first to obtain an inverse-polynomial approximation in polynomial time. We also show how this can be used as an efficient polynomial-time approximation scheme (PTAS). Furthermore, we prove that Ω(n log n) payoffs must be queried in order to find any ε-well-supported Nash equilibrium, even by randomized algorithms.

    Experience-weighted Attraction Learning in Normal Form Games

    In ‘experience-weighted attraction’ (EWA) learning, strategies have attractions that reflect initial predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ that weights the strength of hypothetical reinforcement of strategies that were not chosen, according to the payoff they would have yielded, relative to reinforcement of chosen strategies according to received payoffs. The other key features are two discount rates, φ and ρ, which separately discount previous attractions and an experience weight. EWA includes reinforcement learning and weighted fictitious play (belief learning) as special cases and hybridizes their key elements. When δ = 0 and ρ = 0, cumulative choice reinforcement results. When δ = 1 and ρ = φ, levels of reinforcement of strategies are exactly the same as expected payoffs given weighted fictitious-play beliefs. Using three sets of experimental data, parameter estimates of the model were calibrated on part of the data and used to predict a holdout sample. Estimates of δ are generally around 0.50, φ around 0.8–1, and ρ varies from 0 to φ. Reinforcement and belief-learning special cases are generally rejected in favor of EWA, though belief models do better in some constant-sum games. EWA is able to combine the best features of previous approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, while reinforcing unchosen strategies substantially as belief-based models implicitly do.
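The attraction update sketched above can be written out directly. This is a minimal sketch following Camerer and Ho's formulation; here `payoffs[j]` is the payoff strategy j would have earned against the opponent's realized play, and `n_exp` is the experience weight N(t-1).

```python
import math
import random

def ewa_update(attractions, n_exp, chosen, payoffs, delta, phi, rho):
    """One EWA step: previous attractions are discounted by phi, the
    experience weight by rho; unchosen strategies are reinforced by delta
    times their forgone payoff, the chosen one by its full received payoff."""
    n_new = rho * n_exp + 1.0
    new_attr = []
    for j, a in enumerate(attractions):
        weight = 1.0 if j == chosen else delta
        new_attr.append((phi * n_exp * a + weight * payoffs[j]) / n_new)
    return new_attr, n_new

def logit_choice(attractions, lam, rng):
    """Logit rule: pick strategy j with probability proportional to exp(lam * A_j)."""
    weights = [math.exp(lam * a) for a in attractions]
    r = rng.random() * sum(weights)
    for j, w in enumerate(weights):
        r -= w
        if r <= 0:
            return j
    return len(weights) - 1

# One illustrative step: strategy 0 was chosen and paid 5; strategy 1 would have paid 3.
attr, n = ewa_update([0.0, 0.0], 1.0, chosen=0, payoffs=[5.0, 3.0],
                     delta=0.5, phi=0.9, rho=0.9)
```

Setting δ = 0 and ρ = 0 reduces the update to (decayed) cumulative choice reinforcement, while δ = 1 and ρ = φ reproduces weighted fictitious play, matching the special cases noted in the abstract.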

    Dynamical selection of Nash equilibria using Experience Weighted Attraction Learning: emergence of heterogeneous mixed equilibria

    We study the distribution of strategies in a large game that models how agents choose among different double-auction markets. We classify the possible mean-field Nash equilibria, which include potentially segregated states where an agent population can split into subpopulations adopting different strategies. As the game is aggregative, however, the actual equilibrium strategy distributions remain undetermined. We therefore compare with the results of Experience-Weighted Attraction (EWA) learning, which at long times leads to Nash equilibria in the appropriate limits of large intensity of choice, low noise (long agent memory), and perfect imputation of missing scores (fictitious play). The learning dynamics breaks the indeterminacy of the Nash equilibria. Non-trivially, depending on how the relevant limits are taken, more than one type of equilibrium can be selected. These include the standard homogeneous mixed and heterogeneous pure states, but also heterogeneous mixed states in which different agents play different strategies that are not all pure. The analysis of the EWA learning involves Fokker-Planck modeling combined with large-deviation methods. The theoretical results are confirmed by multi-agent simulations.

    Contagion through learning

    We study learning in a large class of complete-information normal-form games. Players continually face new strategic situations and must form beliefs by extrapolation from similar past situations. We characterize the long-run outcomes of learning in terms of iterated dominance in a related incomplete-information game with subjective priors. The use of extrapolations in learning may generate contagion of actions across games even if players learn only from games with payoffs very close to the current ones. Contagion may lead to unique long-run outcomes where multiplicity would occur if players learned through repeatedly playing the same game. The process of contagion through learning is formally related to contagion in global games, although the outcomes generally differ.
    Keywords: similarity, learning, contagion, case-based reasoning, global games