    Aspiration Dynamics of Multi-player Games in Finite Populations

    Studying strategy update rules in the framework of evolutionary game theory, one can differentiate between imitation processes and aspiration-driven dynamics. In the former case, individuals imitate the strategy of a more successful peer. In the latter case, individuals adjust their strategies based on a comparison of their payoffs from the evolutionary game to a value they aspire to, called the level of aspiration. Unlike imitation processes of pairwise comparison, aspiration-driven updates do not require additional information about the strategic environment and can thus be interpreted as more spontaneous. Recent work has mainly focused on understanding how aspiration dynamics alter the evolutionary outcome in structured populations. However, the baseline case for understanding strategy selection is the well-mixed population, which is still insufficiently understood. We explore how aspiration-driven strategy-update dynamics under imperfect rationality influence the average abundance of a strategy in multi-player evolutionary games with two strategies. We analytically derive a condition under which one strategy is more abundant than the other in the limit of weak selection. The weak selection approach has a long-standing history in evolutionary game theory and is applied chiefly for its mathematical tractability; hence, we also explore strong selection numerically, which shows that our weak-selection condition is a robust predictor of the average abundance of a strategy. The condition turns out to differ from that of a wide class of imitation dynamics whenever the game is not dyadic. Therefore, a strategy favored under imitation dynamics can be disfavored under aspiration dynamics. This requires no population structure and thus highlights the intrinsic difference between imitation and aspiration dynamics.
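    For intuition, here is a minimal Monte Carlo sketch of such an aspiration-driven update in a well-mixed population, assuming the common Fermi-type switching probability 1/(1 + exp(−β(α − π))), where α is the aspiration level, π the realized payoff, and β the selection intensity (β → 0 recovers weak selection). The toy d-player game and all parameter values are hypothetical, not taken from the paper.

```python
import random
import math

# Minimal sketch of aspiration-driven updating in a well-mixed population.
# The payoff function, group size d, aspiration level alpha, and selection
# intensity beta are illustrative choices, not the paper's specification.

N, d = 100, 5            # population size, players per game
alpha, beta = 1.0, 1.0   # aspiration level, selection intensity

def payoff(strategy, k):
    """Payoff of a focal player given k co-players using strategy A
    in a toy d-player public-goods-like game (hypothetical numbers)."""
    if strategy == 'A':          # contributor
        return 2.0 * (k + 1) / d - 1.0
    return 2.0 * k / d           # defector free-rides

pop = ['A'] * (N // 2) + ['B'] * (N // 2)
for step in range(200_000):
    i = random.randrange(N)
    group = random.sample([j for j in range(N) if j != i], d - 1)
    k = sum(1 for j in group if pop[j] == 'A')
    pi = payoff(pop[i], k)
    # Switching becomes likely when the payoff falls short of the aspiration:
    if random.random() < 1.0 / (1.0 + math.exp(-beta * (alpha - pi))):
        pop[i] = 'B' if pop[i] == 'A' else 'A'

print('average abundance of A:', pop.count('A') / N)
```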

    Little Information, Efficiency, and Learning - An Experimental Study

    Earlier experiments have shown that under little information subjects are hardly able to coordinate, even though there are no conflicting interests and subjects are organised in fixed pairs. This is so even though a simple adjustment process would lead the subjects to the efficient, fair, and individually payoff-maximising outcome. We draw on this finding and design an experiment in which subjects repeatedly play 4 simple games within 4 sets of 40 rounds under little information. This way we are able to investigate (i) the coordination abilities of the subjects depending on the underlying game, (ii) the resulting efficiency loss, and (iii) the adjustment of the learning rule.
    Keywords: mutual fate control, matching pennies, fate control, behaviour control, learning, coordination, little information
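    The kind of "simple adjustment process" at issue can be illustrated with win-stay/lose-shift in a mutual fate control game, in which each player's payoff is determined entirely by the partner's action. A minimal sketch, with illustrative payoffs and rule details:

```python
# Win-stay/lose-shift in a mutual fate control game: your payoff depends
# only on the partner's action. Payoff values are illustrative assumptions.

def mfc_payoffs(a, b):
    # action 1 = "reward the partner"; your payoff is set by the other's action
    return (1 if b == 1 else 0), (1 if a == 1 else 0)

def wsls(action, payoff):
    # win-stay/lose-shift: keep the action after a good payoff, switch otherwise
    return action if payoff == 1 else 1 - action

a, b = 0, 0  # start uncoordinated
for t in range(10):
    pa, pb = mfc_payoffs(a, b)
    print(f'round {t}: actions=({a},{b}) payoffs=({pa},{pb})')
    a, b = wsls(a, pa), wsls(b, pb)
```

    From any starting pair of actions, this rule reaches the mutually rewarding outcome within two rounds and then stays there, which is what makes the experimentally observed coordination failure under little information noteworthy.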

    Experience-weighted Attraction Learning in Normal Form Games

    In ‘experience-weighted attraction’ (EWA) learning, strategies have attractions that reflect initial predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ that weights the strength of hypothetical reinforcement of strategies that were not chosen according to the payoff they would have yielded, relative to reinforcement of chosen strategies according to received payoffs. The other key features are two discount rates, φ and ρ, which separately discount previous attractions and an experience weight. EWA includes reinforcement learning and weighted fictitious play (belief learning) as special cases, and hybridizes their key elements. When δ = 0 and ρ = 0, cumulative choice reinforcement results. When δ = 1 and ρ = φ, levels of reinforcement of strategies are exactly the same as expected payoffs given weighted fictitious play beliefs. Using three sets of experimental data, parameter estimates of the model were calibrated on part of the data and used to predict a holdout sample. Estimates of δ are generally around 0.50, φ around 0.8–1, and ρ varies from 0 to φ. Reinforcement and belief-learning special cases are generally rejected in favor of EWA, though belief models do better in some constant-sum games. EWA is able to combine the best features of previous approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, but reinforcing unchosen strategies substantially as belief-based models implicitly do.
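    Concretely, the attraction of strategy j is updated as A_j(t) = [φ·N(t−1)·A_j(t−1) + (δ + (1−δ)·1{j chosen})·π_j(t)] / N(t), with experience weight N(t) = ρ·N(t−1) + 1. A minimal sketch of one such update follows; the payoff numbers, initial values, and logit precision lam are illustrative assumptions.

```python
import math

# One experience-weighted attraction (EWA) update for a single player,
# following the standard form described above.

def ewa_update(A, n_prev, chosen, payoffs, delta, phi, rho):
    """A: list of attractions; payoffs[j]: payoff strategy j would have
    earned this round; chosen: index of the strategy actually played."""
    n = rho * n_prev + 1.0                      # updated experience weight
    new_A = []
    for j, a in enumerate(A):
        w = 1.0 if j == chosen else delta       # hypothetical reinforcement weight
        new_A.append((phi * n_prev * a + w * payoffs[j]) / n)
    return new_A, n

def logit_probs(A, lam=1.0):
    z = [math.exp(lam * a) for a in A]
    s = sum(z)
    return [x / s for x in z]

# One illustrative round in a 2-strategy game:
A, n = [0.0, 0.0], 1.0
A, n = ewa_update(A, n, chosen=0, payoffs=[3.0, 5.0], delta=0.5, phi=0.9, rho=0.9)
print(A, logit_probs(A))
```

    Setting delta = 0 and rho = 0 reduces this to (discounted) cumulative choice reinforcement, while delta = 1 and rho = phi reproduces weighted fictitious play, matching the special cases noted above.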

    Endogenous Selection of Aspiring and Rational rules in Coordination Games

    The paper studies an evolutionary model where players from a given population are randomly matched in pairs each period to play a coordination game. At each instant, a player can choose to adopt one of two possible behavior rules, called the rational rule and the aspiring rule, and then take the actions prescribed by the chosen rule. The choice between the two rules depends upon their relative performance in the immediate past. We show that there are two stable long-run outcomes: either the rational rule becomes extinct and all players in the population achieve full efficiency, or both behavior rules co-exist and there is only a partial use of efficient strategies in the population. These findings support the use of aspiration-driven behavior in several existing studies and also help us take a comparative evolutionary look at the two rules in retrospect.
    Keywords: Co-evolution, Aspirations, Best-response, Random matching, Coordination games
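    A toy simulation in the spirit of this rule competition is sketched below: "rational" players best-respond to last period's action frequencies, "aspiring" players keep an action whose payoff met their aspiration and experiment otherwise, and a small fraction of players adopt whichever rule earned more last period. All payoffs, the aspiration level, and the switching details are illustrative assumptions rather than the paper's specification.

```python
import random

# Rule competition sketch for a 2x2 coordination game with an efficient
# action E (payoff 2 on a match) and an inefficient action I (payoff 1).

random.seed(1)
N = 200
PAY = {('E', 'E'): 2.0, ('I', 'I'): 1.0, ('E', 'I'): 0.0, ('I', 'E'): 0.0}
ASPIRATION = 1.5

rules = ['rational' if i < N // 2 else 'aspiring' for i in range(N)]
acts = [random.choice('EI') for _ in range(N)]

for t in range(500):
    freq_E = acts.count('E') / N
    # pair players at random and record realized payoffs
    order = random.sample(range(N), N)
    pay = [0.0] * N
    for a, b in zip(order[::2], order[1::2]):
        pay[a] = PAY[(acts[a], acts[b])]
        pay[b] = PAY[(acts[b], acts[a])]
    # average payoff earned by each rule this period
    perf = {}
    for r in ('rational', 'aspiring'):
        members = [i for i in range(N) if rules[i] == r]
        perf[r] = sum(pay[i] for i in members) / max(len(members), 1)
    better = max(perf, key=perf.get)
    # action updates: best response vs. aspiration-based satisficing
    for i in range(N):
        if rules[i] == 'rational':
            acts[i] = 'E' if 2.0 * freq_E >= 1.0 * (1 - freq_E) else 'I'
        elif pay[i] < ASPIRATION:
            acts[i] = random.choice('EI')
    # a small fraction of players switch to the better-performing rule
    for i in random.sample(range(N), N // 20):
        rules[i] = better

print('share aspiring:', rules.count('aspiring') / N,
      '| share playing E:', acts.count('E') / N)
```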


    Valuation equilibrium

    We introduce a new solution concept for games in extensive form with perfect information, valuation equilibrium, which is based on a partition of each player's moves into similarity classes. A valuation of a player is a real-valued function on the set of her similarity classes. In this equilibrium each player's strategy is optimal in the sense that at each of her nodes, a player chooses a move that belongs to a class with maximum valuation. The valuation of each player is consistent with the strategy profile in the sense that the valuation of a similarity class is the player's expected payoff, given that the path (induced by the strategy profile) intersects the similarity class. The solution concept is applied to decision problems and multi-player extensive form games. It is contrasted with existing solution concepts. The valuation approach is next applied to stopping games, in which non-terminal moves form a single similarity class, and we note that the behaviors obtained echo some biases observed experimentally. Finally, we tentatively suggest a way of endogenizing the similarity partitions in which moves are categorized according to how well they perform relative to the expected equilibrium value, interpreted as the aspiration level.
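    The consistency requirement can be made concrete in a one-player stopping problem where every "continue" move lies in a single similarity class: the class valuation v must equal the expected payoff given that the realized path uses a continue move. The sketch below finds an approximately consistent v by simulation; the stop payoffs, tremble rate, and iteration scheme are illustrative assumptions.

```python
import random

# Valuation-equilibrium sketch for a one-player stopping problem in which
# all "continue" moves form one similarity class. Small trembles (eps) keep
# the conditioning on "the path uses a continue move" well defined.

random.seed(0)
stop_pay = [1.0, 4.0, 2.0, 5.0, 3.0]   # hypothetical payoff from stopping at
eps = 0.05                              # node t; continuing past the last node pays 0

def play(v):
    """Play once, choosing the higher-valued class at each node (with trembles).
    Returns (payoff, whether a continue move was used)."""
    used = False
    for s in stop_pay:
        stop = s >= v
        if random.random() < eps:       # tremble: invert the intended choice
            stop = not stop
        if stop:
            return s, used
        used = True
    return 0.0, used

v = 0.0
for _ in range(100):                    # damped fixed-point iteration on v
    samples = [p for p, used in (play(v) for _ in range(2000)) if used]
    if samples:
        v = 0.5 * v + 0.5 * sum(samples) / len(samples)
print('approximately consistent valuation of the continue class:', v)
```

    Here v settles near 4.0: given that the player continues at all, she typically stops at the first node whose stop payoff matches the valuation, so the consistent valuation of "continue" coincides with that payoff.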