    An Experimental Study of Costly Coordination

    This paper reports data for coordination game experiments with random matching. The experimental design is based on changes in an effort-cost parameter, which do not alter the set of Nash equilibria, nor do they alter the predictions of dynamic adjustment theories based on imitation or best responses to others' decisions. As would be expected, however, increases in effort cost result in reduced effort levels. Average behavior in the final periods is consistent with a one-parameter stochastic generalization of the Nash equilibrium that is calculated by maximizing a "stochastic potential function." The noise parameter estimated from the initial two-person, minimum-effort games is used to predict behavior in subsequent experiments with three-person games, using both minimum- and median-effort payoff structures.

    Keywords: coordination games, laboratory experiments, stochastic potential, logit equilibrium, bounded rationality, minimum effort game, median effort game
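
    The fixed point described here is easy to approximate numerically. The sketch below is an illustration, not the authors' estimation code: it discretizes a two-player minimum-effort game and iterates a logit best response until the choice distribution reproduces itself. The effort grid, the noise parameter lam, and the two cost values are assumptions made for the demonstration.

        import numpy as np

        def symmetric_logit_eq(c, efforts, lam=5.0, n_iter=500, damp=0.5):
            """Damped fixed-point iteration p(e) ~ exp(lam * U(e|p)) for a
            two-player minimum-effort game with effort cost c (illustrative)."""
            p = np.ones_like(efforts) / len(efforts)       # start from uniform play
            for _ in range(n_iter):
                # Expected payoff of effort e against an opponent drawing e' ~ p:
                # U(e) = E[min(e, e')] - c * e
                U = np.array([(np.minimum(e, efforts) * p).sum() - c * e
                              for e in efforts])
                q = np.exp(lam * (U - U.max()))            # stabilized logit response
                p = damp * p + (1 - damp) * q / q.sum()    # damped update
            return p

        efforts = np.linspace(1.0, 2.0, 41)                # hypothetical effort grid
        for c in (0.25, 0.75):                             # low vs. high effort cost
            p = symmetric_logit_eq(c, efforts)
            print(f"c = {c:.2f}: mean equilibrium effort = {(efforts * p).sum():.3f}")

    Raising the effort cost c shifts the equilibrium distribution toward lower efforts even though the set of Nash equilibria is unchanged, which is the comparative static reported above.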

    Penalty-regulated dynamics and robust learning procedures in games

    Starting from a heuristic learning scheme for N-person games, we derive a new class of continuous-time learning dynamics consisting of a replicator-like drift adjusted by a penalty term that renders the boundary of the game's strategy space repelling. These penalty-regulated dynamics are equivalent to players keeping an exponentially discounted aggregate of their on-going payoffs and then using a smooth best response to pick an action based on these performance scores. Owing to this inherent duality, the proposed dynamics satisfy a variant of the folk theorem of evolutionary game theory and they converge to (arbitrarily precise) approximations of Nash equilibria in potential games. Motivated by applications to traffic engineering, we exploit this duality further to design a discrete-time, payoff-based learning algorithm which retains these convergence properties and only requires players to observe their in-game payoffs; moreover, the algorithm remains robust in the presence of stochastic perturbations and observation errors, and it does not require any synchronization between players.

    Comment: 33 pages, 3 figures
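
    The score-based dual described above can be illustrated in a few lines. The toy below simplifies by letting each player see the payoff of every action (the paper's algorithm needs only realized in-game payoffs); the 2x2 common-interest game and the parameters delta and eta are choices made here, not the paper's.

        import numpy as np

        def logit_choice(y, eta):
            """Smooth best response to a vector of performance scores y."""
            z = np.exp(eta * (y - y.max()))
            return z / z.sum()

        A = np.array([[2.0, 0.0],           # common-interest coordination game:
                      [0.0, 1.0]])          # both players receive A[i, j]
        y1, y2 = np.zeros(2), np.zeros(2)   # exponentially discounted payoff scores
        delta, eta = 0.05, 2.0              # aggregation step and choice sensitivity
        for _ in range(2000):
            x1, x2 = logit_choice(y1, eta), logit_choice(y2, eta)
            y1 += delta * (A @ x2 - y1)     # discounted aggregate of ongoing payoffs
            y2 += delta * (A.T @ x1 - y2)
        print(x1, x2)                       # ~ concentrated on the efficient equilibrium

    Since the common-interest game is a potential game, the mixed strategies settle near an approximate Nash equilibrium, in line with the convergence statement above.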

    Riemannian game dynamics

    We study a class of evolutionary game dynamics defined by balancing a gain determined by the game's payoffs against a cost of motion that captures the difficulty with which the population moves between states. Costs of motion are represented by a Riemannian metric, i.e., a state-dependent inner product on the set of population states. The replicator dynamics and the (Euclidean) projection dynamics are the archetypal examples of the class we study. Like these representative dynamics, all Riemannian game dynamics satisfy certain basic desiderata, including positive correlation and global convergence in potential games. Moreover, when the underlying Riemannian metric satisfies a Hessian integrability condition, the resulting dynamics preserve many further properties of the replicator and projection dynamics. We examine the close connections between Hessian game dynamics and reinforcement learning in normal form games, extending and elucidating a well-known link between the replicator dynamics and exponential reinforcement learning.

    Comment: 47 pages, 12 figures; added figures and further simplified the derivation of the dynamics
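
    The archetypal example is easy to integrate directly. Below is a minimal sketch (illustrative payoff matrix, initial state, and step size) of the Euler-discretized replicator dynamics in a single-population potential game; with a symmetric matrix A, F(x) = x·Ax/2 is a potential that the trajectory ascends.

        import numpy as np

        A = np.array([[1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0],
                      [0.0, 0.0, 3.0]])     # symmetric payoffs => potential game

        x = np.array([0.2, 0.3, 0.5])       # interior population state
        h = 0.01                             # Euler step size
        for _ in range(5000):
            u = A @ x                        # payoff vector at the current state
            x = x + h * x * (u - x @ u)      # replicator vector field
            x = np.maximum(x, 1e-12); x /= x.sum()   # numerical safeguard
        print(x)                             # ~ (0, 0, 1), the potential maximizer

    Positive correlation is visible in the update itself: mass moves toward strategies whose payoff exceeds the population average x·u.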

    On the robustness of learning in games with stochastically perturbed payoff observations

    Motivated by the scarcity of accurate payoff feedback in practical applications of game theory, we examine a class of learning dynamics where players adjust their choices based on past payoff observations that are subject to noise and random disturbances. First, in the single-player case (corresponding to an agent trying to adapt to an arbitrarily changing environment), we show that the stochastic dynamics under study lead to no regret almost surely, irrespective of the noise level in the player's observations. In the multi-player case, we find that dominated strategies become extinct and we show that strict Nash equilibria are stochastically stable and attracting; conversely, if a state is stable or attracting with positive probability, then it is a Nash equilibrium. Finally, we provide an averaging principle for 2-player games, and we show that in zero-sum games with an interior equilibrium, time averages converge to Nash equilibrium for any noise level.

    Comment: 36 pages, 4 figures
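
    The single-player no-regret property is easy to check by simulation. The sketch below runs an exponential-weights score update with a vanishing step size against a fixed environment; the payoff vector and the noise level sigma are assumptions for the demonstration. Despite the heavily perturbed observations, the time-averaged regret shrinks toward zero as T grows.

        import numpy as np

        rng = np.random.default_rng(1)
        T, K = 20000, 3
        payoffs = np.array([0.2, 0.5, 0.8])      # fixed true payoffs (assumed)
        sigma = 1.0                               # observation noise level
        y, realized = np.zeros(K), 0.0
        for t in range(1, T + 1):
            x = np.exp(y - y.max()); x /= x.sum()               # logit choice map
            v_hat = payoffs + sigma * rng.standard_normal(K)    # noisy payoff feedback
            y += v_hat / np.sqrt(t)                             # vanishing-step update
            realized += payoffs @ x
        print(f"average regret = {payoffs.max() - realized / T:.4f}")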

    Mean-Field-Type Games in Engineering

    A mean-field-type game is a game in which the instantaneous payoffs and/or the state dynamics functions involve not only the state and the action profile but also the joint distributions of state-action pairs. This article presents some engineering applications of mean-field-type games including road traffic networks, multi-level building evacuation, millimeter wave wireless communications, distributed power networks, virus spread over networks, virtual machine resource management in cloud networks, synchronization of oscillators, energy-efficient buildings, online meeting and mobile crowdsensing.

    Comment: 84 pages, 24 figures, 183 references. To appear in AIMS 201
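
    In notation assumed here (the article may use different symbols), a representative player's problem in a mean-field-type game has the form

        \min_{u}\ \mathbb{E}\Big[\, g\big(x_T, \mathcal{D}_{x_T}\big) + \int_0^T r\big(x_t, u_t, \mathcal{D}_{(x_t, u_t)}\big)\, dt \,\Big],
        \qquad dx_t = b\big(x_t, u_t, \mathcal{D}_{(x_t, u_t)}\big)\, dt + \sigma\, dW_t,

    where \mathcal{D}_{(x_t, u_t)} denotes the joint distribution of the state-action pair; its appearance inside the running cost r and the drift b is what distinguishes this class from classical stochastic games.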

    The Logit Equilibrium: A Perspective on Intuitive Behavioral Anomalies

    This paper considers a class of models in which rank-based payoffs are sensitive to small amounts of noise in decision making. Examples include auction, price-competition, coordination, and location games. Observed laboratory behavior in these games is often responsive to asymmetric costs associated with deviations from the Nash equilibrium. These payoff asymmetry effects are incorporated in an approach that introduces noisy behavior via probabilistic choice. In equilibrium, behavior is characterized by a probability distribution that satisfies a "rational expectations" consistency condition: the beliefs that determine players' expected payoffs match the decision distributions that arise from applying a logit probabilistic choice function to those expected payoffs. We prove existence of a unique, symmetric logit (quantal response) equilibrium and derive comparative statics results. The paper provides a unified perspective on many recent laboratory studies of games in which Nash equilibrium predictions are inconsistent with both intuition and experimental evidence.

    Keywords: logit equilibrium, quantal response equilibrium, probabilistic choice, auctions
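
    Written out explicitly (in notation assumed here; the paper works with continuous decision densities), the consistency condition is the fixed point

        f(x) \;=\; \frac{\exp\big(\pi^{e}(x, f)/\mu\big)}{\int \exp\big(\pi^{e}(y, f)/\mu\big)\, dy},

    where \pi^{e}(x, f) is the expected payoff of decision x when the other players' decisions are drawn from f, and \mu > 0 is the error parameter: as \mu \to 0 the density collapses onto best responses and Nash behavior is recovered.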

    Acknowledging Misspecification in Macroeconomic Theory

    We explore methods for confronting model misspecification in macroeconomics. We construct dynamic equilibria in which private agents and policy makers recognize that models are approximations. We explore two generalizations of rational expectations equilibria. In one of these equilibria, decision makers use dynamic evolution equations that are imperfect statistical approximations, and in the other misspecification is impossible to detect even from infinite samples of time-series data. In the first of these equilibria, decision rules are tailored to be robust to the allowable statistical discrepancies. Using frequency domain methods, we show that robust decision makers treat model misspecification like time-series econometricians.
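
    A hedged sketch of the kind of problem meant here, in the standard multiplier form (notation assumed, not taken from the paper): the decision maker plays against a fictitious adversary who distorts the shock process, with distortions priced by a penalty parameter \theta,

        \max_{\{u_t\}}\ \min_{\{w_{t+1}\}}\ \sum_{t=0}^{\infty} \beta^{t} \Big( r(x_t, u_t) + \beta\, \theta\, w_{t+1}' w_{t+1} \Big),
        \qquad x_{t+1} = A x_t + B u_t + C\,(\varepsilon_{t+1} + w_{t+1}),

    where w_{t+1} is the misspecification distortion and \theta indexes the size of the statistical discrepancies the decision rule must tolerate; letting \theta \to \infty removes the adversary and recovers the rational expectations benchmark.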

    Of Ants and Voters: Maximum Entropy Prediction of Agent-Based Models with Recruitment

    Maximum entropy predictions are made for the Kirman ant model as well as the Abrams-Strogatz model of language competition, also known as the voter model. In both cases the maximum entropy methodology provides good predictions of the limiting distribution of states, as was already the case for the Schelling model of segregation. As an additional contribution, the analysis of the models reveals the key role played by relative entropy and the model in controlling the time horizon of the prediction.
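
    For readers who want to see what is being predicted, the sketch below simulates Kirman's recruitment model and computes its exact limiting distribution from detailed balance, as a stand-in for the paper's maximum entropy construction; the parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        N, eps, mu = 50, 0.002, 0.3        # colony size, self-conversion, recruitment

        def rates(k):
            """Up/down transition probabilities for k ants at food source A."""
            up = (1 - k / N) * (eps + mu * k / (N - 1))
            down = (k / N) * (eps + mu * (N - k) / (N - 1))
            return up, down

        k, counts = N // 2, np.zeros(N + 1)          # long run of the chain
        for _ in range(500_000):
            up, down = rates(k)
            r = rng.random()
            if r < up:
                k += 1
            elif r < up + down:
                k -= 1
            counts[k] += 1

        pi = np.ones(N + 1)                          # stationary law via detailed balance
        for j in range(N):
            pi[j + 1] = pi[j] * rates(j)[0] / rates(j + 1)[1]
        pi /= pi.sum()
        print("max |simulated - stationary| =", np.abs(counts / counts.sum() - pi).max())

    With recruitment strong relative to self-conversion, the limiting distribution is bimodal (the colony herds on one source at a time); agreement between the histogram and the detailed-balance law improves with longer runs.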