
    Equilibrium Selection Through Incomplete Information in Coordination Games: An Experimental Study

    We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment, each subject receives a noisy signal about the true payoffs. This game (inspired by the “global” games of Carlsson and van Damme, Econometrica, 61, 989–1018, 1993) has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (and thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. In the baseline game, the behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that this behavior can be explained by learning. To test this hypothesis, we use a different game with incomplete information, related to a complete information game where learning and prior experiments suggest a different behavior. Indeed, in the second treatment, the behavior did not converge to equilibrium within 50 periods in some of the sessions. We also run both games under complete information. The results are sufficiently similar between complete and incomplete information to suggest that risk-dominance is also an important part of the explanation.
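    The selection criterion the experiment's outcomes track is easy to check directly: in a symmetric 2x2 coordination game, the risk-dominant equilibrium (Harsanyi and Selten) is the one with the larger product of the players' deviation losses. A minimal sketch, with hypothetical stag-hunt payoffs chosen for illustration:

```python
# Risk dominance in a symmetric 2x2 coordination game (Harsanyi-Selten):
# the equilibrium with the larger product of deviation losses risk-dominates.
# The payoff numbers below are hypothetical, for illustration only.

def risk_dominant(payoff):
    """payoff[(r, c)] = row player's payoff; the game is symmetric.
    Returns the action ('A' or 'B') whose equilibrium risk-dominates."""
    a = payoff[('A', 'A')]   # payoff when both coordinate on A
    b = payoff[('B', 'B')]   # payoff when both coordinate on B
    c = payoff[('A', 'B')]   # payoff to playing A against B
    d = payoff[('B', 'A')]   # payoff to playing B against A
    loss_A = (a - d) ** 2    # product of both players' deviation losses at (A, A)
    loss_B = (b - c) ** 2    # product of deviation losses at (B, B)
    return 'A' if loss_A > loss_B else 'B'

# Stag-hunt-like example: (A, A) is payoff-dominant, yet (B, B) risk-dominates.
game = {('A', 'A'): 9, ('A', 'B'): 0, ('B', 'A'): 8, ('B', 'B'): 7}
print(risk_dominant(game))  # -> B
```

Here the deviation loss at (A, A) is 9 - 8 = 1 per player versus 7 - 0 = 7 at (B, B), so the safer equilibrium B risk-dominates despite A paying more when coordination succeeds.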

    Convergence to Equilibrium of Logit Dynamics for Strategic Games

    We present the first general bounds on the mixing time of the Markov chain associated to the logit dynamics for wide classes of strategic games. The logit dynamics with inverse noise beta describes the behavior of a complex system whose individual components act selfishly and keep responding according to some partial ("noisy") knowledge of the system, where the capacity of the agent to know the system and compute her best move is measured by the inverse of the parameter beta. In particular, we prove nearly tight bounds for potential games and games with dominant strategies. Our results show that, for potential games, the mixing time is upper and lower bounded by an exponential in the inverse of the noise and in the maximum potential difference. Instead, for games with dominant strategies, the mixing time cannot grow arbitrarily with the inverse of the noise. Finally, we refine our analysis for a subclass of potential games called graphical coordination games, a class of games that have been previously studied in physics and, more recently, in computer science in the context of diffusion of new technologies. We give evidence that the mixing time of the logit dynamics for these games strongly depends on the structure of the underlying graph. We prove that the mixing time of the logit dynamics for these games can be upper bounded by a function that is exponential in the cutwidth of the underlying graph and in the inverse of the noise. Moreover, we consider two specific and popular network topologies, the clique and the ring. For games played on a clique we prove an almost matching lower bound on the mixing time of the logit dynamics that is exponential in the inverse of the noise and in the maximum potential difference, while for games played on a ring we prove that the time of convergence of the logit dynamics to its stationary distribution is significantly shorter.
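    The logit revision rule the abstract describes is straightforward to simulate. Below is a minimal sketch (not the paper's analysis): one-at-a-time logit dynamics for a two-action graphical coordination game on a ring, where the ring size, step count, and noise parameter are hypothetical choices for illustration:

```python
import math, random

random.seed(0)

def logit_choice(utils, beta):
    """Sample an action with probability proportional to exp(beta * utility)."""
    weights = [math.exp(beta * u) for u in utils]
    r = random.uniform(0, sum(weights))
    for a, w in enumerate(weights):
        r -= w
        if r <= 0:
            return a
    return len(weights) - 1

def ring_payoff(i, state):
    """Coordination payoff: +1 for each of the two ring neighbours you match."""
    n = len(state)
    return (state[i] == state[(i - 1) % n]) + (state[i] == state[(i + 1) % n])

def logit_dynamics(n, steps, beta):
    """One randomly chosen player at a time revises via a logit (noisy best)
    response to the current profile; this is the Markov chain in question."""
    state = [random.randrange(2) for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)
        utils = []
        for a in range(2):
            trial = state[:]
            trial[i] = a
            utils.append(ring_payoff(i, trial))
        state[i] = logit_choice(utils, beta)
    return state

final = logit_dynamics(n=20, steps=5000, beta=4.0)
print(final)  # at high beta the profile is typically near consensus
```

At small beta the chain mixes quickly toward a near-uniform distribution; at large beta it spends long stretches near the two consensus profiles, which is the slow-mixing behavior the bounds quantify.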

    Evolutionary Poisson Games for Controlling Large Population Behaviors

    Emerging applications in engineering such as crowd-sourcing and (mis)information propagation involve a large population of heterogeneous users or agents in a complex network who strategically make dynamic decisions. In this work, we establish an evolutionary Poisson game framework to capture the random, dynamic and heterogeneous interactions of agents in a holistic fashion, and design mechanisms to control their behaviors to achieve a system-wide objective. We use the antivirus protection challenge in cyber security to motivate the framework, where each user in the network can choose whether or not to adopt the software. We introduce the notion of evolutionary Poisson stable equilibrium for the game, and show its existence and uniqueness. Online algorithms are developed using the techniques of stochastic approximation coupled with the population dynamics, and they are shown to converge to the optimal solution of the controller problem. Numerical examples are used to illustrate and corroborate our results.
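    As a toy illustration of the mean population dynamics behind such adoption problems (not the paper's algorithm), consider a smoothed best-response update for the antivirus game, where the adoption cost `c`, infection loss `L`, and logit sharpness `beta` are hypothetical parameters:

```python
import math

def mean_dynamic(c=0.2, L=1.0, beta=50.0, step=0.05, iters=2000):
    """Smoothed best-response mean dynamic for an adopt/not-adopt game.
    x = fraction of the population running the antivirus software.
    All parameter values are illustrative assumptions."""
    x = 0.5
    for _ in range(iters):
        u_adopt = -c                # fixed cost of running the software
        u_skip = -(1.0 - x) * L     # expected infection loss; worse when few protect
        # logit share choosing to adopt, given current population state
        p = 1.0 / (1.0 + math.exp(-beta * (u_adopt - u_skip)))
        x += step * (p - x)         # stochastic-approximation style update
    return x

x_star = mean_dynamic()
# As beta grows, x_star approaches the interior equilibrium x* = 1 - c/L = 0.8,
# where adopting and not adopting yield equal expected payoffs.
print(round(x_star, 2))
```

The fixed point balances the two payoffs: more adopters lower the infection risk for everyone, which in turn reduces the incentive to adopt, an interior-equilibrium structure typical of these population games.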

    Silicon Burning I: Neutronization and the Physics of Quasi-Equilibrium

    As the ultimate stage of stellar nucleosynthesis, and the source of the iron peak nuclei, silicon burning is important to our understanding of the evolution of massive stars and supernovae. Our reexamination of silicon burning, using results gleaned from simulation work done with a large nuclear network (299 nuclei and more than 3000 reactions) and from independent calculations of equilibrium abundance distributions, offers new insights into the quasi-equilibrium mechanism and the approach to nuclear statistical equilibrium. We find that the degree to which the matter has been neutronized is of great importance, not only to the final products but also to the rate of energy generation and the membership of the quasi-equilibrium groups. A small increase in the global neutronization results in much larger free neutron fluences, increasing the abundances of more neutron-rich nuclei. As a result, incomplete silicon burning results in neutron richness among the isotopes of the iron peak much larger than the global neutronization would indicate. Finally, we briefly discuss the limitations and pitfalls of models for silicon burning currently employed within hydrodynamic models. In a forthcoming paper we will present a new approximation to the full nuclear network which preserves the most important features of the large nuclear network calculations at a significant improvement in computational speed. Such improved methods are ideally suited for hydrodynamic calculations which involve the production of iron peak nuclei, where the larger network calculation proves unmanageable. (44 pages of TeX with 25 PostScript figures, uses psfig.sty; to appear in The Astrophysical Journal, April 1, 1996. A complete PostScript version of the paper is also available from http://tycho.as.utexas.edu/~raph/Publications.htm)

    A survey of random processes with reinforcement

    The models surveyed include generalized Pólya urns, reinforced random walks, interacting urn models, and continuous reinforced processes. Emphasis is on methods and results, with sketches provided of some proofs. Applications are discussed in statistics, biology, economics and a number of other areas. Published at http://dx.doi.org/10.1214/07-PS094 in Probability Surveys (http://www.i-journals.org/ps/) by the Institute of Mathematical Statistics (http://www.imstat.org).
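    The basic Pólya urn is the simplest reinforced process in this family and takes a few lines to simulate. Its colour fraction converges almost surely, but the limit itself is random (starting from one ball of each colour, the limit is uniform on (0, 1)). A minimal sketch:

```python
import random

random.seed(1)

def polya_urn(steps, red=1, black=1):
    """Classic Polya urn: draw a ball at random, return it together with one
    extra ball of the same colour. Returns the final fraction of red balls."""
    for _ in range(steps):
        if random.random() < red / (red + black):
            red += 1    # drew red: reinforce red
        else:
            black += 1  # drew black: reinforce black
    return red / (red + black)

# Each run converges, but different runs converge to different limits:
limits = [polya_urn(2000) for _ in range(5)]
print([round(x, 2) for x in limits])
```

Running this a few times makes the survey's central phenomenon concrete: reinforcement locks each trajectory onto its own random limiting proportion rather than a single deterministic one.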

    Contagion through learning

    We study learning in a large class of complete information normal form games. Players continually face new strategic situations and must form beliefs by extrapolation from similar past situations. We characterize the long-run outcomes of learning in terms of iterated dominance in a related incomplete information game with subjective priors. The use of extrapolations in learning may generate contagion of actions across games even if players learn only from games with payoffs very close to the current ones. Contagion may lead to unique long-run outcomes where multiplicity would occur if players learned through repeatedly playing the same game. The process of contagion through learning is formally related to contagion in global games, although the outcomes generally differ. Keywords: similarity, learning, contagion, case-based reasoning, global games.

    Coordination and Social Learning

    This paper studies the interaction between coordination and social learning in a dynamic regime change game. Social learning provides public information to which players overreact due to the coordination motive. So coordination affects the aggregation of private signals through players' optimal choices. Such endogenous provision of public information results in inefficient herds with positive probability, even though private signals have an unbounded likelihood ratio property. Therefore, social learning is a source of coordination failure. An extension shows that if players could individually learn, inefficient herding disappears, and thus coordination is successful almost surely. This paper also demonstrates that along the same history, the belief convergence differs in different equilibria. Finally, social learning can lead to higher social welfare when the fundamentals are bad. Keywords: coordination, social learning, inefficient herding, dynamic global game, common belief.

    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play) we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
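    Standard smooth (stochastic) fictitious play, the baseline that the moderated algorithm above improves on, can be sketched in a few lines; the stag-hunt payoffs and the logit parameter `beta` below are hypothetical choices for illustration, not the paper's setup:

```python
import math, random

random.seed(0)

# Hypothetical stag-hunt payoffs for the row player (the game is symmetric):
# action 0 = stag (payoff-dominant), action 1 = hare (risk-dominant).
PAYOFF = [[9.0, 0.0],
          [8.0, 7.0]]

def smooth_fictitious_play(steps=2000, beta=2.0):
    """Both players track empirical opponent frequencies and sample from a
    logit (smooth) best response rather than best-responding exactly."""
    counts = [[1.0, 1.0], [1.0, 1.0]]   # counts[i][a] = times player i chose a
    for _ in range(steps):
        actions = []
        for i in range(2):
            opp = counts[1 - i]
            belief = [c / sum(opp) for c in opp]          # empirical opponent mix
            utils = [sum(PAYOFF[a][b] * belief[b] for b in range(2))
                     for a in range(2)]
            p0 = 1.0 / (1.0 + math.exp(-beta * (utils[0] - utils[1])))
            actions.append(0 if random.random() < p0 else 1)
        for i, a in enumerate(actions):
            counts[i][a] += 1.0
    # empirical frequency of each action, per player
    return [[c / sum(row) for c in row] for row in counts]

freqs = smooth_fictitious_play()
print(freqs)  # play concentrates on the risk-dominated-avoiding (hare) equilibrium
```

Against uniform initial beliefs the expected payoff of hare (7.5) exceeds that of stag (4.5), so the empirical averages quickly lock onto the risk-dominant equilibrium; the paper's point is that replacing the empirical average with full Bayesian integration over opponent strategies makes escaping to the payoff-dominant equilibrium more likely.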