
    The Logit-Response Dynamics

    We develop a characterization of stochastically stable states for the logit-response learning dynamics in games, with arbitrary specification of revision opportunities. The result allows us to show convergence to the set of Nash equilibria in the class of best-response potential games and the failure of the dynamics to select potential maximizers beyond the class of exact potential games. We also study to which extent equilibrium selection is robust to the specification of revision opportunities. Our techniques can be extended and applied to a wide class of learning dynamics in games.
    Keywords: Learning in games, logit-response dynamics, best-response potential games
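The logit-response dynamics described above can be illustrated with a minimal simulation. The sketch below assumes a hypothetical 2x2 identical-interest game (an exact potential game, with the potential maximized at action profile (1, 1)): each period one randomly chosen player receives a revision opportunity and picks an action with probability proportional to exp(payoff / η). The game, payoffs, and parameter values are illustrative, not from the paper.

```python
import math
import random

# Hypothetical 2x2 identical-interest game: matching on action 0 pays 2,
# matching on action 1 pays 3, mismatching pays 0. The exact potential
# equals the common payoff and is maximized at the profile (1, 1).
PAYOFF = [[2.0, 0.0],
          [0.0, 3.0]]

def logit_choice(payoffs, eta):
    """Pick an action with probability proportional to exp(payoff / eta)."""
    weights = [math.exp(u / eta) for u in payoffs]
    r = random.random() * sum(weights)
    for a, w in enumerate(weights):
        r -= w
        if r <= 0:
            return a
    return len(weights) - 1

def simulate(eta=0.5, steps=20000, seed=0):
    """Run asynchronous logit-response play; return the fractions of time
    spent at the profiles (1, 1) and (0, 0)."""
    random.seed(seed)
    actions = [0, 0]          # start away from the potential maximizer
    count11 = count00 = 0
    for _ in range(steps):
        i = random.randrange(2)                      # revision opportunity
        payoffs = [PAYOFF[a][actions[1 - i]] for a in (0, 1)]
        actions[i] = logit_choice(payoffs, eta)
        if actions == [1, 1]:
            count11 += 1
        elif actions == [0, 0]:
            count00 += 1
    return count11 / steps, count00 / steps
```

Consistent with the classical result for exact potential games, the process spends most of its time at the potential maximizer (1, 1), and this concentration sharpens as η shrinks.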

    Orders of limits for stationary distributions, stochastic dominance, and stochastic stability

    A population of agents recurrently plays a two-strategy population game. When an agent receives a revision opportunity, he chooses a new strategy using a noisy best response rule that satisfies mild regularity conditions; best response with mutations, logit choice, and probit choice are all permitted. We study the long run behavior of the resulting Markov process when the noise level η is small and the population size N is large. We obtain a precise characterization of the asymptotics of the stationary distributions μ^(N,η) as η approaches zero and N approaches infinity, and we establish that these asymptotics are the same for either order of limits and for all simultaneous limits. In general, different noisy best response rules can generate different stochastically stable states. To obtain a robust selection result, we introduce a refinement of risk dominance called "stochastic dominance", and we prove that coordination on a given strategy is stochastically stable under every noisy best response rule if and only if that strategy is stochastically dominant.
    Keywords: Evolutionary game theory, stochastic stability, equilibrium selection
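The setting above — a two-strategy population game with noisy best responses — induces a birth-death Markov chain on the number of agents playing one strategy, whose stationary distribution can be computed exactly via detailed balance. The sketch below uses a hypothetical linear coordination game (payoff 3x to strategy A against a fraction x of A-players, 2(1 - x) to strategy B) under the logit rule; the payoffs and parameters are illustrative, not the paper's.

```python
import math

def logit_prob_A(k, N, eta, a=3.0, b=2.0):
    """Probability a revising agent chooses A when k of N agents play A,
    under the logit choice rule."""
    x = k / N
    return 1.0 / (1.0 + math.exp(-(a * x - b * (1 - x)) / eta))

def stationary_distribution(N=100, eta=0.1):
    """Exact stationary distribution of the birth-death chain on
    k = number of A-players, via detailed balance:
    pi(k+1) / pi(k) = birth(k) / death(k+1)."""
    log_pi = [0.0]
    for k in range(N):
        p_up = (N - k) / N * logit_prob_A(k, N, eta)              # B-player switches to A
        p_down = (k + 1) / N * (1 - logit_prob_A(k + 1, N, eta))  # A-player switches to B
        log_pi.append(log_pi[-1] + math.log(p_up) - math.log(p_down))
    m = max(log_pi)                       # normalize in log space for stability
    w = [math.exp(v - m) for v in log_pi]
    s = sum(w)
    return [v / s for v in w]
```

With a = 3 and b = 2 strategy A is risk dominant (the mixed equilibrium sits at x* = 0.4), and for small η with N = 100 the stationary distribution places essentially all mass on the all-A state — the kind of equilibrium selection the paper studies across orders of limits in η and N.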

    Potential games in volatile environments

    This paper studies the co-evolution of networks and play in the context of finite population potential games. Action revision, link creation and link destruction are combined in a continuous-time Markov process. I derive the unique invariant distribution of this process in closed form, as well as the marginal distribution over action profiles and the conditional distribution over networks. It is shown that the equilibrium interaction topology is an inhomogeneous random graph. Furthermore, a characterization of the set of stochastically stable states is provided, generalizing existing results to models with endogenous interaction structures.

    Agglomeration under Forward-Looking Expectations: Potentials and Global Stability

    This paper considers a class of migration dynamics with forward-looking agents in a multi-country solvable variant of the core-periphery model of Krugman (Journal of Political Economy 99 (1991)). We find that, under a symmetric externality assumption, our static model admits a potential function, which allows us to identify a stationary state that is uniquely absorbing and globally accessible under the perfect foresight dynamics whenever the degree of friction in relocation decisions is sufficiently small. In particular, when trade barriers are low enough, full agglomeration in the country with the highest barrier is the unique stable state for small frictions. New aspects in trade and tax policy that arise due to forward-looking behavior are discussed.
    Keywords: economic geography; agglomeration; perfect foresight dynamics; history versus expectations; stability; potential game; equilibrium selection

    Distributed stochastic optimization via matrix exponential learning

    In this paper, we investigate a distributed learning scheme for a broad class of stochastic optimization problems and games that arise in signal processing and wireless communications. The proposed algorithm relies on the method of matrix exponential learning (MXL) and only requires locally computable gradient observations that are possibly imperfect and/or obsolete. To analyze it, we introduce the notion of a stable Nash equilibrium and we show that the algorithm is globally convergent to such equilibria - or locally convergent when an equilibrium is only locally stable. We also derive an explicit linear bound for the algorithm's convergence speed, which remains valid under measurement errors and uncertainty of arbitrarily high variance. To validate our theoretical analysis, we test the algorithm in realistic multi-carrier/multiple-antenna wireless scenarios where several users seek to maximize their energy efficiency. Our results show that learning allows users to attain a net increase between 100% and 500% in energy efficiency, even under very high uncertainty.
    Comment: 31 pages, 3 figures
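The core MXL iteration — accumulate (noisy) gradients in a dual matrix Y and map back to the feasible set through the trace-normalized matrix exponential X = exp(Y) / tr(exp(Y)) — can be sketched on a toy problem. The example below maximizes tr(AX) over density matrices (positive semidefinite, unit trace) under noisy gradient observations; the objective, step-size schedule, and noise model are illustrative assumptions, not the paper's wireless setting.

```python
import numpy as np

def expm_sym(Y):
    """Matrix exponential of a symmetric matrix via eigendecomposition,
    with a spectral shift for numerical stability."""
    w, V = np.linalg.eigh(Y)
    w = w - w.max()
    return (V * np.exp(w)) @ V.T

def mxl(A, steps=200, gamma=0.5, noise=0.1, seed=0):
    """Matrix exponential learning for max tr(A X) over X >= 0, tr X = 1.
    Gradient observations are corrupted by symmetric Gaussian noise,
    mimicking imperfect feedback; the step size gamma / sqrt(t) vanishes
    so the noise averages out."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = np.zeros((n, n))
    X = np.eye(n) / n
    for t in range(1, steps + 1):
        E = rng.normal(scale=noise, size=(n, n))
        G = A + (E + E.T) / 2            # noisy gradient of tr(A X)
        Y += (gamma / np.sqrt(t)) * G    # dual accumulation step
        P = expm_sym(Y)
        X = P / np.trace(P)              # exponential map back to the feasible set
    return X
```

For this concave toy problem the iterates concentrate on the projector onto the top eigenvector of A, so tr(AX) approaches the largest eigenvalue despite the noisy feedback — a one-user stand-in for the equilibrium convergence the paper establishes.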