Interaction on Hypergraphs
Interaction on hypergraphs generalizes interaction on graphs, also known as pairwise local interaction. For games played on a hypergraph that are supermodular potential games, we study logit-perturbed best-response dynamics. We find that the associated stochastically stable states form a sublattice of the lattice of Nash equilibria, and we derive comparative statics results for the smallest and the largest stochastically stable state. In the special case of networking games, we obtain comparative statics results with respect to investment costs, for Nash equilibria of supermodular games as well as for Nash equilibria of submodular games.
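The logit-perturbed best-response dynamics described above can be sketched in simulation. Everything concrete here (the five-player hypergraph, the binary coordination payoffs, the noise level) is an illustrative assumption rather than the paper's model; the point is only that, at low noise, the process spends most of its time at the potential-maximizing equilibrium, i.e. the stochastically stable state.

```python
import math
import random

random.seed(0)

# Assumed example: 5 players, two 3-player hyperedges sharing player 1.
# Pairwise coordination payoffs inside each hyperedge come from the
# symmetric matrix A, so this is a (supermodular) potential game whose
# potential sums A[a_i][a_j] over co-members; all-ones maximizes it.
hyperedges = [(0, 1, 2), (1, 3, 4)]
n = 5
A = [[1, 0],
     [0, 2]]

def payoff(profile, i, action):
    """Player i's payoff from `action` against the current profile."""
    total = 0
    for e in hyperedges:
        if i in e:
            for j in e:
                if j != i:
                    total += A[action][profile[j]]
    return total

def logit_update(profile, beta):
    """Logit-perturbed best response: a randomly drawn player revises,
    choosing action a with probability proportional to exp(beta * payoff)."""
    i = random.randrange(n)
    w = [math.exp(beta * payoff(profile, i, a)) for a in (0, 1)]
    profile[i] = 1 if random.random() * (w[0] + w[1]) >= w[0] else 0

profile = [0] * n            # start at the potential-minimizing equilibrium
beta = 3.0                   # inverse noise level
steps = 30000
time_at_all_ones = 0
for t in range(steps):
    logit_update(profile, beta)
    if t >= steps // 2 and all(a == 1 for a in profile):
        time_at_all_ones += 1
share = time_at_all_ones / (steps - steps // 2)
print(round(share, 3))
```

At low noise the chain escapes the all-zeros equilibrium and then spends almost all of the second half of the run at all-ones, the state selected by stochastic stability.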
An experimental study of costly coordination
This paper reports data from coordination-game experiments with random matching. The experimental design is based on changes in an effort-cost parameter that alter neither the set of Nash equilibria nor the predictions of adjustment theories based on imitation or best-response dynamics. As expected, however, increasing the effort cost lowers effort levels. Maximization of a stochastic potential function, a concept that generalizes risk dominance to continuous games, predicts this reduction in effort. An error parameter estimated from initial two-person, minimum-effort games is used to predict behavior in other three-person coordination games.
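The stochastic-potential prediction can be illustrated on the minimum-effort game, which is an exact potential game with P(e) = min(e) - c·Σe_i: under logit (log-linear) updating, the stationary distribution is proportional to exp(β·P(e)), so raising the cost c shifts mass toward low-effort profiles. The effort grid, cost levels, and β below are illustrative assumptions, not the paper's parameters.

```python
import math
from itertools import product

def potential(profile, c):
    """Potential of the minimum-effort game: P(e) = min(e) - c * sum(e).
    A unilateral change in e_i moves u_i = min(e) - c*e_i and P equally."""
    return min(profile) - c * sum(profile)

def expected_effort(n_players, efforts, c, beta):
    """Mean per-player effort under the logit stationary distribution
    pi(e) proportional to exp(beta * P(e))."""
    w = {p: math.exp(beta * potential(p, c))
         for p in product(efforts, repeat=n_players)}
    z = sum(w.values())
    return sum((wt / z) * (sum(p) / n_players) for p, wt in w.items())

efforts = range(1, 8)   # effort grid 1..7 (assumed; the paper uses a continuous range)
low_cost = expected_effort(2, efforts, c=0.25, beta=2.0)
high_cost = expected_effort(2, efforts, c=0.75, beta=2.0)
print(round(low_cost, 2), round(high_cost, 2))
```

With two players the cutoff is c = 1/2: below it the potential is maximized at the high-effort profile, above it at the low-effort profile, so expected effort drops sharply as the cost crosses the cutoff.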
Dynamics in Near-Potential Games
Except for special classes of games, there is no systematic framework for analyzing the dynamical properties of multi-agent strategic interactions. Potential games are one such special but restrictive class of games that allow for tractable dynamic analysis. Intuitively, games that are "close" to a potential game should share similar properties. In this paper, we formalize and develop this idea by quantifying to what extent the dynamic features of potential games extend to "near-potential" games. We study convergence of three commonly studied classes of adaptive dynamics: discrete-time better/best response, logit response, and discrete-time fictitious play dynamics. For better/best response dynamics, we focus on the evolution of the sequence of pure strategy profiles and show that this sequence converges to a (pure) approximate equilibrium set, whose size is a function of the "distance" from a close potential game. We then study logit response dynamics and provide a characterization of the stationary distribution of this update rule in terms of the distance of the game from a close potential game and the corresponding potential function. We further show that the stochastically stable strategy profiles are pure approximate equilibria. Finally, we turn our attention to fictitious play and establish that the sequence of empirical frequencies of player actions converges to a neighborhood of (mixed) equilibria of the game, where the size of the neighborhood increases with the distance of the game to a potential game. Thus, our results suggest that games that are close to a potential game inherit the dynamical properties of potential games. Since a close potential game to a given game can be found by solving a convex optimization problem, our approach also provides a systematic framework for studying the convergence behavior of adaptive learning dynamics in arbitrary finite strategic form games.
Comment: 42 pages, 8 figures
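The better/best-response convergence result builds on the finite improvement property of exact potential games: every strict unilateral improvement raises the potential, so best-response paths cannot cycle. A minimal sketch on an assumed 3x3 two-player exact potential game:

```python
# An assumed 3x3 exact potential game: u1 and u2 differ from the potential
# phi only through terms depending on the opponent's action, so every
# unilateral payoff change equals the change in phi.
phi = [[0, 2, 1],
       [3, 1, 0],
       [2, 4, 5]]
u1 = [[phi[a][b] + b for b in range(3)] for a in range(3)]      # + h1(b)
u2 = [[phi[a][b] + 2 * a for b in range(3)] for a in range(3)]  # + h2(a)

def best_response_dynamics(a, b, max_rounds=50):
    """Alternating best responses; in a finite exact potential game every
    strict update raises phi, so the path must end at a pure Nash
    equilibrium (the finite improvement property)."""
    for _ in range(max_rounds):
        a_new = max(range(3), key=lambda x: u1[x][b])
        b_new = max(range(3), key=lambda y: u2[a_new][y])
        if (a_new, b_new) == (a, b):
            return a, b
        a, b = a_new, b_new
    return a, b

print(best_response_dynamics(0, 0))
```

Note that the dynamics stop at a local maximizer of the potential, which here is a pure Nash equilibrium but not necessarily the global potential maximizer.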
Dynamics in near-potential games
We consider discrete-time learning dynamics in finite strategic form games and show that games that are close to a potential game inherit many of the dynamical properties of potential games. We first study the evolution of the sequence of pure strategy profiles under better/best response dynamics. We show that this sequence converges to a (pure) approximate equilibrium set whose size is a function of the “distance” to a given nearby potential game. We then focus on logit response dynamics and provide a characterization of the limiting outcome in terms of the distance of the game to a given potential game and the corresponding potential function. Finally, we turn our attention to fictitious play and establish that in near-potential games the sequence of empirical frequencies of player actions converges to a neighborhood of (mixed) equilibria, where the size of the neighborhood increases with the distance to the set of potential games.
Best-Response Dynamics, Playing Sequences, and Convergence to Equilibrium in Random Games
We analyze the performance of the best-response dynamic across all normal-form games using a random games approach. The playing sequence -- the order in which players update their actions -- is essentially irrelevant in determining whether the dynamic converges to a Nash equilibrium in certain classes of games (e.g. in potential games) but, when evaluated across all possible games, convergence to equilibrium depends on the playing sequence in an extreme way. Our main asymptotic result shows that the best-response dynamic converges to a pure Nash equilibrium in a vanishingly small fraction of all (large) games when players take turns according to a fixed cyclic order. By contrast, when the playing sequence is random, the dynamic converges to a pure Nash equilibrium, if one exists, in almost all (large) games.
Comment: JEL codes: C62, C72, C73, D83. Keywords: best-response dynamics, equilibrium convergence, random games, learning models in games
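A small simulation in the spirit of the random-games approach can contrast the two playing sequences. The player count, action count, game count, and step cap below are assumptions, and this finite size only gestures at the asymptotic result:

```python
import random
from itertools import product

random.seed(0)

N, K = 3, 8  # players and actions per player (illustrative sizes)

def random_game():
    """A game with i.i.d. uniform payoffs: one payoff table per player,
    keyed by the full action profile."""
    return [{p: random.random() for p in product(range(K), repeat=N)}
            for _ in range(N)]

def best_response(u, profile, i):
    return max(range(K),
               key=lambda a: u[i][profile[:i] + (a,) + profile[i + 1:]])

def is_pne(u, profile):
    """Pure Nash equilibrium: every player is already best-responding."""
    return all(best_response(u, profile, i) == profile[i] for i in range(N))

def run(u, order, steps=200):
    """Iterate best responses under a playing sequence `order`; report
    whether a pure Nash equilibrium is reached within the step cap."""
    profile = tuple(random.randrange(K) for _ in range(N))
    for t in range(steps):
        i = order(t)
        a = best_response(u, profile, i)
        profile = profile[:i] + (a,) + profile[i + 1:]
        if is_pne(u, profile):
            return True
    return False

games = [random_game() for _ in range(300)]
frac_cyclic = sum(run(u, lambda t: t % N) for u in games) / len(games)
frac_random = sum(run(u, lambda t: random.randrange(N)) for u in games) / len(games)
print(frac_cyclic, frac_random)
```

With a fixed cyclic order the dynamic can get trapped in best-reply cycles, while a random playing sequence tends to find a pure equilibrium when one exists; the contrast sharpens as the game grows.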
Penalty-regulated dynamics and robust learning procedures in games
Starting from a heuristic learning scheme for N-person games, we derive a new class of continuous-time learning dynamics consisting of a replicator-like drift adjusted by a penalty term that renders the boundary of the game's strategy space repelling. These penalty-regulated dynamics are equivalent to players keeping an exponentially discounted aggregate of their ongoing payoffs and then using a smooth best response to pick an action based on these performance scores. Owing to this inherent duality, the proposed dynamics satisfy a variant of the folk theorem of evolutionary game theory, and they converge to (arbitrarily precise) approximations of Nash equilibria in potential games. Motivated by applications to traffic engineering, we exploit this duality further to design a discrete-time, payoff-based learning algorithm which retains these convergence properties and only requires players to observe their in-game payoffs; moreover, the algorithm remains robust in the presence of stochastic perturbations and observation errors, and it does not require any synchronization between players.
Comment: 33 pages, 3 figures
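The score-and-smooth-best-response duality can be sketched in a much-simplified, payoff-based form. The code below is not the authors' penalty-regulated algorithm: it tracks an exponentially discounted score per action, updated only from each player's realized payoff, and plays a logit (smooth) best response to the scores, in an assumed 2x2 coordination game (itself a potential game).

```python
import math
import random

random.seed(2)

# Assumed two-player coordination game; entries are (u1, u2).
U = [[(4, 4), (0, 0)],
     [(0, 0), (2, 2)]]

def smooth_best_response(scores, eta):
    """Logit choice: probabilities proportional to exp(score / eta)."""
    m = max(scores)
    w = [math.exp((s - m) / eta) for s in scores]
    z = sum(w)
    return [x / z for x in w]

def sample(p):
    return 0 if random.random() < p[0] else 1

# Each player keeps an exponentially discounted score per action, updated
# only from its own realized in-game payoff -- a simplified variant in the
# spirit of the paper's duality; the exact penalty-regulated scheme differs.
scores = [[0.0, 0.0], [0.0, 0.0]]
gamma, eta = 0.05, 0.2   # discount step and logit temperature (assumptions)
for _ in range(5000):
    probs = [smooth_best_response(scores[i], eta) for i in (0, 1)]
    acts = [sample(probs[0]), sample(probs[1])]
    for i in (0, 1):
        a = acts[i]
        scores[i][a] += gamma * (U[acts[0]][acts[1]][i] - scores[i][a])

final = [smooth_best_response(scores[i], eta) for i in (0, 1)]
print([round(p[0], 2) for p in final])
```

The update needs no synchronization and no knowledge of the opponent's play; after enough rounds both players place nearly all probability on the same action, an approximate Nash equilibrium of the coordination game.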