Mixed Nash Equilibria in Concurrent Terminal-Reward Games
We study mixed-strategy Nash equilibria in multiplayer deterministic concurrent games played on graphs, with terminal-reward payoffs (that is, absorbing states with a value for each player). We show that the existence of a constrained Nash equilibrium (where the constraint requires one player to receive maximal payoff) is undecidable with only three players and 0/1 rewards (i.e., reachability objectives). This contrasts with the undecidability result of Ummels and Wojtczak for turn-based games, which requires 14 players and general rewards. Our proof has several interesting consequences: (i) the undecidability of the existence of a Nash equilibrium with a constraint on social welfare; (ii) the undecidability of the existence of an (unconstrained) Nash equilibrium in concurrent games with terminal-reward payoffs.
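Why do concurrent games call for mixed strategies at all? Even 2x2 matching pennies has no pure equilibrium. A minimal sketch (illustrative only; unrelated to the paper's undecidability construction) computing the unique fully mixed equilibrium of a 2x2 zero-sum game in closed form:

```python
# Minimal sketch, unrelated to the paper's construction: closed-form
# mixed Nash equilibrium of a 2x2 zero-sum game, valid when the
# equilibrium is fully mixed (no saddle point in pure strategies).

def mixed_ne_2x2_zero_sum(A):
    """Row player's optimal mixing probability and the game value for a
    2x2 zero-sum game; A holds the row player's payoffs."""
    (a, b), (c, d) = A
    denom = a - b - c + d          # nonzero when the equilibrium is interior
    p = (d - c) / denom            # probability of playing the first row
    value = (a * d - b * c) / denom
    return p, value

# Matching pennies: no pure equilibrium; the unique mixed one is (1/2, 1/2).
p, v = mixed_ne_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, v)  # 0.5 0.0
```

The indifference condition (the column player's two actions yield the same expected payoff against `p`) gives the formula directly.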
Self-stabilizing uncoupled dynamics
Dynamics in a distributed system are self-stabilizing if they are guaranteed
to reach a stable state regardless of how the system is initialized. Game
dynamics are uncoupled if each player's behavior is independent of the other
players' preferences. Recognizing an equilibrium in this setting is a
distributed computational task. Self-stabilizing uncoupled dynamics, then, have
both resilience to arbitrary initial states and distribution of knowledge. We
study these dynamics by analyzing their behavior in a bounded-recall
synchronous environment. We determine, for every "size" of game, the minimum
number of periods of play that stochastic (randomized) players must recall in
order for uncoupled dynamics to be self-stabilizing. We also do this for the
special case when the game is guaranteed to have unique best replies. For
deterministic players, we demonstrate two self-stabilizing uncoupled protocols.
One applies to all games and uses three steps of recall. The other uses two
steps of recall and applies to games where each player has at least four
available actions. For uncoupled deterministic players, we prove that a single
step of recall is insufficient to achieve self-stabilization, regardless of the
number of available actions.
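The kind of dynamics studied can be illustrated concretely. A minimal sketch (not one of the paper's protocols): a randomized, uncoupled rule with a single period of recall in a 2x2 pure coordination game, which reaches a pure Nash equilibrium with probability one from any initial profile:

```python
import random

# Toy illustration (not one of the paper's protocols): a randomized,
# uncoupled dynamic with one period of recall in a 2x2 coordination game.
# Each player sees only its own payoffs and last period's action profile;
# it keeps its action if that action was a best reply, else mixes uniformly.

def payoff(own, opp):
    return 1 if own == opp else 0   # pure coordination payoffs

def best_reply(opp):
    return max((0, 1), key=lambda a: payoff(a, opp))

def is_nash(profile):
    return all(profile[i] == best_reply(profile[1 - i]) for i in (0, 1))

def step(profile):
    return tuple(
        profile[i] if profile[i] == best_reply(profile[1 - i])
        else random.choice((0, 1))
        for i in (0, 1)
    )

def run(start, max_steps=10_000):
    profile = start
    for _ in range(max_steps):
        if is_nash(profile):
            return profile
        profile = step(profile)
    raise RuntimeError("did not converge")

random.seed(0)
print(run((0, 1)))  # settles at (0, 0) or (1, 1) from any initial state
```

The rule is uncoupled because each player's update depends only on its own payoff function; the paper's lower bounds show that dropping the randomization makes one step of recall insufficient.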
Monotone methods for equilibrium selection under perfect foresight dynamics
This paper studies equilibrium selection in supermodular games
based on perfect foresight dynamics. A normal form game is played
repeatedly in a large society of rational agents. There are frictions:
opportunities to revise actions follow independent Poisson processes.
Each agent forms his belief about the future evolution of the action
distribution in the society and takes an action that maximizes his expected
discounted payoff. A perfect foresight path is defined to be a feasible
path of the action distribution along which every agent with a revision
opportunity takes a best response to this path itself. A Nash
equilibrium is said to be absorbing if there exists no perfect foresight
path escaping from a neighborhood of this equilibrium; a Nash equilibrium
is said to be globally accessible if for each initial distribution,
there exists a perfect foresight path converging to this equilibrium.
By exploiting the monotone structure of the dynamics, a unique Nash
equilibrium that is absorbing and globally accessible for any small degree
of friction is identified for certain classes of supermodular games.
For games with monotone potentials, the selection of the monotone
potential maximizer is obtained. Complete characterizations of absorbing
equilibrium and globally accessible equilibrium are given for
binary supermodular games. An example demonstrates that unanimity
games may have multiple globally accessible equilibria for a small
friction.
The Big Match in Small Space
In this paper we study how to play (stochastic) games optimally using little
space. We focus on repeated games with absorbing states, a type of two-player,
zero-sum concurrent mean-payoff games. The prototypical example of these games
is the well-known Big Match of Gillette (1957). These games may not admit
optimal strategies, but they always have ε-optimal strategies. In this
paper we design ε-optimal strategies for Player 1 in these games that
use only O(log log T) space. Furthermore, we construct strategies for Player 1
that use space s(T), for an arbitrarily small unbounded non-decreasing function
s, and which guarantee an ε-optimal value for Player 1 in the limit-superior
sense. The previously known strategies use space Ω(log T), and it
was known that no strategy can use constant space if it is ε-optimal
even in the limit-superior sense. We also give a complementary lower bound.
Furthermore, we show that no Markov strategy, even extended with finite
memory, can ensure value greater than 0 in the Big Match, answering a question
posed by Abraham Neyman.
Dynamic club formation with coordination
We present a dynamic model of jurisdiction formation in a society of identical people. The process is described by a Markov chain defined by myopic optimization on the part of the players. We show that the process converges to a Nash equilibrium club structure. Next, we allow for coordination between members of the same club, i.e., club members can form coalitions for one period and deviate jointly. We define a Nash club equilibrium (NCE) as a strategy configuration that is immune to such coalitional deviations. We show that, if an NCE exists, this modified process converges to an NCE configuration with probability one. Finally, we deal with the case where an NCE fails to exist due to indivisibility problems. When the population size is not an integer multiple of the optimal club size, there will be leftover players who prevent the process from settling down. We define the concept of an approximate Nash club equilibrium (ANCE), meaning that all but k players play a Nash club equilibrium, where k is the minimal number of leftover players. We show that the modified process converges to an ergodic set of states, each of which is an ANCE.
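The myopic-adjustment chain can be sketched on a toy instance (with an assumed single-peaked payoff function, not the paper's specification): identical agents earn u(s) = -(s - OPT)^2 in a club of size s, and each period one revising agent moves wherever its one-step payoff is highest.

```python
import random

OPT = 4  # assumed optimal club size for this illustration

def payoff(size):
    return -(size - OPT) ** 2   # single-peaked at OPT

def step(clubs, rng):
    """One myopic revision: a club is picked uniformly (agents are
    identical, so any member decides alike); its member compares staying,
    joining any other club, and founding a new singleton club."""
    i = rng.randrange(len(clubs))
    options = [("stay", payoff(clubs[i]))]          # ties favor inertia
    for j, s in enumerate(clubs):
        if j != i:
            options.append((j, payoff(s + 1)))
    options.append(("new", payoff(1)))
    best = max(options, key=lambda o: o[1])
    if best[0] == "stay":
        return clubs
    clubs = clubs.copy()
    clubs[i] -= 1
    if best[0] == "new":
        clubs.append(1)
    else:
        clubs[best[0]] += 1
    return [s for s in clubs if s > 0]

def run(clubs, steps=5_000, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        clubs = step(clubs, rng)
    return sorted(clubs)

print(run([8]))  # with 8 agents and OPT = 4, the chain absorbs at [4, 4]
```

With 8 agents the chain is absorbed at two clubs of the optimal size; with, say, 9 agents the leftover agent keeps the process moving, which is the indivisibility problem the ANCE concept addresses.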