An Algorithm for Computing Stochastically Stable Distributions with Applications to Multiagent Learning in Repeated Games
One of the proposed solutions to the equilibrium selection problem for agents
learning in repeated games is obtained via the notion of stochastic stability.
Learning algorithms are perturbed so that the Markov chain underlying the
learning dynamics is necessarily irreducible and yields a unique stable
distribution. The stochastically stable distribution is the limit of these
stable distributions as the perturbation rate tends to zero. We present the
first exact algorithm for computing the stochastically stable distribution of a
Markov chain. We use our algorithm to predict the long-term dynamics of simple
learning algorithms in sample repeated games.

Comment: Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI2005)
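The notion of stochastic stability described above can be sketched numerically: perturb a reducible learning chain so it becomes irreducible, compute its unique stationary distribution, and shrink the perturbation rate. The 2-state chain and tremble rates below are hypothetical, and this brute-force limiting procedure is only an illustration, not the exact algorithm the paper proposes.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible stochastic matrix P,
    found as the solution of pi P = pi with the normalization sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical unperturbed chain: states 0 and 1 are both absorbing,
# so the chain is reducible and the stationary distribution is not unique.
# Perturbing with rate eps makes it irreducible: escaping state 0 takes
# one "mutation" (rate eps), escaping state 1 takes two (rate eps**2).
def P(eps):
    return np.array([[1 - eps,       eps],
                     [eps**2, 1 - eps**2]])

for eps in [1e-1, 1e-2, 1e-3]:
    # As eps shrinks, mass concentrates on state 1,
    # the stochastically stable state.
    print(eps, stationary(P(eps)))
```

The stationary distribution here is (eps/(1+eps), 1/(1+eps)), so the stochastically stable distribution puts all mass on the state that is harder to escape.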
Multi-scale metastable dynamics and the asymptotic stationary distribution of perturbed Markov chains
We consider a simple but important class of metastable discrete time Markov
chains, which we call perturbed Markov chains. Basically, we assume that the
transition matrices depend on a parameter ε, and converge as
ε → 0. We further assume that the chain is irreducible for
ε > 0 but may have several essential communicating classes when
ε = 0. This leads to metastable behavior, possibly on multiple time
scales. For each of the relevant time scales, we derive two effective chains.
The first one describes the (possibly irreversible) metastable dynamics, while
the second one is reversible and describes metastable escape probabilities.
Closed probabilistic expressions are given for the asymptotic transition
probabilities of these chains, but we also show how to compute them in a fast
and numerically stable way. As a consequence, we obtain efficient algorithms
for computing the committor function and the limiting stationary distribution.

Comment: 26 pages, 1 figure
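A minimal numerical sketch of the setup, assuming a hypothetical 4-state chain with two essential communicating classes at ε = 0 that are coupled on two different time scales; the direct eigenvector computation below is only for illustration, not the fast and numerically stable method the paper develops.

```python
import numpy as np

def stationary(P):
    """Stationary distribution via the left eigenvector of P
    for the eigenvalue closest to 1."""
    w, v = np.linalg.eig(P.T)
    k = np.argmin(np.abs(w - 1.0))
    pi = np.real(v[:, k])
    return pi / pi.sum()

# Hypothetical perturbed chain: at eps = 0 the essential communicating
# classes are {0, 1} and {2, 3}. The coupling is asymmetric, escape
# 1 -> 2 at rate eps but return 2 -> 1 only at rate eps**2, so the two
# metastable transitions happen on different time scales.
def P(eps):
    return np.array([
        [0.5,    0.5,          0.0,          0.0],
        [0.5,    0.5 - eps,    eps,          0.0],
        [0.0,    eps**2,       0.5 - eps**2, 0.5],
        [0.0,    0.0,          0.5,          0.5],
    ])

for eps in [1e-1, 1e-2, 1e-3]:
    # As eps -> 0, the stationary mass concentrates on the class
    # {2, 3} that is harder to escape (eps**2 vs eps).
    print(eps, stationary(P(eps)).round(4))
```

Balancing the probability flow across the cut between the two classes gives a mass ratio of order eps between them, so the limiting stationary distribution lives entirely on {2, 3}, in line with the asymptotic analysis described in the abstract.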