Perturbation theory for Markov chains via Wasserstein distance
Perturbation theory for Markov chains addresses the question of how small
differences in the transitions of Markov chains are reflected in differences
between their distributions. We prove powerful and flexible bounds on the
distance of the $n$-th step distributions of two Markov chains when one of them
satisfies a Wasserstein ergodicity condition. Our work is motivated by the
recent interest in approximate Markov chain Monte Carlo (MCMC) methods in the
analysis of big data sets. By using an approach based on Lyapunov functions, we
provide estimates for geometrically ergodic Markov chains under weak
assumptions. In an autoregressive model, our bounds cannot be improved in
general. We illustrate our theory by showing quantitative estimates for
approximate versions of two prominent MCMC algorithms, the Metropolis-Hastings
and stochastic Langevin algorithms. Comment: 31 pages, accepted at Bernoulli Journal.
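A minimal numerical sketch of the kind of question the abstract describes, not the paper's proof technique: it compares the $n$-th step distributions of an AR(1) chain and a slightly perturbed version via the one-dimensional Wasserstein distance. The coefficient, noise scale, and perturbation size are illustrative assumptions.

```python
# Illustrative sketch: empirical Wasserstein distance between the n-th step
# distributions of an exact AR(1) chain and a perturbed one. Parameters are
# assumptions chosen for illustration only.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
a, sigma, delta = 0.7, 1.0, 0.05      # AR(1) coefficient, noise scale, perturbation
n_steps, n_chains = 50, 10_000

x = np.zeros(n_chains)                # exact chain:     X_{k+1} = a * X_k + sigma * eps
y = np.zeros(n_chains)                # perturbed chain: Y_{k+1} = (a + delta) * Y_k + sigma * eps
for _ in range(n_steps):
    eps = rng.standard_normal(n_chains)
    x = a * x + sigma * eps
    y = (a + delta) * y + sigma * eps

print("empirical W1 distance after", n_steps, "steps:", wasserstein_distance(x, y))
```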
Noisy Monte Carlo: Convergence of Markov chains with approximate transition kernels
Monte Carlo algorithms often aim to draw from a distribution $\pi$ by
simulating a Markov chain with transition kernel $P$ such that $\pi$ is
invariant under $P$. However, there are many situations for which it is
impractical or impossible to draw from the transition kernel $P$. For instance,
this is the case with massive datasets, where it is prohibitively expensive to
calculate the likelihood and is also the case for intractable likelihood models
arising from, for example, Gibbs random fields, such as those found in spatial
statistics and network analysis. A natural approach in these cases is to
replace $P$ by an approximation $\hat{P}$. Using theory from the stability of
Markov chains, we explore a variety of situations where it is possible to
quantify how 'close' the chain given by the transition kernel $\hat{P}$ is to
the chain given by $P$. We apply these results to several examples from spatial
statistics and network analysis. Comment: This version: results extended to non-uniformly ergodic Markov chains.
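A hedged illustration of what replacing $P$ with an approximation $\hat{P}$ can look like in the massive-data setting: a random-walk Metropolis sampler for a Gaussian mean whose log-likelihood is estimated from a random subsample. The model, subsample size, and step size are assumptions for illustration, not the specific examples treated in the paper.

```python
# Sketch of an approximate ("noisy") transition kernel P-hat: Metropolis with a
# subsampled log-likelihood. All model choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=100_000)   # synthetic "massive" dataset
N, m = data.size, 500                                  # full size vs. subsample size

def approx_loglik(theta):
    # Subsample estimate of the full Gaussian log-likelihood (up to a constant).
    sub = rng.choice(data, size=m, replace=False)
    return (N / m) * np.sum(-0.5 * (sub - theta) ** 2)

theta, step = 0.0, 0.05
samples = []
for _ in range(5_000):
    prop = theta + step * rng.standard_normal()
    # Accept/reject with the noisy log-likelihood: this defines P-hat, not P.
    if np.log(rng.uniform()) < approx_loglik(prop) - approx_loglik(theta):
        theta = prop
    samples.append(theta)

print("approximate posterior mean estimate:", np.mean(samples[1_000:]))
```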
Convergence in distribution for filtering processes associated to Hidden Markov Models with densities
Consider a filtering process associated to a hidden Markov model with
densities for which both the state space and the observation space are
complete, separable, metric spaces. If the underlying, hidden Markov chain is
strongly ergodic and the filtering process fulfills a certain coupling
condition, we prove that, in the limit, the distribution of the filtering
process is independent of the initial distribution of the hidden Markov chain.
If furthermore the hidden Markov chain is uniformly ergodic, then we prove that
the filtering process converges in distribution. Comment: 54 pages, revision. Rewritten introduction. Theorem 12.1 sharper than
Theorem 16.1 (v1). Proofs and results reorganised. Example 18.3 (v1) excluded.
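A discrete-state sketch of the filtering recursion and of the "forgetting" of the initial distribution that such convergence results concern; the transition matrix, Gaussian emission densities, and the two initial distributions are illustrative assumptions, not the paper's general state-space setting.

```python
# Run the same filter from two initial distributions and watch them merge.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # hidden-chain transition matrix (assumed)
means = np.array([0.0, 3.0])         # Gaussian emission means per hidden state (assumed)

def emission_density(y):
    return np.exp(-0.5 * (y - means) ** 2)      # unnormalised N(mean, 1) densities

def filter_step(pi, y):
    # One filtering step: predict with P, then correct with the observation likelihood.
    pred = pi @ P
    post = pred * emission_density(y)
    return post / post.sum()

# Simulate a hidden path and its observations.
state, ys = 0, []
for _ in range(200):
    state = rng.choice(2, p=P[state])
    ys.append(rng.normal(means[state], 1.0))

pi_a, pi_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for y in ys:
    pi_a, pi_b = filter_step(pi_a, y), filter_step(pi_b, y)

print("total-variation gap between the two filters:", 0.5 * np.abs(pi_a - pi_b).sum())
```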