
    Robust adaptive Metropolis algorithm with coerced acceptance rate

    The adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223-242] uses the estimated covariance of the target distribution in the proposal distribution. This paper introduces a new robust adaptive Metropolis algorithm that estimates the shape of the target distribution and simultaneously coerces the acceptance rate. The adaptation rule is computationally simple and adds no extra cost compared with the AM algorithm. The adaptation strategy can be seen as a multidimensional extension of previously proposed methods that adapt the scale of the proposal distribution in order to attain a given acceptance rate. The empirical results show promising behaviour of the new algorithm in an example with a Student target distribution having no finite second moment, where the AM covariance estimate is unstable. In the examples with finite second moments, the performance of the new approach seems competitive with the AM algorithm combined with scale adaptation. Comment: 21 pages, 3 figures.
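    To make the adaptation rule concrete, the following is a minimal sketch of a RAM-style update along the lines described above: propose Y = X + S U with U ~ N(0, I), then apply a rank-one update to the shape factor S that coerces the acceptance rate towards a target value. The target log-density `log_pi`, the step sizes η_n = min(1, d·n^{-2/3}) and the 0.234 target rate are illustrative assumptions of this sketch, not the paper's exact specification.

```python
import numpy as np

def ram_sample(log_pi, x0, n_iter, target_acc=0.234, gamma=2/3, rng=None):
    """Sketch of a robust adaptive Metropolis (RAM) chain with coerced acceptance rate."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    d = len(x)
    S = np.eye(d)                      # Cholesky-type shape factor of the proposal
    lp_x = log_pi(x)
    chain = np.empty((n_iter, d))
    for n in range(1, n_iter + 1):
        u = rng.standard_normal(d)
        y = x + S @ u                  # proposal Y = X + S U, U ~ N(0, I)
        lp_y = log_pi(y)
        alpha = np.exp(min(0.0, lp_y - lp_x))
        if rng.random() < alpha:
            x, lp_x = y, lp_y
        # rank-one shape update: S S^T <- S (I + eta (alpha - a*) u u^T / |u|^2) S^T
        eta = min(1.0, d * n ** (-gamma))
        M = np.eye(d) + eta * (alpha - target_acc) * np.outer(u, u) / (u @ u)
        S = np.linalg.cholesky(S @ M @ S.T)
        chain[n - 1] = x
    return chain
```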

    Conditional convex orders and measurable martingale couplings

    Strassen's classical martingale coupling theorem states that two real-valued random variables are ordered in the convex (resp. increasing convex) stochastic order if and only if they admit a martingale (resp. submartingale) coupling. By analyzing topological properties of spaces of probability measures equipped with a Wasserstein metric and applying a measurable selection theorem, we prove a conditional version of this result for real-valued random variables conditioned on a random element taking values in a general measurable space. We also provide an analogue of the conditional martingale coupling theorem in the language of probability kernels and illustrate how it can be applied in the analysis of pseudo-marginal Markov chain Monte Carlo algorithms. Finally, we show how our results imply the existence of a measurable minimiser in the context of martingale optimal transport. Comment: 21 pages.
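    For reference, the unconditional statement the abstract starts from can be written as follows (a standard formulation of Strassen's theorem; the notation here is chosen for illustration).

```latex
% Strassen's martingale coupling theorem (unconditional form referenced above):
% the convex order is equivalent to the existence of a martingale coupling.
\[
  \mathbb{E}\,f(X) \le \mathbb{E}\,f(Y)\ \text{for all convex } f
  \quad\Longleftrightarrow\quad
  \exists\,(\hat X,\hat Y):\ \hat X \overset{d}{=} X,\ \hat Y \overset{d}{=} Y,\
  \mathbb{E}[\hat Y \mid \hat X] = \hat X \ \text{a.s.}
\]
% For the increasing convex order the equality is replaced by
% \mathbb{E}[\hat Y \mid \hat X] \ge \hat X (a submartingale coupling).
```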

    Markovian stochastic approximation with expanding projections

    Stochastic approximation is a framework unifying many random iterative algorithms occurring in a diverse range of applications. The stability of the process is often difficult to verify in practical applications, and the process may even be unstable without additional stabilisation techniques. We study a stochastic approximation procedure with expanding projections similar to Andradóttir [Oper. Res. 43 (1995) 1037-1048]. We focus on Markovian noise and show stability and convergence under general conditions. Our framework also incorporates the possibility of using a random step size sequence, which allows us to consider settings with a non-smooth family of Markov kernels. We apply the theory to stochastic approximation expectation maximisation with particle independent Metropolis-Hastings sampling. Comment: Published at http://dx.doi.org/10.3150/12-BEJ497 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
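    A minimal sketch of the kind of recursion studied here: θ_{n+1} = Π_{K_n}(θ_n + γ_n H(θ_n, X_{n+1})), where X_{n+1} evolves under a θ-dependent Markov kernel and the projection sets K_n expand. The mean-field callback `h`, the transition callback `markov_step`, the polynomially decaying step sizes and the Euclidean balls used as K_n are all illustrative assumptions, not the paper's conditions.

```python
import numpy as np

def sa_expanding_projections(h, markov_step, theta0, x0, n_iter,
                             step=lambda n: n ** -0.7,
                             radius=lambda n: 10.0 * (n + 1) ** 0.25,
                             rng=None):
    """Sketch: theta_{n+1} = Pi_{K_n}(theta_n + gamma_n * h(theta_n, X_{n+1})),
    with Markovian noise X_{n+1} ~ P_{theta_n}(X_n, .) and expanding balls K_n."""
    rng = np.random.default_rng(rng)
    theta, x = np.asarray(theta0, dtype=float), x0
    for n in range(1, n_iter + 1):
        x = markov_step(theta, x, rng)          # one transition of the theta-dependent kernel
        theta = theta + step(n) * np.asarray(h(theta, x), dtype=float)
        r = radius(n)                           # project back onto the growing ball K_n
        norm = np.linalg.norm(theta)
        if norm > r:
            theta = theta * (r / norm)
    return theta
```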

    On the ergodicity of the adaptive Metropolis algorithm on unbounded domains

    This paper describes sufficient conditions to ensure the correct ergodicity of the Adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223-242] for target distributions with noncompact support. The conditions ensuring a strong law of large numbers require that the tails of the target density decay super-exponentially and have regular contours. The result is based on the ergodicity of an auxiliary process that is sequentially constrained to feasible adaptation sets, together with independent estimates of the growth rate of the AM chain and the corresponding geometric drift constants. The ergodicity result for the constrained process is obtained through a modification of the approach due to Andrieu and Moulines [Ann. Appl. Probab. 16 (2006) 1462-1505]. Comment: Published at http://dx.doi.org/10.1214/10-AAP682 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
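    For context, a minimal sketch of the AM chain whose ergodicity is analysed above: Gaussian random-walk proposals whose covariance tracks the empirical covariance of the chain history. The 2.38²/d scaling, the ε-regularisation and adapting from the first iteration (rather than after an initial non-adaptive period) are simplifying assumptions of this sketch.

```python
import numpy as np

def am_sample(log_pi, x0, n_iter, epsilon=1e-6, rng=None):
    """Sketch of the adaptive Metropolis (AM) chain: random-walk proposals whose
    covariance follows the empirical covariance of the history plus a small regulariser."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    d = len(x)
    sd = 2.38 ** 2 / d                      # usual AM scaling factor
    mean, M2 = x.copy(), np.zeros((d, d))   # running mean and scatter matrix of the chain
    lp_x = log_pi(x)
    chain = np.empty((n_iter, d))
    for n in range(1, n_iter + 1):
        cov = M2 / (n - 1) if n > 1 else np.eye(d)
        y = rng.multivariate_normal(x, sd * (cov + epsilon * np.eye(d)))
        lp_y = log_pi(y)
        if rng.random() < np.exp(min(0.0, lp_y - lp_x)):
            x, lp_x = y, lp_y
        delta = x - mean                    # update running moments with the new state
        mean += delta / (n + 1)
        M2 += np.outer(delta, x - mean)
        chain[n - 1] = x
    return chain
```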

    Quantitative convergence rates for sub-geometric Markov chains

    We provide explicit expressions for the constants involved in the characterisation of ergodicity of sub-geometric Markov chains. The constants are determined in terms of those appearing in the assumed drift and one-step minorisation conditions. The result is fundamental for the study of algorithms where uniform bounds on these constants are needed across a family of Markov kernels. Our result also accommodates some classes of inhomogeneous chains. Comment: 14 pages.
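    For context, drift and one-step minorisation conditions of the kind referred to above typically take the following form (standard in the sub-geometric ergodicity literature; the paper's exact assumptions may differ in detail).

```latex
% Typical sub-geometric drift and one-step minorisation conditions
% (standard forms; the paper's exact assumptions may differ).
\[
  PV(x) \;\le\; V(x) - \varphi\bigl(V(x)\bigr) + b\,\mathbf{1}_C(x),
  \qquad
  P(x, \cdot) \;\ge\; \varepsilon\,\nu(\cdot) \quad \text{for all } x \in C,
\]
% where V \ge 1, \varphi is concave and increasing with \varphi(v)/v \to 0,
% C is a small set, b < \infty and \varepsilon > 0.
```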

    On the stability and ergodicity of adaptive scaling Metropolis algorithms

    The stability and ergodicity properties of two adaptive random walk Metropolis algorithms are considered. Both algorithms adjust the scaling of the proposal distribution continuously based on the observed acceptance probability. Unlike previously proposed forms of the algorithms, the adapted scaling parameter is not constrained to a predefined compact interval. The first algorithm is based on scale adaptation only, while the second also incorporates covariance adaptation. A strong law of large numbers is shown to hold assuming that the target density is smooth enough and has either compact support or super-exponentially decaying tails. Comment: 24 pages, 1 figure; major revision.
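    A minimal sketch of the first (scale-only) algorithm as described above: the proposal scale is adapted on the log scale towards a target acceptance rate, with no compact constraint on the adapted parameter. The step sizes n^{-2/3} and the 0.234 target rate are illustrative choices of this sketch.

```python
import numpy as np

def asm_sample(log_pi, x0, n_iter, target_acc=0.234, gamma=2/3, rng=None):
    """Sketch of a scale-only adaptive Metropolis chain: the proposal scale theta is
    adapted on the log scale towards a target acceptance rate, unconstrained."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    d = len(x)
    log_theta = 0.0
    lp_x = log_pi(x)
    chain = np.empty((n_iter, d))
    for n in range(1, n_iter + 1):
        y = x + np.exp(log_theta) * rng.standard_normal(d)   # random-walk proposal with scale theta
        lp_y = log_pi(y)
        alpha = np.exp(min(0.0, lp_y - lp_x))
        if rng.random() < alpha:
            x, lp_x = y, lp_y
        log_theta += n ** (-gamma) * (alpha - target_acc)    # Robbins-Monro style scale update
        chain[n - 1] = x
    return chain
```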