58 research outputs found

    Robust adaptive Metropolis algorithm with coerced acceptance rate

    The adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223-242] uses the estimated covariance of the target distribution in the proposal distribution. This paper introduces a new robust adaptive Metropolis algorithm that estimates the shape of the target distribution and simultaneously coerces the acceptance rate. The adaptation rule is computationally simple, adding no extra cost compared with the AM algorithm. The adaptation strategy can be seen as a multidimensional extension of a previously proposed method that adapts the scale of the proposal distribution in order to attain a given acceptance rate. The empirical results show promising behaviour of the new algorithm in an example with a Student target distribution having no finite second moment, where the AM covariance estimate is unstable. In the examples with finite second moments, the performance of the new approach is competitive with the AM algorithm combined with scale adaptation.
    Comment: 21 pages, 3 figures
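The shape-and-acceptance-rate adaptation described above can be illustrated with a short sketch. After each step, a Cholesky-type factor S of the proposal shape is rescaled using the realised acceptance probability, which coerces the acceptance rate towards the target. The step-size sequence and the target rate 0.234 below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def ram_sample(log_target, x0, n_iter, target_acc=0.234, seed=0):
    """Sketch of a robust adaptive Metropolis (RAM) chain.

    After each step the proposal factor S is updated so that
    S_new S_new^T = S (I + eta_n (alpha_n - a*) u u^T / |u|^2) S^T,
    coercing the acceptance rate towards a*.
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    S = np.eye(d)                                  # proposal shape factor
    chain = np.empty((n_iter, d))
    n_acc = 0
    for n in range(1, n_iter + 1):
        u = rng.standard_normal(d)
        y = x + S @ u
        lq = log_target(y)
        alpha = float(np.exp(min(0.0, lq - lp)))   # acceptance probability
        if rng.random() < alpha:
            x, lp = y, lq
            n_acc += 1
        eta = min(1.0, d * n ** (-0.66))           # step sizes (our choice)
        uu = np.outer(u, u) / max(float(u @ u), 1e-12)
        M = S @ (np.eye(d) + eta * (alpha - target_acc) * uu) @ S.T
        S = np.linalg.cholesky(M)                  # factor of the new shape
        chain[n - 1] = x
    return chain, n_acc / n_iter
```

On a standard Gaussian target, the empirical acceptance rate settles near target_acc; since eta*(alpha - target_acc) > -1, the updated matrix stays positive definite and the Cholesky step is always well defined.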

    Conditional convex orders and measurable martingale couplings

    Strassen's classical martingale coupling theorem states that two real-valued random variables are ordered in the convex (resp. increasing convex) stochastic order if and only if they admit a martingale (resp. submartingale) coupling. By analyzing topological properties of spaces of probability measures equipped with a Wasserstein metric and applying a measurable selection theorem, we prove a conditional version of this result for real-valued random variables conditioned on a random element taking values in a general measurable space. We also provide an analogue of the conditional martingale coupling theorem in the language of probability kernels and illustrate how this result can be applied in the analysis of pseudo-marginal Markov chain Monte Carlo algorithms. We also illustrate how our results imply the existence of a measurable minimiser in the context of martingale optimal transport.
    Comment: 21 pages
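In symbols, the unconditional form of Strassen's theorem referred to above reads as follows (this is the standard textbook formulation, not a quotation from the paper):

```latex
% Convex order: X \le_{cx} Y iff E f(X) <= E f(Y) for all convex f with
% both expectations finite. Strassen's theorem characterises this order
% by the existence of a martingale coupling:
X \le_{\mathrm{cx}} Y
\;\Longleftrightarrow\;
\exists\,(X',Y'):\quad X' \overset{d}{=} X,\quad Y' \overset{d}{=} Y,\quad
\mathbb{E}[\,Y' \mid X'\,] = X' \ \text{a.s.}
```

The increasing convex order is characterised analogously with the martingale condition relaxed to a submartingale one, i.e. E[Y' | X'] >= X'.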

    Markovian stochastic approximation with expanding projections

    Stochastic approximation is a framework unifying many random iterative algorithms occurring in a diverse range of applications. The stability of the process is often difficult to verify in practical applications, and the process may even be unstable without additional stabilisation techniques. We study a stochastic approximation procedure with expanding projections similar to Andradóttir [Oper. Res. 43 (1995) 1037-1048]. We focus on Markovian noise and show stability and convergence under general conditions. Our framework also allows a random step size sequence, which lets us consider settings with a non-smooth family of Markov kernels. We apply the theory to stochastic approximation expectation maximisation with particle independent Metropolis-Hastings sampling.
    Comment: Published at http://dx.doi.org/10.3150/12-BEJ497 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
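As a toy illustration of the expanding-projection mechanism (with i.i.d. rather than Markovian noise for simplicity; the radii, restart point, and step sizes below are our own illustrative choices):

```python
import numpy as np

def sa_expanding_projections(field, theta0, n_iter, seed=0):
    """Robbins-Monro iteration with expanding projection sets (sketch).

    If an update would leave the current ball K_j = {|theta| <= r_j},
    the iterate is reset to theta0 and the next, larger set K_{j+1}
    is used from then on.
    """
    rng = np.random.default_rng(seed)
    radii = [10.0 * 2.0 ** j for j in range(40)]   # expanding compact sets
    theta = np.asarray(theta0, dtype=float)
    j = 0
    for n in range(1, n_iter + 1):
        gamma = 1.0 / n                            # decreasing step sizes
        noise = rng.standard_normal(theta.shape)
        proposal = theta + gamma * field(theta, noise)
        if np.linalg.norm(proposal) <= radii[j]:
            theta = proposal
        else:                                      # restart and expand
            theta = np.asarray(theta0, dtype=float)
            j = min(j + 1, len(radii) - 1)
    return theta
```

For instance, with the noisy mean field H(theta, X) = -theta + 0.1 X the iterates converge to the root theta = 0, and the projection step guards against transient excursions.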

    On the ergodicity of the adaptive Metropolis algorithm on unbounded domains

    This paper describes sufficient conditions to ensure the correct ergodicity of the Adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223-242] for target distributions with noncompact support. The conditions ensuring a strong law of large numbers require that the tails of the target density decay super-exponentially and have regular contours. The result is based on the ergodicity of an auxiliary process that is sequentially constrained to feasible adaptation sets, independent estimates of the growth rate of the AM chain, and the corresponding geometric drift constants. The ergodicity result for the constrained process is obtained through a modification of the approach due to Andrieu and Moulines [Ann. Appl. Probab. 16 (2006) 1462-1505].
    Comment: Published at http://dx.doi.org/10.1214/10-AAP682 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
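For reference, the AM algorithm whose ergodicity is analysed above can be sketched in a few lines. The burn-in length, the scaling 2.38^2/d, and the regularisation eps below follow commonly cited defaults, and the recursive moment updates are a standard implementation choice rather than a quotation from the paper:

```python
import numpy as np

def am_sample(log_target, x0, n_iter, n0=100, eps=1e-6, seed=0):
    """Adaptive Metropolis sketch: after a burn-in of n0 steps, the Gaussian
    proposal covariance is (2.38^2/d) * (empirical chain covariance + eps*I)."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    mean = x.copy()
    cov = np.zeros((d, d))
    chain = np.empty((n_iter, d))
    sd = 2.38 ** 2 / d
    for n in range(1, n_iter + 1):
        C = np.eye(d) if n <= n0 else sd * (cov + eps * np.eye(d))
        y = rng.multivariate_normal(x, C)
        lq = log_target(y)
        if np.log(rng.random()) < lq - lp:
            x, lp = y, lq
        chain[n - 1] = x
        # Welford-style recursive mean/covariance update over the chain
        delta = x - mean
        mean += delta / (n + 1)
        cov += (np.outer(delta, x - mean) - cov) / (n + 1)
    return chain
```

The unbounded-support issue studied in the paper arises here because the empirical covariance `cov` may grow without bound unless the target's tails are well behaved.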

    Quantitative convergence rates for sub-geometric Markov chains

    We provide explicit expressions for the constants involved in the characterisation of ergodicity of sub-geometric Markov chains. The constants are determined in terms of those appearing in the assumed drift and one-step minorisation conditions. The result is fundamental for the study of algorithms where uniform bounds for these constants are needed for a family of Markov kernels. Our result also accommodates some classes of inhomogeneous chains.
    Comment: 14 pages
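For orientation, drift and minorisation conditions of the kind referenced above typically take the following standard form (this is the generic formulation found in the Markov chain literature, not necessarily the exact assumptions of the paper):

```latex
% Sub-geometric drift condition, for some 0 < \alpha < 1, constants
% b, c > 0, a function V \ge 1, and a small set C:
P V(x) \;\le\; V(x) - c\, V(x)^{1-\alpha} + b\, \mathbf{1}_C(x),
% together with a one-step minorisation on C, for some \varepsilon > 0
% and a probability measure \nu:
P(x, \cdot) \;\ge\; \varepsilon\, \nu(\cdot) \qquad \text{for all } x \in C.
```

Conditions of this type yield polynomial convergence rates whose order depends on alpha, and the point of the paper is to make the constants in those rates explicit in terms of b, c, alpha and epsilon.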

    Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms

    We study convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms (Andrieu and Roberts [Ann. Statist. 37 (2009) 697-725]). We find that the asymptotic variance of the pseudo-marginal algorithm is always at least as large as that of the marginal algorithm. We show that if the marginal chain admits a (right) spectral gap and the weights (normalised estimates of the target density) are uniformly bounded, then the pseudo-marginal chain has a spectral gap. In many cases, a similar result holds for the absolute spectral gap, which is equivalent to geometric ergodicity. We also consider unbounded weight distributions and recover polynomial convergence rates in more specific cases: when the marginal algorithm is uniformly ergodic, an independent Metropolis-Hastings, or a random-walk Metropolis targeting a super-exponential density with regular contours. Our results on geometric and polynomial convergence rates imply central limit theorems. We also prove that under general conditions, the asymptotic variance of the pseudo-marginal algorithm converges to the asymptotic variance of the marginal algorithm as the accuracy of the estimators is increased.
    Comment: Published at http://dx.doi.org/10.1214/14-AAP1022 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
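The pseudo-marginal mechanism is easy to state in code: the Metropolis-Hastings ratio uses a noisy unbiased estimate of the target density, and the estimate at the current state is recycled until the next acceptance. The sketch below uses a random-walk proposal; the estimator in the usage example (lognormal noise with mean one) is a toy stand-in of our own, not from the paper:

```python
import numpy as np

def pseudo_marginal_mh(log_est, x0, n_iter, prop_sd=1.0, seed=0):
    """Pseudo-marginal random-walk Metropolis sketch.

    log_est(x, rng) returns the log of a noisy, nonnegative, unbiased
    estimate of the (unnormalised) target density at x. The current
    estimate is reused in the ratio, which keeps the extended chain exact.
    """
    rng = np.random.default_rng(seed)
    x = float(x0)
    lz = log_est(x, rng)          # log-estimate at the current state
    chain = np.empty(n_iter)
    for n in range(n_iter):
        y = x + prop_sd * rng.standard_normal()
        lzy = log_est(y, rng)     # fresh estimate at the proposal only
        if np.log(rng.random()) < lzy - lz:
            x, lz = y, lzy        # the accepted estimate is recycled
        chain[n] = x
    return chain
```

The bounded-weights condition in the abstract corresponds to the ratio between the estimate and the true density being uniformly bounded; with the lognormal toy noise below the weights are unbounded but light-tailed.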

    Conditional particle filters with diffuse initial distributions

    Conditional particle filters (CPFs) are powerful smoothing algorithms for general nonlinear/non-Gaussian hidden Markov models. However, CPFs can be inefficient or difficult to apply with diffuse initial distributions, which are common in statistical applications. We propose a simple but generally applicable auxiliary variable method, which can be used together with the CPF to perform efficient inference with diffuse initial distributions. The method only requires simulatable Markov transitions that are reversible with respect to the initial distribution, which may be improper. We focus in particular on random-walk type transitions, which are reversible with respect to a uniform initial distribution (on some domain), and on autoregressive kernels for Gaussian initial distributions. We also propose on-line adaptations within the methods: in the case of random-walk transitions, the adaptations use the estimated covariance and acceptance-rate adaptation, and we detail their theoretical validity. We tested our methods with a linear-Gaussian random-walk model, a stochastic volatility model, and a stochastic epidemic compartment model with a time-varying transmission rate. The experimental findings demonstrate that our method works reliably with little user specification and can mix substantially better than a direct particle Gibbs algorithm that treats the initial states as parameters.
    Comment: 21 pages, 17 figures
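For concreteness, one sweep of a basic (bootstrap-flavoured) conditional particle filter looks roughly as follows. This sketches the generic CPF that the paper builds on, not the auxiliary-variable method itself; all function names are illustrative, and a diffuse start would enter through init_sample:

```python
import numpy as np

def cpf_step(ref_traj, y, init_sample, transition, log_obs, n_particles, rng):
    """One sweep of a conditional particle filter (bootstrap flavour, sketch).

    The reference trajectory is kept as particle 0 throughout; the others
    are propagated and resampled as in a standard bootstrap filter, and a
    new trajectory is traced back through the ancestor indices.
    """
    T, N = len(y), n_particles
    X = np.empty((T, N))
    X[0] = init_sample(N, rng)
    X[0, 0] = ref_traj[0]                          # pin the reference path
    anc = np.empty((T, N), dtype=int)
    anc[0] = np.arange(N)
    logw = log_obs(y[0], X[0])
    for t in range(1, T):
        w = np.exp(logw - logw.max())
        a = rng.choice(N, size=N, p=w / w.sum())   # multinomial resampling
        a[0] = 0                                   # reference keeps its ancestor
        X[t] = transition(X[t - 1, a], rng)
        X[t, 0] = ref_traj[t]
        anc[t] = a
        logw = log_obs(y[t], X[t])
    # trace back one trajectory, chosen proportionally to the final weights
    w = np.exp(logw - logw.max())
    k = rng.choice(N, p=w / w.sum())
    traj = np.empty(T)
    for t in range(T - 1, -1, -1):
        traj[t] = X[t, k]
        k = anc[t, k]
    return traj
```

Iterating cpf_step, feeding each returned trajectory back in as the next reference, yields the particle Gibbs chain against which the paper's diffuse-initialisation method is compared.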