
    Suppressing Random Walks in Markov Chain Monte Carlo Using Ordered Overrelaxation

    Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n^2 iterations. Such random walks can sometimes be suppressed using "overrelaxed" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.
    Comment: uuencoded compressed postscript (with instructions on decoding)
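
    To make the idea concrete, here is a minimal Python sketch (my own illustration, not code from the paper) of the direct form of an ordered-overrelaxation update for one coordinate, assuming a continuous full conditional whose inverse CDF is available; the function name and the choice K = 20 are placeholders.

    import numpy as np

    def ordered_overrelaxation_step(x, inv_cdf, K=20, rng=None):
        """One ordered-overrelaxation update for a single coordinate.

        x       : current value of the coordinate
        inv_cdf : inverse CDF of its full conditional (other coordinates held fixed)
        K       : number of auxiliary draws; larger K suppresses the random walk more strongly
        """
        rng = np.random.default_rng() if rng is None else rng
        # Draw K independent values from the full conditional via the inverse CDF.
        draws = inv_cdf(rng.uniform(size=K))
        # Pool them with the current value and sort.
        pool = np.sort(np.append(draws, x))
        # Rank of the current value within the pool (0-based).
        r = int(np.searchsorted(pool, x))
        # The new value sits at the mirrored rank, i.e. on the opposite side
        # of the conditional distribution from the current value.
        return pool[K - r]

    Replacing each Gibbs update in a sweep with a step of this kind makes successive moves tend to land on the opposite side of each conditional, so the chain drifts systematically rather than diffusing; the construction referred to in the abstract uses the CDF and inverse CDF so that all K auxiliary draws need not be generated explicitly.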

    Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors

    Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, using similar sparsity constraints in the Bayesian framework for inverse problems by encoding them in the prior distribution has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle to these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion: accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this article, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to the properties of the most commonly applied Metropolis-Hastings (MH) sampling schemes: we demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased, so that in practice Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference.
    Comment: 33 pages, 14 figures
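
    As a rough illustration of the kind of single-component Gibbs sampler described above (a sketch under stated assumptions, not the paper's implementation), suppose a linear forward operator A, Gaussian noise with standard deviation sigma, and a posterior proportional to exp(-||Ax - y||^2 / (2 sigma^2) - lam * ||x||_1). Each one-dimensional full conditional is then a two-piece truncated Gaussian that can be drawn exactly:

    import numpy as np
    from scipy.special import expit
    from scipy.stats import norm, truncnorm

    def sc_gibbs_l1(A, y, sigma, lam, n_sweeps=1000, rng=None):
        # Single-component Gibbs sampler for
        #   p(x) proportional to exp(-||A x - y||^2 / (2 sigma^2) - lam * ||x||_1).
        # Each coordinate's full conditional is a two-piece truncated Gaussian.
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[1]
        x = np.zeros(n)
        r = y - A @ x                          # running residual y - A x
        col_sq = np.einsum('ij,ij->j', A, A)   # squared column norms ||a_i||^2
        samples = np.empty((n_sweeps, n))
        for t in range(n_sweeps):
            for i in range(n):
                a_i = A[:, i]
                r_wo = r + a_i * x[i]          # residual with x_i's contribution removed
                c, d = col_sq[i], a_i @ r_wo
                s = sigma / np.sqrt(c)         # conditional std dev on either half-line
                m_pos = (d - lam * sigma**2) / c   # mean of the x_i > 0 piece
                m_neg = (d + lam * sigma**2) / c   # mean of the x_i < 0 piece
                # Log masses of the two pieces (up to a shared constant).
                lw_pos = m_pos**2 / (2 * s**2) + norm.logcdf(m_pos / s)
                lw_neg = m_neg**2 / (2 * s**2) + norm.logcdf(-m_neg / s)
                if rng.uniform() < expit(lw_pos - lw_neg):
                    x_new = truncnorm.rvs(-m_pos / s, np.inf, loc=m_pos, scale=s,
                                          random_state=rng)
                else:
                    x_new = truncnorm.rvs(-np.inf, -m_neg / s, loc=m_neg, scale=s,
                                          random_state=rng)
                r = r_wo - a_i * x_new
                x[i] = x_new
            samples[t] = x
        return samples

    The residual r is updated incrementally, so each coordinate update costs one column of A rather than a full matrix-vector product, and every update is an exact draw from the conditional with no rejection step.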

    Improved Algorithms for Simulating Crystalline Membranes

    The physics of crystalline membranes, i.e. fixed-connectivity surfaces embedded in three dimensions and with an extrinsic curvature term, is very rich and of great theoretical interest. To understand their behavior, numerical simulations are commonly used. Unfortunately, traditional Monte Carlo algorithms suffer from very long autocorrelations and critical slowing down in the more interesting phases of the model. In this paper we study the performance of improved Monte Carlo algorithms for simulating crystalline membranes, such as hybrid overrelaxation and unigrid methods, and compare their performance to the more traditional Metropolis algorithm. We find that although the overrelaxation algorithm does not reduce the critical slowing down, it gives an overall gain of a factor of 15 over the Metropolis algorithm. The unigrid algorithm, on the other hand, does reduce the critical slowing down exponent, to z ≈ 1.7.
    Comment: 14 pages, 1 eps-figure

    Monte Carlo Renormalization of 2d Simplicial Quantum Gravity Coupled to Gaussian Matter

    We extend a recently proposed real-space renormalization group scheme for dynamical triangulations to situations where the lattice is coupled to continuous scalar fields. Using Monte Carlo simulations in combination with a linear, stochastic blocking scheme for the scalar fields, we are able to determine the leading eigenvalues of the stability matrix with good accuracy, both for c = 1 and c = 10 theories.
    Comment: 17 pages, 7 figures

    Zero Variance Markov Chain Monte Carlo for Bayesian Estimators

    A general-purpose variance reduction technique for Markov chain Monte Carlo (MCMC) estimators, based on the zero-variance principle introduced in the physics literature, is proposed to evaluate the expected value of a function f with respect to a possibly unnormalized probability distribution. In this context, a control variate approach, commonly used in Monte Carlo simulation, is exploited by replacing f with a different function f̃. The function f̃ is constructed so that its expectation under the target distribution equals that of f, but its variance is much smaller. Theoretically, an optimal choice of f̃ exists which may lead to zero variance; in practice, a suitable approximation to it must be investigated. In this paper, an efficient class of re-normalized functions f̃ is investigated, based on a polynomial parametrization. We find that a low-degree polynomial (first, second or third degree) can lead to a dramatic variance reduction in the resulting zero-variance MCMC estimator. General formulas for the construction of the control variates in this context are given. These allow for an easy implementation of the method in very general settings, regardless of the form of the target/posterior distribution (only differentiability is required) and of the MCMC algorithm implemented (in particular, no reversibility is needed).
    Keywords: control variates, GARCH models, logistic regression, Metropolis-Hastings algorithm, variance reduction
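
    For the first-degree polynomial case, the post-processing step can be sketched as follows (an illustration under these assumptions, not the paper's code): the gradient of the log target, evaluated at the existing MCMC samples, supplies control variates with zero expectation under the target, and the variance-minimising coefficients are a least-squares fit.

    import numpy as np

    def zv_estimate(f_vals, grad_logp):
        # Zero-variance post-processing with a first-degree polynomial, given
        #   f_vals    : (T,)   values of f at the MCMC samples
        #   grad_logp : (T, d) gradient of the log target at the same samples.
        # z = -grad_logp / 2 has zero expectation under the target, so
        # mean(f + z @ a) estimates E[f] for any coefficient vector a.
        z = -0.5 * grad_logp
        zc = z - z.mean(axis=0)
        fc = f_vals - f_vals.mean()
        # Variance-minimising coefficients: a = -Cov(z)^{-1} Cov(z, f).
        a = -np.linalg.solve(zc.T @ zc, zc.T @ fc)
        return np.mean(f_vals + z @ a)

    Higher-degree polynomial parametrizations enlarge the set of control variates in the same spirit and are fitted by the same least-squares step.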

    Multivariate Stochastic Volatility Models: Bayesian Estimation and Model Comparison

    In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via a freely available Bayesian software package called WinBUGS. Moreover, we introduce to the literature several new specifications that are natural extensions of certain existing models, one of which allows for time-varying correlation coefficients. The ideas are illustrated by fitting nine multivariate SV models to bivariate time series data of weekly exchange rates, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, an additive factor structure, and a multiplicative factor structure. Empirical results suggest that the most adequate specifications are those that allow for time-varying correlation coefficients.
    Keywords: Multivariate stochastic volatility; Granger causality in volatility; Heavy-tailed distributions; Time varying correlations; Factors; MCMC; DIC.
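
    For orientation, the equations below write out one common bivariate SV specification with a Fisher-transformed AR(1) correlation process, of the kind referred to above as allowing time-varying correlations; this is a generic sketch, and the parameterizations of the nine models compared in the paper may differ.

    \begin{align*}
      y_t &= \operatorname{diag}\!\big(e^{h_{1t}/2},\, e^{h_{2t}/2}\big)\,\epsilon_t,
      \qquad \epsilon_t \sim N\!\left(0, \begin{pmatrix} 1 & \rho_t \\ \rho_t & 1 \end{pmatrix}\right), \\
      h_{i,t+1} &= \mu_i + \phi_i\,(h_{i,t} - \mu_i) + \eta_{i,t},
      \qquad \eta_{i,t} \sim N(0, \sigma_{\eta,i}^2), \quad i = 1, 2, \\
      \rho_t &= \frac{e^{q_t} - 1}{e^{q_t} + 1},
      \qquad q_{t+1} = \mu_q + \phi_q\,(q_t - \mu_q) + \sigma_q\, v_t, \quad v_t \sim N(0, 1).
    \end{align*}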