    Two Metropolis-Hastings Algorithms for Posterior Measures with Non-Gaussian Priors in Infinite Dimensions

    We introduce two classes of Metropolis-Hastings algorithms for sampling target measures that are absolutely continuous with respect to non-Gaussian prior measures on infinite-dimensional Hilbert spaces. In particular, we focus on certain classes of prior measures for which prior-reversible proposal kernels of the autoregressive type can be designed. We then use these proposal kernels to design algorithms that satisfy detailed balance with respect to the target measures. Afterwards, we introduce a new class of prior measures, called the Bessel-K priors, as a generalization of the gamma distribution to measures in infinite dimensions. The Bessel-K priors interpolate between well-known priors such as the gamma distribution and Besov priors and can model sparse or compressible parameters. We present concrete instances of our algorithms for the Bessel-K priors in the context of numerical examples in density estimation, finite-dimensional denoising, and deconvolution on the circle.
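
    To illustrate the key idea, here is a minimal sketch for the simplest special case: an i.i.d. Gamma(a, 1) prior on R^d with a beta-gamma autoregressive proposal v = B*u + G, where B ~ Beta(b, a-b) and G ~ Gamma(a-b, 1). This kernel leaves the gamma prior invariant and is reversible with respect to it, so only the negative log-likelihood enters the acceptance ratio. The function names, toy denoising likelihood, and parameter values are illustrative choices, not the paper's Bessel-K construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_reversible_mh(phi, a, b, d, n_iter):
    """Metropolis-Hastings with a beta-gamma autoregressive proposal.

    The kernel u -> B*u + G with B ~ Beta(b, a-b), G ~ Gamma(a-b, 1)
    is reversible with respect to the i.i.d. Gamma(a, 1) prior, so the
    acceptance ratio involves only the negative log-likelihood phi.
    """
    u = rng.gamma(a, size=d)                   # start from a prior draw
    chain = np.empty((n_iter, d))
    for n in range(n_iter):
        B = rng.beta(b, a - b, size=d)
        G = rng.gamma(a - b, size=d)
        v = B * u + G                          # prior-reversible proposal
        if np.log(rng.random()) < phi(u) - phi(v):
            u = v
        chain[n] = u
    return chain

# Toy denoising likelihood (illustrative): y = u + noise.
y = rng.gamma(2.0, size=10) + 0.1 * rng.standard_normal(10)
phi = lambda u: np.sum((y - u) ** 2) / (2 * 0.1 ** 2)
chain = prior_reversible_mh(phi, a=2.0, b=1.4, d=10, n_iter=5000)
```

    Here b/a plays the role of the autoregression parameter: b close to a gives small, highly correlated moves, while small b gives near-independent draws from the prior.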

    Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors

    The computational complexity of MCMC methods for the exploration of complex probability measures is a challenging and important problem. A challenge of particular importance arises in Bayesian inverse problems where the target distribution may be supported on an infinite-dimensional space. In practice this involves the approximation of measures defined on sequences of spaces of increasing dimension. Motivated by an elliptic inverse problem with non-Gaussian prior, we study the design of proposal chains for the Metropolis-Hastings algorithm with dimension-independent performance. Dimension-independent bounds on the Monte Carlo error of MCMC sampling for Gaussian prior measures have already been established. In this paper we provide a simple recipe to obtain these bounds for non-Gaussian prior measures. To illustrate the theory we consider an elliptic inverse problem arising in groundwater flow. We explicitly construct an efficient Metropolis-Hastings proposal based on local proposals, and we provide numerical evidence which supports the theory.
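
    For intuition about prior-preserving local proposals, here is a minimal sketch for the model case of a uniform prior on [-1, 1]^d expansion coefficients: a componentwise random walk reflected at the boundary is reversible with respect to the uniform prior, so only the likelihood enters the acceptance probability, in any dimension. The step-size schedule and the toy likelihood are hypothetical stand-ins, not the paper's elliptic groundwater setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def reflect(x):
    """Fold x back into [-1, 1] by reflection at the endpoints."""
    return 1.0 - np.abs(np.mod(x + 1.0, 4.0) - 2.0)

def uniform_prior_mh(phi, step, d, n_iter):
    """Reflected random walk; reversible w.r.t. Uniform([-1, 1]^d)."""
    u = rng.uniform(-1.0, 1.0, size=d)
    chain = np.empty((n_iter, d))
    for n in range(n_iter):
        v = reflect(u + step * rng.standard_normal(d))
        if np.log(rng.random()) < phi(u) - phi(v):
            u = v
        chain[n] = u
    return chain

# Coordinate-wise steps shrinking with the mode index, mimicking the decay
# of the prior expansion so acceptance stays bounded as d grows
# (an illustrative schedule; the paper derives the admissible scalings).
d = 50
step = 0.5 / np.arange(1, d + 1)
phi = lambda u: np.sum((u[:5] - 0.3) ** 2) / (2 * 0.1 ** 2)  # toy likelihood
chain = uniform_prior_mh(phi, step, d, n_iter=2000)
```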

    MCMC methods for functions: modifying old algorithms to make them faster

    Many problems arising in applications result in the need to probe a probability distribution for functions. Examples include Bayesian nonparametric statistics and conditioned diffusion processes. Standard MCMC algorithms typically become arbitrarily slow under the mesh refinement dictated by nonparametric description of the unknown function. We describe an approach to modifying a whole range of MCMC methods which ensures that their speed of convergence is robust under mesh refinement. In the applications of interest the data is often sparse and the prior specification is an essential part of the overall modeling strategy. The algorithmic approach that we describe is applicable whenever the desired probability measure has density with respect to a Gaussian process or Gaussian random field prior, and to some useful non-Gaussian priors constructed through random truncation. Applications are shown in density estimation, data assimilation in fluid mechanics, subsurface geophysics and image registration. The key design principle is to formulate the MCMC method for functions. This leads to algorithms which can be implemented via minor modification of existing algorithms, yet which show enormous speed-up on a wide range of applied problems.
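
    The canonical instance of this design principle for Gaussian priors is the preconditioned Crank-Nicolson (pCN) proposal, which replaces the standard random walk u + beta*xi by sqrt(1 - beta^2)*u + beta*xi with xi ~ N(0, C). A minimal finite-dimensional sketch follows; the diagonal covariance, synthetic data, and tuning values are toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def pcn(phi, C_sqrt, beta, n_iter):
    """Preconditioned Crank-Nicolson MCMC for exp(-phi) relative to N(0, C).

    The proposal v = sqrt(1 - beta^2) u + beta xi, xi ~ N(0, C), preserves
    the Gaussian prior, so the acceptance probability involves only phi
    and remains well behaved as the discretization is refined.
    """
    d = C_sqrt.shape[0]
    u = C_sqrt @ rng.standard_normal(d)        # start from a prior draw
    chain = np.empty((n_iter, d))
    for n in range(n_iter):
        xi = C_sqrt @ rng.standard_normal(d)
        v = np.sqrt(1.0 - beta ** 2) * u + beta * xi
        if np.log(rng.random()) < phi(u) - phi(v):
            u = v
        chain[n] = u
    return chain

# Toy setup: prior covariance with eigenvalues k^{-2}; data on low modes.
d = 100
C_sqrt = np.diag(1.0 / np.arange(1, d + 1))
y = np.zeros(d); y[:3] = 1.0
phi = lambda u: np.sum((y - u) ** 2) / (2 * 0.1 ** 2)
chain = pcn(phi, C_sqrt, beta=0.2, n_iter=5000)
```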

    Analysis of the Gibbs sampler for hierarchical inverse problems

    Many inverse problems arising in applications come from continuum models where the unknown parameter is a field. In practice the unknown field is discretized, resulting in a problem in $\mathbb{R}^N$, with an understanding that refining the discretization, that is increasing $N$, will often be desirable. In the context of Bayesian inversion this situation suggests the importance of two issues: (i) defining hyper-parameters in such a way that they are interpretable in the continuum limit $N \to \infty$ and so that their values may be compared between different discretization levels; (ii) understanding the efficiency of algorithms for probing the posterior distribution, as a function of large $N$. Here we address these two issues in the context of linear inverse problems subject to additive Gaussian noise within a hierarchical modelling framework based on a Gaussian prior for the unknown field and an inverse-gamma prior for a hyper-parameter, namely the amplitude of the prior variance. The structure of the model is such that the Gibbs sampler can be easily implemented for probing the posterior distribution. Subscribing to the dogma that one should think infinite-dimensionally before implementing in finite dimensions, we present function space intuition and provide rigorous theory showing that as $N$ increases, the component of the Gibbs sampler for sampling the amplitude of the prior variance becomes increasingly slower. We discuss a reparametrization of the prior variance that is robust with respect to the increase in dimension; we give numerical experiments which exhibit that our reparametrization prevents the slowing down. Our intuition on the behaviour of the prior hyper-parameter, with and without reparametrization, is sufficiently general to include a broad class of nonlinear inverse problems as well as other families of hyper-priors.
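
    To see the slow-down concretely, consider the simplest toy instance of the centered hierarchical model (identity forward map, identity prior covariance), where both Gibbs conditionals are conjugate. In this sketch, which is illustrative only, the amplitude's conditional is inverse-gamma with shape growing like N/2, so its relative moves shrink as N grows; that is the deterioration analysed above. The reparametrized sampler is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def centered_gibbs(y, sigma2, alpha, beta, n_iter):
    """Gibbs sampler for y = u + noise, u | delta ~ N(0, delta I),
    delta ~ InvGamma(alpha, beta), noise ~ N(0, sigma2 I)."""
    N = len(y)
    delta, trace = 1.0, np.empty(n_iter)
    for n in range(n_iter):
        # u | delta, y : Gaussian (conjugate)
        var = delta * sigma2 / (delta + sigma2)
        u = delta / (delta + sigma2) * y + np.sqrt(var) * rng.standard_normal(N)
        # delta | u : InvGamma(alpha + N/2, beta + |u|^2/2); concentrates in N
        delta = 1.0 / rng.gamma(alpha + N / 2, 1.0 / (beta + u @ u / 2))
        trace[n] = delta
    return trace

# The typical move of the delta-chain shrinks as N grows.
for N in (10, 100, 1000):
    y = np.sqrt(1.0 + 0.1) * rng.standard_normal(N)  # delta_true = 1, sigma2 = 0.1
    trace = centered_gibbs(y, sigma2=0.1, alpha=1.0, beta=1.0, n_iter=2000)
    print(N, np.std(np.diff(trace)))
```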

    Efficient MCMC and posterior consistency for Bayesian inverse problems

    Many mathematical models used in science and technology contain parameters that are not known a priori. In order to match a model to a physical phenomenon, the parameters have to be adapted on the basis of the available data. One of the most important statistical frameworks for inverse problems is the Bayesian approach, which models the a priori and a posteriori uncertainty through probability distributions, called the prior and posterior, respectively. However, computational methods such as Markov Chain Monte Carlo (MCMC) have to be used because these probability measures are only given implicitly. This thesis deals with two major tasks in the area of Bayesian inverse problems: the improvement of the computational methods, in particular different kinds of MCMC algorithms, and the properties of the Bayesian approach to inverse problems such as posterior consistency. In inverse problems, the unknown parameters are often functions and therefore elements of infinite-dimensional spaces. For this reason, we have to discretise the underlying problem in order to apply MCMC methods to it. Finer discretisations lead to a higher-dimensional state space and usually to a slower convergence rate of the Markov chain. We study these convergence rates rigorously and show how they deteriorate for standard methods. Moreover, we prove that slightly modified methods exhibit dimension-independent performance, constituting one of the first dimension-independent convergence results for locally moving MCMC algorithms. The second part of the thesis concerns numerical and analytical investigations of the posterior based on artificially generated data corresponding to a true set of parameters. In particular, we study the behaviour of the posterior as the amount of data increases or the noise in the data decreases. Posterior consistency describes the phenomenon that a sequence of posteriors concentrates around the truth. In this thesis, we present one of the first posterior consistency results for non-linear infinite-dimensional inverse problems. We also study a multiscale elliptic inverse problem in detail. In particular, we show that it is not posterior consistent, but that the posterior concentrates around a manifold.
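
    A quick numerical illustration of the dimension effect described above, under simplifying assumptions: the target is taken to be the Gaussian prior itself, so a prior-preserving pCN-type proposal accepts every move, while the acceptance rate of the standard random walk collapses as the dimension d grows. All choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def acceptance_rate(d, beta, use_pcn, n_iter=4000):
    """Average MH acceptance rate for the target N(0, C), C = diag(k^-2)."""
    lam = 1.0 / np.arange(1, d + 1) ** 2       # prior covariance eigenvalues
    log_prior = lambda x: -0.5 * np.sum(x ** 2 / lam)
    u, acc = np.sqrt(lam) * rng.standard_normal(d), 0
    for _ in range(n_iter):
        xi = np.sqrt(lam) * rng.standard_normal(d)
        if use_pcn:
            v = np.sqrt(1.0 - beta ** 2) * u + beta * xi
            log_a = 0.0                        # prior-preserving: always accept
        else:
            v = u + beta * xi                  # standard random walk
            log_a = log_prior(v) - log_prior(u)
        if np.log(rng.random()) < log_a:
            u, acc = v, acc + 1
    return acc / n_iter

for d in (10, 100, 1000):
    print(d, acceptance_rate(d, 0.2, False), acceptance_rate(d, 0.2, True))
```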

    A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations

    Many scientific and engineering problems require performing Bayesian inference in function spaces, in which the unknowns are of infinite dimension. In such problems, choosing an appropriate prior distribution is an important task. In particular, we consider problems where the function to infer is subject to sharp jumps which render the commonly used Gaussian measures unsuitable. On the other hand, the so-called total variation (TV) prior can only be defined in a finite-dimensional setting, and does not lead to a well-defined posterior measure in function spaces. In this work we present a TV-Gaussian (TG) prior to address such problems, where the TV term is used to detect sharp jumps of the function, and the Gaussian distribution is used as a reference measure so that it results in a well-defined posterior measure in the function space. We also present an efficient Markov Chain Monte Carlo (MCMC) algorithm to draw samples from the posterior distribution of the TG prior. With numerical examples we demonstrate the performance of the TG prior and the efficiency of the proposed MCMC algorithm.
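
    One natural sampler for the TG posterior, sketched below under stated assumptions, combines a pCN-type proposal that preserves the Gaussian reference measure with an acceptance ratio containing both the likelihood and the TV penalty. The discrete TV term, the white-noise reference covariance, and all parameter choices are illustrative and need not match the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def tg_pcn(phi, C_sqrt, lam, beta, n_iter):
    """pCN-type sampler for the TV-Gaussian posterior, whose density with
    respect to the Gaussian reference N(0, C) is proportional to
    exp(-phi(u) - lam * TV(u)).  The proposal preserves N(0, C), so the
    likelihood and the TV penalty enter only the acceptance ratio."""
    tv = lambda u: np.sum(np.abs(np.diff(u)))  # discrete total variation
    d = C_sqrt.shape[0]
    u = C_sqrt @ rng.standard_normal(d)
    chain = np.empty((n_iter, d))
    for n in range(n_iter):
        v = np.sqrt(1.0 - beta ** 2) * u + beta * (C_sqrt @ rng.standard_normal(d))
        log_a = (phi(u) + lam * tv(u)) - (phi(v) + lam * tv(v))
        if np.log(rng.random()) < log_a:
            u = v
        chain[n] = u
    return chain

# Toy example: noisy observation of a step function on a grid.
d = 64
truth = np.where(np.arange(d) < d // 2, 0.0, 1.0)
y = truth + 0.1 * rng.standard_normal(d)
C_sqrt = np.eye(d)                             # white-noise reference, toy choice
phi = lambda u: np.sum((y - u) ** 2) / (2 * 0.1 ** 2)
chain = tg_pcn(phi, C_sqrt, lam=20.0, beta=0.1, n_iter=5000)
```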