
    Bayesian inverse problems with partial observations

    We study a nonparametric Bayesian approach to linear inverse problems under discrete observations. We use the discrete Fourier transform to convert our model into a truncated Gaussian sequence model, which is closely related to the classical Gaussian sequence model. Upon placing the truncated series prior on the unknown parameter, we show that as the number of observations $n \to \infty$, the corresponding posterior distribution contracts around the true parameter at a rate depending on the smoothness of the true parameter and the prior, and on the degree of ill-posedness of the problem. Correct combinations of these values lead to optimal posterior contraction rates (up to logarithmic factors). Similarly, the frequentist coverage of Bayesian credible sets is shown to depend on the combination of the smoothness of the true parameter and the prior, and on the ill-posedness of the problem. Oversmoothing priors lead to zero coverage, while undersmoothing priors produce highly conservative results. Finally, we illustrate our theoretical results with numerical examples.
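
    A minimal numerical sketch of this setting (not the paper's code): assume a mildly ill-posed diagonal operator $\kappa_k = k^{-p}$, a truth of Sobolev-type smoothness $\beta$, and a Gaussian series prior of smoothness $\alpha$, all illustrative choices. The conjugate posterior is computed coordinatewise, and its error around the truth shrinks as $n$ grows.

```python
import numpy as np

# Minimal sketch (not the paper's code): conjugate posterior in a truncated
# Gaussian sequence model Y_k = kappa_k * theta_k + Z_k / sqrt(n).
# Assumptions: kappa_k = k^{-p} (mildly ill-posed), truth theta_k of
# Sobolev-type smoothness beta, prior theta_k ~ N(0, k^{-(1 + 2*alpha)}).

rng = np.random.default_rng(0)
p, beta, alpha = 1.0, 1.0, 1.0           # ill-posedness, truth/prior smoothness

for n in [10**2, 10**4, 10**6]:
    k = np.arange(1, n + 1)
    kappa = k**-p
    theta0 = k**-(beta + 0.5)            # true coefficients (boundary of the beta-ball)
    lam = k**-(1 + 2 * alpha)            # prior variances
    y = kappa * theta0 + rng.standard_normal(n) / np.sqrt(n)

    # Coordinatewise Gaussian conjugacy.
    post_var = lam / (1 + n * kappa**2 * lam)
    post_mean = n * kappa * lam * y / (1 + n * kappa**2 * lam)

    err = np.sqrt(np.sum((post_mean - theta0)**2))
    cover = np.mean(np.abs(post_mean - theta0) <= 1.96 * np.sqrt(post_var))
    print(f"n={n:>7}: L2 error {err:.4f}, coordinatewise 95% coverage {cover:.2f}")
```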

    Bayesian inverse problems with Gaussian priors

    The posterior distribution in a nonparametric inverse problem is shown to contract to the true parameter at a rate that depends on the smoothness of the parameter, and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the minimax rate. The frequentist coverage of credible sets is shown to depend on the combination of prior and true parameter, with smoother priors leading to zero coverage and rougher priors to conservative coverage. In the latter case credible sets are of the correct order of magnitude. The results are numerically illustrated by the problem of recovering a function from observation of a noisy version of its primitive. (Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics at http://dx.doi.org/10.1214/11-AOS920.)
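
    The primitive-recovery illustration mentioned at the end admits a short conjugate-Gaussian sketch. Everything below (grid size, squared-exponential prior covariance, noise level, test function) is an assumed stand-in for the paper's setup:

```python
import numpy as np

# Minimal sketch (assumed discretization, not the paper's code): recover f
# from a noisy observation of its primitive (Kf)(t) = integral_0^t f(s) ds,
# using a conjugate Gaussian prior on the grid values of f.

rng = np.random.default_rng(1)
m = 200
t = np.linspace(0, 1, m)
dx = t[1] - t[0]
K = np.tril(np.ones((m, m))) * dx        # discretized integration operator

f_true = np.sin(2 * np.pi * t)           # hypothetical truth
sigma = 0.01
y = K @ f_true + sigma * rng.standard_normal(m)

# Squared-exponential prior covariance as an assumed smoothness model.
ell = 0.1
Sigma = np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2)

# Gaussian conjugacy: posterior mean of f given y.
G = K @ Sigma @ K.T + sigma**2 * np.eye(m)
f_post = Sigma @ K.T @ np.linalg.solve(G, y)

print("relative L2 error:", np.linalg.norm(f_post - f_true) / np.linalg.norm(f_true))
```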

    Bayesian linear inverse problems in regularity scales

    We obtain rates of contraction of posterior distributions in inverse problems defined by scales of smoothness classes. We derive abstract results for general priors, with contraction rates determined by Galerkin approximation: the rate depends on the amount of prior concentration near the true function and on the prior mass of functions with inferior Galerkin approximation. We apply the general result to non-conjugate series priors, showing that these give near-optimal and adaptive recovery in some generality, and to Gaussian priors and mixtures of Gaussian priors, where the latter are also shown to be near optimal and adaptive. The proofs are based on general testing and approximation arguments, without explicit calculations on the posterior distribution; we are thus not restricted to priors based on the singular value decomposition of the operator. We illustrate the results with examples of inverse problems arising from differential equations.
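
    The role of Galerkin approximation can be illustrated with a small sketch: project the unknown onto the first $J$ basis functions and solve the resulting finite-dimensional least-squares problem, so that the attainable error tracks how well the truth is approximated in the span. The operator, sine basis, and truth below are assumed choices, not the paper's construction:

```python
import numpy as np

# Illustrative sketch: the attainable error of the projected (Galerkin-type)
# solution is driven by how well the truth is captured in the span of the
# first J basis functions. Noiseless Volterra-type problem K f = y.

m = 400
t = np.linspace(0, 1, m, endpoint=False) + 0.5 / m   # midpoint grid
dx = 1.0 / m
K = np.tril(np.ones((m, m))) * dx        # discretized integration operator
f_true = t * (1 - t)                     # smooth truth, zero at the endpoints
y = K @ f_true

for J in [2, 4, 8, 16, 32]:
    js = np.arange(1, J + 1)
    VJ = np.sqrt(2) * np.sin(np.pi * np.outer(t, js))   # first J sine modes
    c, *_ = np.linalg.lstsq(K @ VJ, y, rcond=None)      # projected problem
    err = np.linalg.norm(VJ @ c - f_true) * np.sqrt(dx)
    print(f"J={J:>2}: approximation error = {err:.2e}")
```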

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis under perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; and (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
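
    Point (iii) can be made concrete with a small sketch of forward-backward splitting for the $\ell^1$ (sparsity) regularizer, i.e. ISTA for the lasso. Problem sizes, the regularization weight, and the step-size rule below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of forward-backward proximal splitting for the l1-regularized
# least-squares problem  min_x 0.5*||A x - y||^2 + lam*||x||_1  (ISTA):
# a gradient step on the smooth data term followed by the proximal map of the
# nonsmooth prior (soft-thresholding).

rng = np.random.default_rng(2)
n_obs, n_dim, n_nonzero = 80, 200, 8
A = rng.standard_normal((n_obs, n_dim)) / np.sqrt(n_obs)
x_true = np.zeros(n_dim)
x_true[rng.choice(n_dim, n_nonzero, replace=False)] = rng.standard_normal(n_nonzero)
y = A @ x_true + 0.01 * rng.standard_normal(n_obs)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n_dim)
for _ in range(500):
    grad = A.T @ (A @ x - y)                           # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)    # backward (proximal) step

print("nonzeros: true", n_nonzero, "| estimate", int(np.count_nonzero(x)))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```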

    Regularizing priors for linear inverse problems

    We consider statistical linear inverse problems in Hilbert spaces of the type $\hat{Y} = Kx + U$, where we want to estimate the function $x$ from indirect noisy functional observations $\hat{Y}$. In several applications the operator $K$ has an inverse that is not continuous on the whole space of reference; this phenomenon is known as ill-posedness of the inverse problem. We use a Bayesian approach and a conjugate Gaussian model. For a very general specification of the probability model the posterior distribution of $x$ is known to be inconsistent in a frequentist sense. Our first contribution consists in constructing a class of Gaussian prior distributions on $x$ that shrink with the measurement error $U$, and we show that, under mild conditions, the corresponding posterior distribution is consistent in a frequentist sense and converges at the optimal rate of contraction. Then a class of posterior mean estimators for $x$ is given. We propose an empirical Bayes procedure for selecting an estimator in this class that mimics the posterior mean with the smallest risk on the true $x$.
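
    A finite-dimensional sketch of this program, with marginal-likelihood maximization standing in for the paper's empirical Bayes risk criterion; the operator, base prior covariance, noise level, and scale grid are all assumed:

```python
import numpy as np

# Finite-dimensional sketch (assumed stand-in for the Hilbert-space model):
# Y = K x + U with Gaussian noise, Gaussian prior N(0, s^2 * Sigma0) on x.
# The prior scale s is picked from a grid by maximizing the marginal
# likelihood of Y, as a stand-in for the paper's empirical Bayes criterion.

rng = np.random.default_rng(3)
m = 150
t = np.linspace(0, 1, m)
K = np.tril(np.ones((m, m))) / m          # smoothing (integration) operator
x_true = np.sin(3 * np.pi * t)
sigma = 0.005
y = K @ x_true + sigma * rng.standard_normal(m)

Sigma0 = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2)  # base prior cov

def posterior_mean_and_logml(s):
    C = s**2 * K @ Sigma0 @ K.T + sigma**2 * np.eye(m)   # marginal cov of Y
    _, logdet = np.linalg.slogdet(C)
    logml = -0.5 * (y @ np.linalg.solve(C, y) + logdet)  # log marginal likelihood
    xhat = s**2 * Sigma0 @ K.T @ np.linalg.solve(C, y)   # posterior mean
    return xhat, logml

grid = [0.01, 0.1, 1.0, 10.0]
xhat, _, s_star = max((posterior_mean_and_logml(s) + (s,) for s in grid),
                      key=lambda r: r[1])
print("selected prior scale:", s_star)
print("relative error:", np.linalg.norm(xhat - x_true) / np.linalg.norm(x_true))
```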