Regularizing priors for linear inverse problems
We consider statistical linear inverse problems in Hilbert spaces of the type Ŷ = Kx + U, where we want to estimate the function x from indirect, noisy functional observations Ŷ. In several applications the operator K has an inverse that is not continuous on the whole space of reference; this phenomenon is known as ill-posedness of the inverse problem. We use a Bayesian approach and a conjugate-Gaussian model. For a very general specification of the probability model, the posterior distribution of x is known to be inconsistent in a frequentist sense. Our first contribution is the construction of a class of Gaussian prior distributions on x that shrink with the measurement error U; we show that, under mild conditions, the corresponding posterior distribution is consistent in a frequentist sense and contracts at the optimal rate. Then a class of posterior mean estimators x̂ for x is given. We propose an empirical Bayes procedure for selecting an estimator in this class that mimics the posterior mean with the smallest risk at the true x.
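The conjugate-Gaussian machinery described above can be sketched in a discretized sequence-space model (obtained, e.g., from an SVD of K). Everything below — the singular-value decay, the true function, and the exponent governing how the prior shrinks with the noise level — is an illustrative assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sequence-space version of Y = K x + U after an SVD of K:
#   y_i = b_i * x_i + sigma * u_i,
# with polynomially decaying singular values b_i (mildly ill-posed).
n = 200
i = np.arange(1, n + 1)
b = i ** -1.0                      # hypothetical singular values of K
x_true = i ** -1.5 * np.sin(i)     # hypothetical smooth "true" function
sigma = 0.01                       # measurement-error level

y = b * x_true + sigma * rng.standard_normal(n)

# Gaussian prior x_i ~ N(0, tau2_i) whose scale shrinks with sigma,
# mimicking (in spirit) the paper's noise-dependent prior.
tau2 = sigma * i ** -1.0

# Conjugate posterior mean: componentwise linear shrinkage of y.
x_post = (tau2 * b) / (b ** 2 * tau2 + sigma ** 2) * y

rel_err = np.linalg.norm(x_post - x_true) / np.linalg.norm(x_true)
print("relative L2 error:", rel_err)
```

Because the model is conjugate, the posterior mean is available in closed form; the shrinkage factor tends to 1/b_i on low frequencies (signal dominates) and to 0 on high frequencies (noise dominates), which is the mechanism behind the contraction-rate results.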
Bayesian linear inverse problems in regularity scales
We obtain rates of contraction of posterior distributions in inverse problems
defined by scales of smoothness classes. We derive abstract results for general
priors, with contraction rates determined by Galerkin approximation. The rate
depends on the amount of prior concentration near the true function and the
prior mass of functions with inferior Galerkin approximation. We apply the
general result to non-conjugate series priors, showing that these priors give
near optimal and adaptive recovery in some generality, Gaussian priors, and
mixtures of Gaussian priors, where the latter are also shown to be near optimal
and adaptive. The proofs are based on general testing and approximation
arguments, without explicit calculations on the posterior distribution. We are
thus not restricted to priors based on the singular value decomposition of the
operator. We illustrate the results with examples of inverse problems resulting
from differential equations.
Comment: 34 pages
Adaptive complexity regularization for linear inverse problems
We tackle the problem of building adaptive estimation procedures for
ill-posed inverse problems. For general regularization methods depending on
tuning parameters, we construct a penalized method that selects the optimal
smoothing sequence without prior knowledge of the regularity of the function to
be estimated. We provide for such estimators oracle inequalities and optimal
rates of convergence. This penalized approach is applied to Tikhonov
regularization and to regularization by projection.
Comment: Published at http://dx.doi.org/10.1214/07-EJS115 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org)
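The Tikhonov case can be illustrated with a penalized, data-driven choice of the smoothing parameter. The forward operator, grid of parameters, and Mallows-C_p-style penalty below are illustrative stand-ins for the paper's abstract framework, not its actual construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discretization of an ill-posed problem: K is a smoothing
# (integration-like) matrix, so inverting it amplifies noise.
n = 100
K = np.tril(np.ones((n, n))) / n          # cumulative-sum operator
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t)            # hypothetical true function
sigma = 0.01
y = K @ x_true + sigma * rng.standard_normal(n)

def tikhonov(lam):
    """Tikhonov estimator x_lam = (K^T K + lam I)^{-1} K^T y."""
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

def penalized_risk(lam):
    """Residual norm plus a 2*sigma^2*trace(hat matrix) penalty
    (effective degrees of freedom), a C_p-style criterion chosen here
    to mimic penalized model selection in spirit."""
    H = K @ np.linalg.solve(K.T @ K + lam * np.eye(n), K.T)
    x_hat = tikhonov(lam)
    return np.sum((K @ x_hat - y) ** 2) + 2 * sigma ** 2 * np.trace(H)

lams = np.logspace(-8, 0, 25)
lam_star = min(lams, key=penalized_risk)
x_hat = tikhonov(lam_star)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("selected lambda:", lam_star, "relative error:", rel_err)
```

The selected parameter balances the residual (which always prefers less smoothing) against the effective degrees of freedom (which penalize it), without using any knowledge of the regularity of x_true — the adaptivity property the abstract emphasizes.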
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory is a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ²-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
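For the sparsity prior, the forward-backward scheme mentioned in (iii) reduces to the classical proximal-gradient (ISTA) iteration. The synthetic compressed-sensing instance and all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward-backward splitting for the l1-regularized least squares problem
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
# on a hypothetical sparse-recovery instance.
m, n, k = 60, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
step = 1.0 / L

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the backward step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                         # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("support size:", np.count_nonzero(x), "relative error:", rel_err)
```

The iterate typically lands on a low-dimensional model (here, a small support) after finitely many steps — the manifold-identification behavior that the partial-smoothness framework makes precise for the whole family of regularizers surveyed in the chapter.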