Maximum-a-posteriori estimation with Bayesian confidence regions
Solutions to inverse problems that are ill-conditioned or ill-posed may have
significant intrinsic uncertainty. Unfortunately, analysing and quantifying
this uncertainty is very challenging, particularly in high-dimensional
problems. As a result, while most modern mathematical imaging methods produce
impressive point estimation results, they are generally unable to quantify the
uncertainty in the solutions delivered. This paper presents a new general
methodology for approximating Bayesian high-posterior-density credibility
regions in inverse problems that are convex and potentially very
high-dimensional. The approximations are derived by using recent concentration
of measure results related to information theory for log-concave random
vectors. A remarkable property of the approximations is that they can be
computed very efficiently, even in large-scale problems, by using standard
convex optimisation techniques. In particular, they are available as a
by-product in problems solved by maximum-a-posteriori estimation. The
approximations also have favourable theoretical properties, namely they
outer-bound the true high-posterior-density credibility regions, and they are
stable with respect to model dimension. The proposed methodology is illustrated
on two high-dimensional imaging inverse problems related to tomographic
reconstruction and sparse deconvolution, where the approximations are used to
perform Bayesian hypothesis tests and explore the uncertainty about the
solutions, and where proximal Markov chain Monte Carlo algorithms are used as a benchmark to compute exact credible regions and measure the approximation error.
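As a concrete illustration of how such a region can be used once a MAP estimate is available, the sketch below tests whether a candidate image lies in the approximate highest-posterior-density region. The threshold form gamma_alpha = f(x_MAP) + sqrt(N)*tau_alpha + N with tau_alpha = sqrt(16*log(3/alpha)) follows the paper's concentration-of-measure bound, but the negative log-posterior f, the images, and the parameter choices here are illustrative assumptions.

```python
import numpy as np

def hpd_threshold(f_map: float, n: int, alpha: float = 0.05) -> float:
    """Conservative HPD threshold gamma_alpha = f(x_MAP) + sqrt(N)*tau_alpha + N,
    with tau_alpha = sqrt(16 * log(3/alpha)), per the concentration bound
    (valid for alpha in (4*exp(-N/3), 1))."""
    tau_alpha = np.sqrt(16.0 * np.log(3.0 / alpha))
    return f_map + np.sqrt(n) * tau_alpha + n

def in_approx_hpd(f, x_map: np.ndarray, x: np.ndarray, alpha: float = 0.05) -> bool:
    """Check whether x lies in the approximate (1 - alpha) HPD credibility region
    {x : f(x) <= gamma_alpha}, where f is the convex negative log-posterior
    (up to an additive constant shared by both evaluations)."""
    return f(x) <= hpd_threshold(f(x_map), x_map.size, alpha)
```

A Bayesian hypothesis test of the kind mentioned above then amounts to modifying the MAP estimate (e.g., removing a structure of interest) and checking whether the modified image still lies in the region; if it does not, the structure is supported by the data at level 1 - alpha.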
Recent Progress in Image Deblurring
This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, poor lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle ill-posedness, a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel spatially variant and hard to estimate. This review aims to provide a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, are also presented.
Comment: 53 pages, 17 figures
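To make the non-blind setting concrete: with a known kernel k and an observation b = k * x + n, a classical baseline tames the ill-posedness with Tikhonov (L2) regularization, which has a closed-form solution in the Fourier domain. The sketch below illustrates this baseline only; it is not a specific method from the review, and the periodic-boundary assumption and regularization weight are illustrative choices.

```python
import numpy as np

def tikhonov_deblur(blurry: np.ndarray, kernel: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Non-blind deblurring: argmin_x ||k * x - b||^2 + lam * ||x||^2,
    solved in closed form with the FFT (assumes periodic boundaries)."""
    # Embed the kernel in an image-sized array, centred at the origin.
    k = np.zeros_like(blurry, dtype=float)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K, B = np.fft.fft2(k), np.fft.fft2(blurry)
    # Regularized inverse filter: conj(K) * B / (|K|^2 + lam).
    return np.real(np.fft.ifft2(np.conj(K) * B / (np.abs(K) ** 2 + lam)))
```

Blind methods must additionally estimate k, typically by alternating updates of the kernel and the latent image, which is exactly where the difficulties with spatially variant, hard-to-obtain kernels noted above arise.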
Efficient variational inference in large-scale Bayesian compressed sensing
We study linear models under heavy-tailed priors from a probabilistic
viewpoint. Instead of computing a single sparse maximum-a-posteriori (MAP) solution as
in standard deterministic approaches, the focus in the Bayesian compressed
sensing framework shifts towards capturing the full posterior distribution on
the latent variables, which allows quantifying the estimation uncertainty and
learning model parameters using maximum likelihood. The exact posterior
distribution under the sparse linear model is intractable and we concentrate on
variational Bayesian techniques to approximate it. Repeatedly computing
Gaussian variances turns out to be a key requirement and constitutes the main computational bottleneck in applying variational techniques to large-scale problems. We build on the recently proposed Perturb-and-MAP algorithm for drawing exact samples from Gaussian Markov random fields (GMRFs). The main
technical contribution of our paper is to show that estimating Gaussian
variances using a relatively small number of such efficiently drawn random
samples is much more effective than alternative general-purpose variance
estimation techniques. By reducing the problem of variance estimation to
standard optimization primitives, the resulting variational algorithms are
fully scalable and parallelizable, allowing Bayesian computations in extremely
large-scale problems with the same memory and time complexity requirements as
conventional point estimation techniques. We illustrate these ideas with
experiments in image deblurring.
Comment: 8 pages, 3 figures; appears in Proc. IEEE Workshop on Information Theory in Computer Vision and Pattern Recognition (in conjunction with ICCV-11), Barcelona, Spain, Nov. 2011
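A minimal sketch of the variance-estimation idea, under assumptions: a zero-mean GMRF whose precision has the common least-squares form A = G'G/sigma2 + lam*D'D, with illustrative matrices and a generic conjugate-gradient solver rather than the paper's exact setup. Perturb-and-MAP draws an exact Gaussian sample by solving one randomly perturbed MAP-like quadratic problem, and the marginal variances diag(A^{-1}) are then estimated from the empirical variance of a handful of such samples.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def pm_marginal_variances(G, D, sigma2, lam, n, num_samples=20, seed=0):
    """Estimate diag(A^{-1}) for the GMRF precision A = G'G/sigma2 + lam*D'D
    via Perturb-and-MAP: each exact sample x ~ N(0, A^{-1}) is obtained by
    solving A x = G'z1/sqrt(sigma2) + sqrt(lam)*D'z2 with z1, z2 ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    A = LinearOperator((n, n),
                       matvec=lambda x: G.T @ (G @ x) / sigma2
                                        + lam * (D.T @ (D @ x)))
    samples = np.empty((num_samples, n))
    for s in range(num_samples):
        rhs = (G.T @ rng.standard_normal(G.shape[0]) / np.sqrt(sigma2)
               + np.sqrt(lam) * (D.T @ rng.standard_normal(D.shape[0])))
        samples[s], _ = cg(A, rhs)  # one CG solve = one exact posterior sample
    return samples.var(axis=0, ddof=1)  # unbiased Monte Carlo variance estimate
```

Each sample costs one linear solve, the same optimization primitive as MAP estimation, which is what makes the approach scale; the abstract's point is that a relatively small number of such samples already beats general-purpose variance estimators in the variational loop.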