Bregman Cost for Non-Gaussian Noise
One of the tasks in Bayesian inverse problems is to find a good estimate
based on the posterior probability density. The most common point estimators
are the conditional mean (CM) and maximum a posteriori (MAP) estimates, which
correspond to the mean and the mode of the posterior, respectively. From a
theoretical point of view, it has been argued that the MAP estimate is a Bayes
estimator for the uniform cost function only in an asymptotic sense, while the
CM estimate is a Bayes estimator for the mean squared cost function. Recently,
it has been proven that the MAP estimate is a proper Bayes estimator for the
Bregman cost if the image is corrupted by Gaussian noise. In this work we
extend this result to other noise models with log-concave likelihood density,
by introducing two related Bregman cost functions for which the CM and the MAP
estimates are proper Bayes estimators. Moreover, we also prove that the CM
estimate outperforms the MAP estimate when the error is measured in a certain
Bregman distance, a result that was previously unknown even in the case of
additive Gaussian noise.
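For reference, the Bregman distance of a convex functional \Psi, and what it means to be a Bayes estimator, can be stated as follows (standard definitions, not quoted from the paper):

    D_\Psi(u, v) = \Psi(u) - \Psi(v) - \langle p, u - v \rangle, \qquad p \in \partial \Psi(v),

and an estimator \hat{u} is a Bayes estimator for a cost function c if it minimizes the posterior expected cost,

    \hat{u}(y) = \arg\min_u \, \mathbb{E}[\, c(u, x) \mid y \,].

The two cost functions introduced in the paper are of this Bregman type; their precise forms depend on the likelihood and prior and should be taken from the text.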
Tomographic Reconstruction Methods for Decomposing Directional Components
Decomposition of tomographic reconstructions has many practical
applications. We propose two new reconstruction methods that combine the task
of tomographic reconstruction with object decomposition. We demonstrate these
reconstruction methods in the context of decomposing directional objects into
various directional components. Furthermore, we propose a method for estimating
the main direction in a directional object, directly from the measured computed
tomography data. We demonstrate all the proposed methods on simulated and real
samples to show their practical applicability. The numerical tests show that
decomposition and reconstruction can be combined to achieve a highly useful
fibre-crack decomposition.
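Joint reconstruction-decomposition models of this kind typically take the following generic variational form (our illustrative sketch; the paper's concrete regularizers and constraints may differ):

    \min_{u_1, \ldots, u_K} \frac{1}{2} \Big\| A \Big( \sum_{k=1}^{K} u_k \Big) - b \Big\|_2^2 + \sum_{k=1}^{K} \lambda_k R_k(u_k),

where A is the computed tomography forward operator, b the measured data, u_k the directional components (e.g., fibres and cracks), R_k a regularizer promoting structure along the k-th direction, and \lambda_k > 0 weights.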
Contrast Invariant SNR
We design an image quality measure independent of local contrast changes, which serve as simple models of illumination changes. Given two images, the algorithm provides the image closest to the first one that has the component tree of the second. This problem can be cast as a specific convex program called isotonic regression. We provide a few analytic properties of the solutions to this problem. We also design a tailored first-order optimization procedure together with a full complexity analysis. The proposed method turns out to be practically more efficient and reliable than the best existing algorithms, which are based on interior-point methods. The algorithm has potential applications in change detection, color image processing, and image fusion. A Matlab implementation is available at http://www.math.univ-toulouse.fr/~weiss/PageCodes.html
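Isotonic regression here amounts to a Euclidean projection onto order constraints: minimize \|x - y\|_2^2 subject to x_i \le x_j for the pairs (i, j) induced by the component tree. As a minimal illustration of the problem class, below is the classical pool-adjacent-violators algorithm for the 1-D (chain-order) special case in Python; the paper's tree-ordered problem and tailored first-order solver are more general:

    import numpy as np

    def pav(y):
        # Pool Adjacent Violators: solves  min_x ||x - y||^2  s.t.  x[0] <= ... <= x[-1]
        blocks = []  # each block is [sum, count]; its fitted value is sum/count
        for v in y:
            blocks.append([float(v), 1])
            # merge backwards while the block means violate monotonicity
            while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
                s, c = blocks.pop()
                blocks[-1][0] += s
                blocks[-1][1] += c
        return np.concatenate([np.full(c, s / c) for s, c in blocks])

    print(pav(np.array([3.0, 1.0, 2.0, 5.0, 4.0])))  # -> [2.  2.  2.  4.5 4.5]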
Fixing Nonconvergence of Algebraic Iterative Reconstruction with an Unmatched Backprojector
We consider algebraic iterative reconstruction methods with applications in
image reconstruction. In particular, we are concerned with methods based on an
unmatched projector/backprojector pair; i.e., the backprojector is not the
exact adjoint or transpose of the forward projector. Such situations are common
in large-scale computed tomography, and we consider the case where the method
does not converge due to the nonsymmetry of the iteration matrix. We
propose a modified algorithm that incorporates a small shift parameter, and we
give the conditions that guarantee convergence of this method to a fixed point
of a slightly perturbed problem. We also give perturbation bounds for this
fixed point. Moreover, we discuss how to use Krylov subspace methods to
efficiently estimate the leftmost eigenvalue of a certain matrix to select a
proper shift parameter. The modified algorithm is illustrated with test
problems from computed tomography.
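The idea can be sketched in toy form as follows: the plain unmatched iteration x_{k+1} = x_k + \omega B(b - A x_k) is replaced by a version with a small shift \alpha, whose fixed point solves the slightly perturbed system (BA + \alpha I)x = Bb. This is our minimal reading of the scheme; the paper's exact algorithm, convergence conditions, and Krylov-based selection of \alpha from the leftmost eigenvalue of BA should be taken from the text:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 120, 80
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # toy stand-in for the forward projector
    B = A.T + 0.001 * rng.standard_normal((n, m))   # unmatched backprojector: B != A^T
    b = A @ rng.standard_normal(n)                  # toy measurement data

    alpha = 0.1                                     # small shift parameter (assumed value)
    omega = 1.0 / np.linalg.norm(B @ A + alpha * np.eye(n), 2)  # toy step size

    x = np.zeros(n)
    for _ in range(2000):
        # shifted iteration; its fixed point satisfies (B A + alpha I) x = B b
        x += omega * (B @ (b - A @ x) - alpha * x)

    x_ref = np.linalg.solve(B @ A + alpha * np.eye(n), B @ b)
    print("distance to shifted fixed point:", np.linalg.norm(x - x_ref))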
Horseshoe priors for edge-preserving linear Bayesian inversion
In many large-scale inverse problems, such as computed tomography and image
deblurring, characterization of sharp edges in the solution is desired. Within
the Bayesian approach to inverse problems, edge-preservation is often achieved
using Markov random field priors based on heavy-tailed distributions. Another
strategy, popular in statistics, is the application of hierarchical shrinkage
priors. An advantage of this formulation lies in expressing the prior as a
conditionally Gaussian distribution depending on global and local
hyperparameters, which are endowed with heavy-tailed hyperpriors. In this work,
we revisit the shrinkage horseshoe prior and introduce its formulation for
edge-preserving settings. We discuss a sampling framework based on the Gibbs
sampler to solve the resulting hierarchical formulation of the Bayesian inverse
problem. In particular, one of the conditional distributions is a
high-dimensional Gaussian, and the rest are derived in closed form by using a
scale mixture representation of the heavy-tailed hyperpriors. Applications from
imaging science show that our computational procedure is able to compute sharp
edge-preserving posterior point estimates with reduced uncertainty.
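A minimal sketch of such a Gibbs sampler for y = Ax + \varepsilon with a horseshoe prior placed directly on x, using the standard inverse-gamma auxiliary-variable parameterization of the half-Cauchy hyperpriors (a generic textbook variant; the paper's edge-preserving formulation instead places the prior on increments of x, and its conditionals differ accordingly):

    import numpy as np

    def inv_gamma(shape, scale, rng):
        # draw from InvGamma(shape, scale): density proportional to t^(-shape-1) exp(-scale/t)
        return scale / rng.gamma(shape, 1.0, size=np.shape(scale))

    def horseshoe_gibbs(A, y, sigma2, n_iter=1000, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        AtA, Aty = A.T @ A, A.T @ y
        lam2, tau2 = np.ones(n), 1.0    # local and global variances
        nu, xi = np.ones(n), 1.0        # auxiliary variables
        samples = np.empty((n_iter, n))
        for k in range(n_iter):
            # x | rest is Gaussian with covariance (A^T A / sigma2 + D^-1)^-1, D = tau2 * diag(lam2)
            Sigma = np.linalg.inv(AtA / sigma2 + np.diag(1.0 / (tau2 * lam2)))
            Sigma = 0.5 * (Sigma + Sigma.T)          # symmetrize for numerical safety
            x = rng.multivariate_normal(Sigma @ (Aty / sigma2), Sigma)
            # local scales: lam2_i | rest ~ InvGamma(1, 1/nu_i + x_i^2 / (2 tau2))
            lam2 = inv_gamma(1.0, 1.0 / nu + x**2 / (2.0 * tau2), rng)
            nu = inv_gamma(1.0, 1.0 + 1.0 / lam2, rng)
            # global scale: tau2 | rest ~ InvGamma((n+1)/2, 1/xi + sum_i x_i^2 / (2 lam2_i))
            tau2 = inv_gamma(0.5 * (n + 1), 1.0 / xi + np.sum(x**2 / (2.0 * lam2)), rng)
            xi = inv_gamma(1.0, 1.0 + 1.0 / tau2, rng)
            samples[k] = x
        return samples

    # toy usage: a small sparse signal recovered from noisy linear data
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 20))
    x_true = np.zeros(20); x_true[[3, 7]] = [2.0, -3.0]
    y = A @ x_true + 0.1 * rng.standard_normal(40)
    X = horseshoe_gibbs(A, y, sigma2=0.01, n_iter=1000)
    print(np.round(X[500:].mean(axis=0), 2))   # posterior mean after burn-in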
Sparse Bayesian Inference with Regularized Gaussian Distributions
Regularization is a common tool in variational inverse problems to impose
assumptions on the parameters of the problem. One such assumption is sparsity,
which is commonly promoted using lasso and total variation-like regularization.
Although the solutions to many such regularized inverse problems can be
considered as points of maximum probability of well-chosen posterior
distributions, samples from these distributions are generally not sparse. In
this paper, we present a framework for implicitly defining a probability
distribution that combines the effects of sparsity-imposing regularization with
Gaussian distributions. Unlike continuous distributions, these implicit
distributions can assign positive probability to sparse vectors. We study these
regularized distributions for various regularization functions including total
variation regularization and piecewise linear convex functions. We apply the
developed theory to uncertainty quantification for Bayesian linear inverse
problems and derive a Gibbs sampler for a Bayesian hierarchical model. To
illustrate the difference between our sparsity-inducing framework and
continuous distributions, we apply our framework to small-scale deblurring and
computed tomography examples.
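The mechanism can be illustrated with a tiny experiment (our illustrative sketch, reading the implicit distribution as the pushforward of a Gaussian through a regularized least-squares, i.e. proximal, map): pushing Gaussian samples through the lasso proximal operator (soft-thresholding) yields a distribution with positive probability of exact zeros, which no continuous distribution can have.

    import numpy as np

    def soft_threshold(w, lam):
        # proximal operator of lam * ||.||_1:  argmin_x 0.5 ||x - w||^2 + lam ||x||_1
        return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 1.0, size=100_000)   # samples from a standard Gaussian
    x = soft_threshold(w, lam=1.0)           # pushforward through the prox map

    # an atom at zero: mass P(|W| <= 1), about 0.68 for a standard Gaussian
    print("fraction exactly zero:", np.mean(x == 0.0))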