Bias-Reduction in Variational Regularization
The aim of this paper is to introduce and study a two-step debiasing method
for variational regularization. After solving the standard variational problem,
the key idea is to add a consecutive debiasing step minimizing the data
fidelity on an appropriate set, the so-called model manifold. The latter is
defined by Bregman distances or infimal convolutions thereof, using the
(uniquely defined) subgradient appearing in the optimality condition of the
variational method. For particular settings, such as anisotropic and
TV-type regularization, previously used debiasing techniques are shown to be
special cases. The proposed approach is however easily applicable to a wider
range of regularizations. The two-step debiasing is shown to be well-defined
and to optimally reduce bias in a certain setting.
In addition to visual and PSNR-based evaluations, different notions of bias
and variance decompositions are investigated in numerical studies. The
improvements offered by the proposed scheme are demonstrated and its
performance is shown to be comparable to optimal results obtained with Bregman
iterations.
Comment: Accepted by JMI
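A minimal sketch of the two-step idea for the ℓ1 case (illustrative only; the function names and parameters below are not from the paper): first solve the standard variational problem, then minimize the data fidelity on the model manifold, which for the ℓ1 prior reduces to a least-squares refit on the sign-consistent support of the regularized solution.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, alpha, n_iter=500):
    # Step 1: the standard variational problem
    #   min_x 0.5 * ||A x - y||^2 + alpha * ||x||_1
    # solved here by plain iterative soft-thresholding.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, alpha / L)
    return x

def debias(A, y, x_reg):
    # Step 2: minimize the data fidelity on the model manifold.  For the
    # ell_1 prior, the manifold defined by the Bregman distance reduces to
    # the (sign-consistent) support of the regularized solution, so the
    # debiasing step is a least-squares refit on that support.
    S = x_reg != 0
    x_db = np.zeros_like(x_reg)
    x_db[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    return x_db
```

The refit restores the magnitudes that the ℓ1 penalty systematically shrinks, which is exactly the bias the two-step scheme removes.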
Simultaneous use of Individual and Joint Regularization Terms in Compressive Sensing: Joint Reconstruction of Multi-Channel Multi-Contrast MRI Acquisitions
Purpose: A time-efficient strategy to acquire high-quality multi-contrast
images is to reconstruct undersampled data with joint regularization terms that
leverage common information across contrasts. However, these terms can cause
leakage of uncommon features among contrasts, compromising diagnostic utility.
The goal of this study is to develop a compressive sensing method for
multi-channel multi-contrast magnetic resonance imaging (MRI) that optimally
utilizes shared information while preventing feature leakage.
Theory: Joint regularization terms, group sparsity and colour total variation,
are used to exploit common features across images, while individual sparsity and
total variation are also used to prevent leakage of distinct features across
contrasts. The multi-channel multi-contrast reconstruction problem is solved
via a fast algorithm based on Alternating Direction Method of Multipliers.
Methods: The proposed method was compared against reconstruction with only
individual and with only joint regularization terms. Comparisons were performed
on single-channel simulated and multi-channel in-vivo datasets in terms of
reconstruction quality and neuroradiologist reader scores.
Results: The proposed method demonstrates rapid convergence and improved
image quality for both simulated and in-vivo datasets. Furthermore, while
reconstructions that solely use joint regularization terms are prone to
feature leakage, the proposed method reliably avoids leakage via
simultaneous use of joint and individual terms.
Conclusion: The proposed compressive sensing method performs fast
reconstruction of multi-channel multi-contrast MRI data with improved image
quality. It offers reliability against feature leakage in joint
reconstructions, thereby holding great promise for clinical use.
Comment: 13 pages, 13 figures. Submitted for possible publication
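The interplay between the two term families can be sketched by their proximal maps, which an ADMM scheme alternates with a data-consistency update (a hedged illustration; function names are hypothetical and this is not the paper's full algorithm): the individual prox lets a feature present in only one contrast survive on its own, while the joint prox couples the contrasts.

```python
import numpy as np

def prox_l1(X, t):
    # individual sparsity: soft-threshold every coefficient separately,
    # so a feature present in a single contrast can survive on its own
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_group(X, t):
    # joint (group) sparsity: shrink the magnitude of each coefficient
    # jointly across the contrast axis (rows = contrasts); this coupling
    # is what exploits shared structure but can also leak features
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

In the joint prox, a coefficient that is strong in one contrast keeps its neighbours in the other contrasts alive; combining both maps, as the abstract describes, tempers that effect.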
A combined first and second order variational approach for image reconstruction
In this paper we study a variational problem in the space of functions of
bounded Hessian. Our model constitutes a straightforward higher-order extension
of the well known ROF functional (total variation minimisation) to which we add
a non-smooth second order regulariser. It combines convex functions of the
total variation and the total variation of the first derivatives. In what
follows, we prove existence and uniqueness of minimisers of the combined model
and present the numerical solution of the corresponding discretised problem by
employing the split Bregman method. The paper is furnished with applications of
our model to image denoising, deblurring as well as image inpainting. The
obtained numerical results are compared with results obtained from total
generalised variation (TGV), infimal convolution and Euler's elastica, three
other state-of-the-art higher-order models. The numerical discussion confirms
that the proposed higher-order model competes with models of its kind in
avoiding the creation of undesirable artifacts and blocky-like structures in
the reconstructed images -- a known disadvantage of the ROF model -- while
being simple and numerically efficient to solve.
Comment: 34 pages, 89 figures
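A one-dimensional toy version of the combined model, solved with the split Bregman method as in the paper, can be written in a few lines (parameter values and function names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def denoise_tv12(f, a=0.3, b=0.05, mu=1.0, n_iter=200):
    # 1-D sketch of the combined first and second order model
    #   min_u 0.5||u - f||^2 + a||Du||_1 + b||D^2 u||_1
    # via split Bregman with auxiliary variables d1 ~ Du and d2 ~ D^2 u.
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # first-difference operator
    D2 = np.diff(np.eye(n), n=2, axis=0)  # second-difference operator
    M = np.eye(n) + mu * (D.T @ D + D2.T @ D2)
    u = f.copy()
    d1 = np.zeros(n - 1); b1 = np.zeros(n - 1)
    d2 = np.zeros(n - 2); b2 = np.zeros(n - 2)
    for _ in range(n_iter):
        # quadratic u-subproblem (dense solve; small 1-D example only)
        rhs = f + mu * (D.T @ (d1 - b1) + D2.T @ (d2 - b2))
        u = np.linalg.solve(M, rhs)
        # shrinkage subproblems for the two non-smooth terms
        d1 = shrink(D @ u + b1, a / mu)
        d2 = shrink(D2 @ u + b2, b / mu)
        # Bregman variable updates
        b1 += D @ u - d1
        b2 += D2 @ u - d2
    return u
```

The second-order term penalizes the total variation of the first derivative, which is what suppresses the blocky staircasing of plain ROF while the first-order term still preserves jumps.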
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ²-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
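As one concrete instance of the low-rank prior mentioned above, the proximal map of the nuclear norm soft-thresholds the singular values, the matrix-valued analogue of coordinate-wise soft-thresholding for ℓ1 (a minimal sketch; the function name is an assumption, not from the chapter):

```python
import numpy as np

def prox_nuclear(X, t):
    # proximal map of t * ||X||_* (nuclear norm): soft-threshold the
    # singular values.  Zeroing the small singular values is how the
    # prox "identifies" the low-dimensional model, here the manifold
    # of fixed-rank matrices, in line with partial smoothness.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```

Iterating a gradient step on the data fidelity followed by this prox is exactly the forward-backward scheme the review discusses, with the ℓ1 soft-thresholding swapped for its low-rank counterpart.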