Compressed Sensing MRI Reconstruction Regularized by VAEs with Structured Image Covariance
Objective: This paper investigates how generative models, trained on
ground-truth images, can be used as priors for inverse problems,
penalizing reconstructions far from images the generator can produce. The aim
is that learned regularization will provide complex data-driven priors to
inverse problems while still retaining the control and insight of a variational
regularization method. Moreover, unsupervised learning, without paired training
data, allows the learned regularizer to remain flexible to changes in the
forward problem such as noise level, sampling pattern or coil sensitivities in
MRI.
Approach: We utilize variational autoencoders (VAEs) that generate not only
an image but also a covariance uncertainty matrix for each image. The
covariance can model changing uncertainty dependencies caused by structure in
the image, such as edges or objects, and provides a new distance metric from
the manifold of learned images.
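To make the role of the structured covariance concrete, the following is a minimal PyTorch sketch of such a distance-to-manifold penalty. The networks decoder_mean and decoder_logvar are hypothetical stand-ins for a trained VAE decoder, and the covariance is simplified to a diagonal; the paper's construction admits richer image-structured covariances.

```python
# Minimal sketch (PyTorch) of a VAE-based generative regularizer with a
# diagonal covariance. `decoder_mean` and `decoder_logvar` are hypothetical
# stand-ins for the trained decoder of the paper's VAE.
import torch

def generative_regularizer(x, decoder_mean, decoder_logvar, z_dim=64,
                           steps=100, lr=1e-2):
    """Approximate R(x) = min_z ||x - mu(z)||^2_{Sigma(z)^{-1}} + ||z||^2."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mu = decoder_mean(z)        # image generated from latent z
        logvar = decoder_logvar(z)  # per-pixel log-variance for that image
        # Mahalanobis-type distance: pixels the decoder is uncertain about
        # (e.g. near edges) are down-weighted in the penalty.
        loss = (((x - mu) ** 2) * torch.exp(-logvar)).sum() + (z ** 2).sum()
        loss.backward()
        opt.step()
    return loss.detach()
```

Because the variance enters the weighting, edges and other structures the decoder is unsure of are penalized less, which is what distinguishes this metric from a plain Euclidean distance to the generator's range.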
Main results: We evaluate these novel generative regularizers on
retrospectively sub-sampled real-valued MRI measurements from the fastMRI
dataset. We compare our proposed learned regularization against other unlearned
regularization approaches and unsupervised and supervised deep learning
methods.
Significance: Our results show that the proposed method is competitive with
other state-of-the-art methods and behaves consistently with changing sampling
patterns and noise levels.
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of
such priors include sparsity and group sparsity (to capture the
compressibility of natural signals and images), total variation and analysis
sparsity (to promote piecewise regularity), and low rank (as a natural
extension of sparsity to matrix-valued data). Our aim is to provide a unified
treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell^2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
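As a concrete instance of both the sparsity prior and the forward-backward scheme discussed above, here is a minimal NumPy sketch of ISTA for the $\ell^1$-regularized least-squares problem; A, y and lam are placeholder problem data, and other low-complexity priors would swap in a different proximal operator.

```python
# Minimal sketch: forward-backward (proximal) splitting for the l1 prior,
# i.e. ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the fidelity gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                   # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step
    return x
```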
Solving ill-posed inverse problems using iterative deep neural networks
We propose a partially learned approach for the solution of ill-posed inverse
problems with not necessarily linear forward operators. The method builds on
ideas from classical regularization theory and recent advances in deep learning
to perform learning while making use of prior information about the inverse
problem encoded in the forward operator, noise model and a regularizing
functional. The method results in a gradient-like iterative scheme, where the
"gradient" component is learned using a convolutional network that includes the
gradients of the data discrepancy and regularizer as input in each iteration.
We present results of such a partially learned gradient scheme on a non-linear
tomographic inversion problem with simulated data from both the Shepp-Logan
phantom and a head CT scan. The outcome is compared against filtered
back-projection (FBP) and TV reconstruction; the proposed method provides a
5.4 dB PSNR improvement over the TV reconstruction while being significantly
faster, giving reconstructions of 512 x 512 images in about 0.4 seconds using
a single GPU.
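The structure of one such iteration can be sketched as follows in PyTorch. The network update_net is a hypothetical stand-in for the paper's convolutional network; the essential point is that the current iterate and the gradients of the data discrepancy and the regularizer are concatenated and fed to the network at each step.

```python
# Minimal sketch of one partially learned gradient iteration (PyTorch).
import torch
import torch.nn as nn

class LearnedGradientStep(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 input channels: iterate, gradient of the data discrepancy,
        # gradient of the regularizer (all single-channel images here)
        self.update_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x, grad_data, grad_reg):
        inp = torch.cat([x, grad_data, grad_reg], dim=1)
        return x + self.update_net(inp)  # gradient-like additive update
```

In the full scheme this step is applied over a fixed number of iterations, so the learned component plays the role of the "gradient" in a classical iterative method.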
Bias-Reduction in Variational Regularization
The aim of this paper is to introduce and study a two-step debiasing method
for variational regularization. After solving the standard variational problem,
the key idea is to add a consecutive debiasing step minimizing the data
fidelity on an appropriate set, the so-called model manifold. The latter is
defined by Bregman distances or infimal convolutions thereof, using the
(uniquely defined) subgradient appearing in the optimality condition of the
variational method. For particular settings, such as anisotropic $\ell^1$ and
TV-type regularization, previously used debiasing techniques are shown to be
special cases. The proposed approach is however easily applicable to a wider
range of regularizations. The two-step debiasing is shown to be well-defined
and to optimally reduce bias in a certain setting.
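For intuition, consider the $\ell^1$ special case, where the second step reduces to a familiar recipe: solve the LASSO problem, then re-minimize the data fidelity on the support of the first solution. A minimal NumPy sketch, with ista as a placeholder solver for step one (the sign constraint implied by the Bregman construction is omitted for brevity):

```python
# Minimal sketch of two-step debiasing in the l1 special case.
import numpy as np

def debiased_l1(A, y, lam, ista):
    x1 = ista(A, y, lam)          # step 1: standard variational (LASSO) solution
    support = np.abs(x1) > 1e-10  # model manifold: active coefficients of x1
    x2 = np.zeros_like(x1)
    # step 2: minimize the data fidelity restricted to that support
    x2[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x2
```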
In addition to visual and PSNR-based evaluations, different notions of bias
and variance decompositions are investigated in numerical studies. The
improvements offered by the proposed scheme are demonstrated and its
performance is shown to be comparable to optimal results obtained with Bregman
iterations.