Continuous-Domain Solutions of Linear Inverse Problems with Tikhonov vs. Generalized TV Regularization
We consider linear inverse problems that are formulated in the continuous
domain. The object of recovery is a function that is assumed to minimize a
convex objective functional. The solutions are constrained by imposing a
continuous-domain regularization. We derive the parametric form of the solution
(representer theorems) for Tikhonov (quadratic) and generalized total-variation
(gTV) regularizations. We show that, in both cases, the solutions are splines
that are intimately related to the regularization operator. In the Tikhonov
case, the solution is smooth and constrained to live in a fixed subspace that
depends on the measurement operator. By contrast, the gTV regularization
results in a sparse solution composed of a few dictionary elements, whose
number is upper-bounded by the number of measurements and independent of the
measurement operator. Our findings for the gTV regularization resonate with
L1-norm minimization, which is its discrete counterpart and also produces
sparse solutions. Finally, we compute the solutions experimentally for some
measurement models in one dimension. We discuss the special case in which the
gTV regularization admits multiple solutions and devise an algorithm to find
an extreme point of the solution set that is guaranteed to be sparse.
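The contrast between the dense Tikhonov solution and the sparse L1 solution can be illustrated in a small discrete analogue (a generic sketch, not the paper's continuous-domain setting; the problem sizes, support, and regularization weight below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined discrete problem: 10 measurements of a 3-sparse signal in R^30.
n, m = 30, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 11, 27]] = [2.0, -1.5, 1.0]
y = A @ x_true

lam = 0.05

# Tikhonov (quadratic) regularization: closed form, generically dense.
x_l2 = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# L1 regularization (discrete counterpart of gTV): ISTA, converges to a sparse point.
def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2
x_l1 = np.zeros(n)
for _ in range(5000):
    x_l1 = soft(x_l1 - step * A.T @ (A @ x_l1 - y), step * lam)

print("nonzeros, Tikhonov:", int(np.sum(np.abs(x_l2) > 1e-6)))
print("nonzeros, L1:", int(np.sum(np.abs(x_l1) > 1e-6)))
```

Consistent with the abstract's claim, the number of active elements in the L1 solution stays bounded by the number of measurements, while the quadratic solution is dense.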
Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography
Total variation (TV) is a powerful regularization method that has been widely
applied in different imaging applications, but is difficult to apply to diffuse
optical tomography (DOT) image reconstruction (inverse problem) due to complex
and unstructured geometries, non-linearity of the data fitting and
regularization terms, and non-differentiability of the regularization term. We
develop several approaches to overcome these difficulties by: i) defining
discrete differential operators for unstructured geometries using both finite
element and graph representations; ii) developing an optimization algorithm
based on the alternating direction method of multipliers (ADMM) for the
non-differentiable and non-linear minimization problem; iii) investigating
isotropic and anisotropic variants of TV regularization, and comparing their
finite element- and graph-based implementations. These approaches are evaluated
on experiments on simulated data and real data acquired from a tissue phantom.
Our results show that both FEM- and graph-based TV regularization are able to
accurately reconstruct both sparse and non-sparse distributions without the
over-smoothing effect of Tikhonov regularization or the over-sparsifying
effect of L1 regularization. The graph representation was found to
outperform the FEM method for low-resolution meshes, and the FEM method was
found to be more accurate for high-resolution meshes. Comment: 24 pages, 11
figures. Revised version includes revised figures and improved clarity.
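A minimal sketch of a graph-based discrete differential operator and the two TV variants (the toy graph, the nodal values, and the node-wise grouping convention used for the isotropic term are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy "unstructured mesh": 4 nodes and an edge list (a hypothetical triangulation).
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n_nodes = 4

# Graph incidence matrix as a discrete differential operator: one row per edge.
D = np.zeros((len(edges), n_nodes))
for k, (i, j) in enumerate(edges):
    D[k, i], D[k, j] = 1.0, -1.0

x = np.array([0.0, 0.0, 1.0, 1.0])   # piecewise-constant nodal values
g = D @ x                            # edge-wise differences

# Anisotropic TV: l1 norm of all edge differences.
tv_aniso = np.sum(np.abs(g))

# Isotropic variant: group the differences incident to each node, take l2 norms.
tv_iso = sum(
    np.linalg.norm([g[k] for k, e in enumerate(edges) if i in e])
    for i in range(n_nodes)
)
print(tv_aniso, tv_iso)
```

The anisotropic term sums absolute differences edge by edge, while the isotropic term couples the differences around each node through a Euclidean norm, which is what makes it rotation-insensitive on regular grids.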
Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems
This paper deals with the resolution of inverse problems in a periodic
setting or, in other terms, the reconstruction of periodic continuous-domain
signals from their noisy measurements. We focus on two reconstruction
paradigms: variational and statistical. In the variational approach, the
reconstructed signal is solution to an optimization problem that establishes a
tradeoff between fidelity to the data and smoothness conditions via a quadratic
regularization associated to a linear operator. In the statistical approach,
the signal is modeled as a stationary random process defined from a Gaussian
white noise and a whitening operator; one then looks for the optimal estimator
in the mean-square sense. We give a generic form of the reconstructed signals
for both approaches, allowing for a rigorous comparison of the two. We fully
characterize the conditions under which the two formulations yield the same
solution, which is a periodic spline in the case of sampling measurements. We
also show through simulations that this equivalence between the two approaches
remains valid for a broad class of problems. This extends the practical range
of applicability of the variational method.
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. The problem of optical blurring is a common
disadvantage to many imaging applications that suffer from optical
imperfections. Despite numerous deconvolution methods that blindly estimate
blurring in either inclusive or exclusive forms, they are practically
challenging due to high computational cost and low image reconstruction
quality. Both conditions of high accuracy and high speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for two
Gaussian and Laplacian models that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods. Comment: 15
pages, for publication in IEEE Transactions on Image Processing.
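The idea of a deconvolution kernel built from even-derivative FIR filters can be sketched in 1-D via a first-order Taylor inverse of a Gaussian frequency response (the grid, filter length, and blur width are illustrative assumptions; this is not the paper's 2-D design):

```python
import numpy as np

sigma = 1.0                              # assumed Gaussian blur width
x = np.linspace(-4, 4, 81)
dt = x[1] - x[0]
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum()                             # normalized Gaussian PSF

# Even-derivative FIR building blocks.
d0 = np.array([0.0, 1.0, 0.0])           # identity (0th derivative)
d2 = np.array([1.0, -2.0, 1.0]) / dt**2  # second difference (2nd derivative)

# Taylor expansion of the inverse Gaussian response: 1/G(w) ~ 1 + (sigma^2/2) w^2,
# realized as a linear combination of the even-derivative kernels.
k = d0 - (sigma**2 / 2) * d2

signal = np.where(x < 0, 0.0, 1.0)       # ideal edge
blurred = np.convolve(signal, g, mode='same')
deblurred = np.convolve(blurred, k, mode='same')

print("max edge slope, blurred:  ", np.max(np.abs(np.diff(blurred))))
print("max edge slope, deblurred:", np.max(np.abs(np.diff(deblurred))))
```

The combined kernel sums to one (preserving the DC level) while amplifying high frequencies, so the blurred edge regains slope, which is the frequency fall-off boost the abstract describes.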
Direct and Inverse Computational Methods for Electromagnetic Scattering in Biological Diagnostics
Scattering theory has played a major role in twentieth-century mathematical
physics. We investigate mathematical models and algorithms for the direct and
inverse electromagnetic scattering formulations arising from biological
tissues. The algorithms are used for a model-based illustration technique
within the microwave range. A number of methods are given to solve the inverse
electromagnetic scattering problem, in which the nonlinear and ill-posed
nature of the problem is acknowledged. Comment: 61 pages, 5 figures.
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of L2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
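As one instance of the forward-backward scheme applied to a low-complexity prior, here is a minimal matrix-completion sketch with a nuclear-norm (low-rank) regularizer; the sizes, sampling density, and regularization weight are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank-2 target matrix, observed on a random subset of entries.
n1, n2, r = 12, 10, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
mask = rng.random((n1, n2)) < 0.6

lam, step = 0.5, 1.0  # step <= 1/L, where L = 1 is the gradient Lipschitz constant
X = np.zeros((n1, n2))
for _ in range(300):
    grad = mask * (X - M)                  # gradient of the smooth data-fit term
    Z = X - step * grad                    # forward (explicit gradient) step
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    # Backward (implicit) step: prox of the nuclear norm = singular-value soft-thresholding.
    X = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt

svals = np.linalg.svd(X, compute_uv=False)
print("numerical rank of iterate:", int(np.sum(svals > 1e-6)))
```

Replacing the SVD soft-thresholding with component-wise soft-thresholding recovers the sparse (LASSO) case, illustrating how the partial-smoothness viewpoint treats these regularizers uniformly.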