Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints
The Tikhonov regularization of linear ill-posed problems with an $\ell^1$ penalty is considered. We recall results for linear convergence rates and
results on exact recovery of the support. Moreover, we derive conditions for
exact support recovery which are especially applicable in the case of ill-posed
problems, where other conditions, e.g. those based on the so-called coherence or the restricted isometry property, are usually not applicable. The obtained results
also show that the regularized solutions converge not only in the $\ell^1$-norm but also in the space of finitely supported sequences (when considered as the strict inductive limit of the spaces $\mathbb{R}^n$ as $n$ tends to infinity).
Additionally, the relations between different conditions for exact support
recovery and linear convergence rates are investigated.
The applicability of the obtained results is illustrated with an imaging example from digital holography: one may check a priori whether the experimental setup guarantees exact recovery with Tikhonov regularization with sparsity constraints.
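A minimal numerical sketch of the kind of problem treated in this abstract: Tikhonov regularization with an $\ell^1$ sparsity penalty, here solved by iterative soft thresholding (ISTA). The operator, penalty weight, and support threshold below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, alpha, n_iter=500):
    """Iterative soft thresholding for
        min_x 0.5 * ||A x - y||^2 + alpha * ||x||_1 .
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, alpha / L)
    return x

# Tiny synthetic instance: a 2-sparse vector observed through a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -2.0
y = A @ x_true
x_hat = ista(A, y, alpha=0.1)
recovered_support = set(np.flatnonzero(np.abs(x_hat) > 0.1))
```

In this well-conditioned noiseless toy case, the minimizer's support matches the true support, which is the "exact recovery" phenomenon the abstract studies in the harder ill-posed setting.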
Compressive Sensing of Signals Generated in Plastic Scintillators in a Novel J-PET Instrument
The J-PET scanner, which allows for single bed imaging of the whole human
body, is currently under development at the Jagiellonian University. The discussed detector offers improved Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics that allow sampling, in the voltage domain, of signals with durations of a few nanoseconds.
In this paper we show that recovery of the whole signal, based on only a few
samples, is possible. In order to do that, we incorporate the training signals
into the Tikhonov regularization framework and we perform the Principal
Component Analysis decomposition, which is well known for its compaction
properties. The method yields a simple closed form analytical solution that
does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, in particular its covariance matrix, may be easily derived. This is the key to introducing and proving a formula for calculating the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.
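The recovery scheme described (a PCA basis learned from training signals combined with a closed-form Tikhonov solution) can be sketched as follows. The Gaussian-pulse signal model, sample positions, component count, and regularization weight are illustrative assumptions, not the J-PET setup.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)

def pulse(amp, width):
    # Illustrative signal model: a Gaussian pulse (the real scintillator
    # waveforms are of course different).
    return amp * np.exp(-((t - 0.5) ** 2) / (2 * width ** 2))

# "Training signals" spanning a two-parameter family.
train = np.stack([pulse(rng.uniform(0.5, 2.0), rng.uniform(0.1, 0.3))
                  for _ in range(200)])
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:5].T                                  # PCA basis, 5 components

x_true = pulse(1.3, 0.2)                      # "unknown" signal to recover
idx = np.arange(5, 100, 10)                   # only 10 acquired samples
y = x_true[idx]

# Closed-form Tikhonov solution for the PCA coefficients:
#   c = (Vs^T Vs + lam I)^{-1} Vs^T (y - mean[idx])
Vs = V[idx]
lam = 1e-3
c = np.linalg.solve(Vs.T @ Vs + lam * np.eye(5), Vs.T @ (y - mean[idx]))
x_hat = mean + V @ c                          # full recovered waveform
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because the solution is a single linear solve, no iterative processing is needed, mirroring the closed-form character the abstract emphasizes.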
Jump-sparse and sparse recovery using Potts functionals
We recover jump-sparse and sparse signals from blurred incomplete data
corrupted by (possibly non-Gaussian) noise using inverse Potts energy
functionals. We obtain analytical results (existence of minimizers, complexity)
on inverse Potts functionals and provide relations to sparsity problems. We
then propose a new optimization method for these functionals which is based on
dynamic programming and the alternating direction method of multipliers (ADMM).
A series of experiments shows that the proposed method yields very satisfactory
jump-sparse and sparse reconstructions, respectively. We highlight the
capability of the method by comparing it with classical and recent approaches
such as TV minimization (jump-sparse signals), orthogonal matching pursuit,
iterative hard thresholding, and iteratively reweighted $\ell^1$ minimization (sparse signals).
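The dynamic-programming core of such Potts approaches can be illustrated in isolation. Below is a sketch, under simplifying assumptions (direct, complete, noisy data rather than blurred incomplete data, so no ADMM splitting is needed), of the classic exact DP for the 1D Potts problem:

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact minimizer of the 1D Potts functional
        sum_i (u_i - f_i)^2 + gamma * (number of jumps of u)
    via the classic O(n^2) dynamic program over segment boundaries.
    """
    n = len(f)
    s1 = np.concatenate([[0.0], np.cumsum(f)])        # prefix sums of f
    s2 = np.concatenate([[0.0], np.cumsum(f ** 2)])   # prefix sums of f^2

    def seg_cost(l, r):
        # Squared error of the best constant on f[l..r] (inclusive).
        m = r - l + 1
        s = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - s * s / m

    B = np.full(n + 1, np.inf)      # B[r] = optimal cost of the prefix f[:r]
    B[0] = -gamma                   # cancels the +gamma of the first segment
    start = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            c = B[l - 1] + gamma + seg_cost(l - 1, r - 1)
            if c < B[r]:
                B[r], start[r] = c, l - 1
    # Backtrack the boundaries and fill each segment with its mean.
    u = np.empty(n)
    r = n
    while r > 0:
        l = start[r]
        u[l:r] = f[l:r].mean()
        r = l
    return u

f = np.array([0.1, -0.1, 0.0, 2.1, 1.9, 2.0, 2.0, -1.0, -1.1, -0.9])
u = potts_1d(f, gamma=0.5)          # piecewise-constant (jump-sparse) fit
```

On this toy signal the DP recovers three constant segments; in the paper's setting this DP serves as a subroutine inside an ADMM iteration that also handles the blur operator.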
Exploiting Structural Complexity for Robust and Rapid Hyperspectral Imaging
This paper presents several strategies for spectral de-noising of
hyperspectral images and hypercube reconstruction from a limited number of
tomographic measurements. In particular, we show that the non-noisy spectral data, when stacked across the spectral dimension, exhibit low rank. Under the same representation, the spectral noise exhibits a banded structure. Motivated by this, we show that the de-noised spectral data, the unknown spectral noise, and the respective bands can be simultaneously estimated through a joint low-rank and sparse minimization, without prior knowledge of the noisy bands. This result is novel for
hyperspectral imaging applications. In addition, we show that imaging for the
Computed Tomography Imaging Systems (CTIS) can be improved under limited angle
tomography by using low-rank penalization. For both of these cases we exploit recent results in the theory of low-rank matrix completion using nuclear norm minimization.
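The nuclear-norm machinery such methods build on has a simple computational core: singular value thresholding, the proximal operator of the nuclear norm. Below is a sketch on a synthetic rank-3 matrix; the sizes, noise level, and threshold are illustrative assumptions, not the hyperspectral setup.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: shrink each singular value of Y by tau.

    This is the proximal operator of tau * ||.||_* (the nuclear norm), the
    basic step of nuclear-norm minimization algorithms.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 truth
Y = L + 0.05 * rng.standard_normal((50, 40))                     # noisy data
X = svt(Y, tau=1.0)                    # threshold well above the noise level
rank_X = np.linalg.matrix_rank(X, tol=1e-8)
```

Because the noise singular values sit far below the threshold while the three signal singular values sit far above it, the thresholded estimate is again exactly rank 3.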
The Residual Method for Regularizing Ill-Posed Problems
Although the \emph{residual method}, or \emph{constrained regularization}, is
frequently used in applications, a detailed study of its properties is still
missing. This sharply contrasts with the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in recent years. The present paper intends to
bridge the gap between the existing theories as far as possible. We develop a
stability and convergence theory for the residual method in general topological
spaces. In addition, we prove convergence rates in terms of (generalized)
Bregman distances, which can also be applied to non-convex regularization
functionals. We provide three examples that show the applicability of our
theory. The first example is the regularized solution of linear operator
equations on $L^p$-spaces, where we show that the results of Tikhonov
regularization generalize unchanged to the residual method. As a second
example, we consider the problem of density estimation from a finite number of
sampling points, using the Wasserstein distance as a fidelity term and an
entropy measure as regularization term. It is shown that the densities obtained
in this way depend continuously on the location of the sampled points and that
the underlying density can be recovered as the number of sampling points tends
to infinity. Finally, we apply our theory to compressed sensing. Here, we show
the well-posedness of the method and derive convergence rates both for convex
and non-convex regularization under rather weak conditions.
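As an illustration of the constrained viewpoint, here is a minimal sketch assuming a quadratic regularization functional, for which the residual-method solution can be computed by tuning the Tikhonov parameter until the residual meets the constraint (Morozov's discrepancy principle). The operator and noise level below are illustrative, not from the paper.

```python
import numpy as np

def tikhonov(A, f, alpha):
    """Closed-form Tikhonov solution u = (A^T A + alpha I)^{-1} A^T f."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ f)

def residual_method(A, f, delta, lo=1e-12, hi=1e2, iters=60):
    """Minimize ||u||^2 subject to ||A u - f|| <= delta, via bisection in
    log(alpha): the residual ||A u_alpha - f|| increases with alpha, so we
    search for the alpha at which it hits delta."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ tikhonov(A, f, mid) - f) > delta:
            hi = mid
        else:
            lo = mid
    return tikhonov(A, f, np.sqrt(lo * hi))

rng = np.random.default_rng(3)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, -4, n)) @ Q.T      # ill-conditioned operator
u_true = rng.standard_normal(n)
delta = 1e-2
e = rng.standard_normal(n)
e *= 0.5 * delta / np.linalg.norm(e)              # noise with ||e|| = delta/2
f = A @ u_true + e
u_hat = residual_method(A, f, delta)
res = np.linalg.norm(A @ u_hat - f)               # sits at the constraint
```

The defining property of the residual method is visible here: the returned solution saturates the data-fidelity constraint rather than balancing a penalty weight chosen in advance.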