Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints
The Tikhonov regularization of linear ill-posed problems with an ℓ¹
penalty is considered. We recall results for linear convergence rates and
results on exact recovery of the support. Moreover, we derive conditions for
exact support recovery which are especially applicable in the case of ill-posed
problems, where other conditions, e.g. those based on the so-called coherence or the
restricted isometry property, are usually not applicable. The obtained results
also show that the regularized solutions converge not only in the
ℓ¹-norm but also in the vector space of finitely supported sequences (when considered as the
strict inductive limit of the spaces ℝⁿ as n tends to infinity).
Additionally, the relations between different conditions for exact support
recovery and linear convergence rates are investigated.
With an imaging example from digital holography the applicability of the
obtained results is illustrated, i.e. one may check a priori whether the
experimental setup guarantees exact recovery with Tikhonov regularization with
sparsity constraints.
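The functional discussed above couples a least-squares data fit with an ℓ¹ penalty. A minimal sketch of how one might minimize it, using iterative soft-thresholding (ISTA, one standard solver for this problem — the operator, penalty weight, and iteration count below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_tikhonov(A, y, alpha, n_iter=1000):
    # minimize 0.5 * ||A x - y||_2^2 + alpha * ||x||_1 by iterative
    # soft-thresholding (ISTA) with step size 1 / ||A||_2^2
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, alpha / L)
    return x

# illustrative setup: a 3-sparse vector seen through a random operator
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = l1_tikhonov(A, y, alpha=0.01)
print(np.flatnonzero(np.abs(x_hat) > 0.1))
```

For a well-conditioned random operator like this one, the support of the minimizer matches the true support; for genuinely ill-posed operators the paper's a priori conditions are what decide whether that still holds.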
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ²-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solve the
corresponding large-scale regularized optimization problem.
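The forward-backward scheme mentioned in (iii) alternates a gradient step on the smooth data-fidelity term with the proximal map of the regularizer. A hedged sketch, instantiated here with a group-sparsity (block ℓ²) prox as one of the low-complexity priors the chapter names — the block size, penalty weight, and test problem are illustrative assumptions:

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=500):
    # forward-backward splitting for min_x f(x) + g(x):
    # explicit gradient step on smooth f, implicit prox step on g
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

def prox_group_l2(v, t, block=5):
    # prox of t * sum over blocks ||v_b||_2: shrink each block norm by t,
    # zeroing blocks whose norm falls below t (group sparsity)
    out = v.copy()
    for i in range(0, len(v), block):
        nrm = np.linalg.norm(v[i:i + block])
        out[i:i + block] *= max(0.0, 1.0 - t / nrm) if nrm > 0 else 0.0
    return out

# illustrative problem: one active block of five coefficients
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 60)) / np.sqrt(40)
x_true = np.zeros(60)
x_true[10:15] = 1.0
y = A @ x_true
alpha = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = forward_backward(lambda x: A.T @ (A @ x - y),
                         lambda v, t: prox_group_l2(v, alpha * t),
                         np.zeros(60), step)
```

Swapping `prox_group_l2` for soft-thresholding, a TV prox, or singular-value thresholding yields the sparsity, piecewise-regularity, and low-rank cases, respectively.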
Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion
The orthogonal matching pursuit (OMP) is an algorithm to solve sparse
approximation problems. Sufficient conditions for exact recovery are known with
and without noise. In this paper we investigate the applicability of OMP
to the solution of ill-posed inverse problems in general, and in particular to
two deconvolution examples from mass spectrometry and digital holography.
In sparse approximation problems one often has to deal with the problem of
redundancy of a dictionary, i.e. the atoms are not linearly independent.
However, one expects them to be approximately orthogonal, and this is
quantified by the so-called incoherence. This idea cannot be transferred to
ill-posed inverse problems since here the atoms are typically far from
orthogonal: the ill-posedness of the operator causes the correlation of
two distinct atoms to become large, i.e. two atoms can look much alike.
Therefore one needs conditions which take the structure of the problem into
account and work without the concept of coherence. In this paper we develop
results for exact recovery of the support of noisy signals. In the two examples
in mass spectrometry and digital holography we show that our results lead to
practically relevant estimates such that one may check a priori if the
experimental setup guarantees exact deconvolution with OMP. Especially in the
example from digital holography our analysis may be regarded as a first step
toward calculating the resolution power of droplet holography.
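A compact reference implementation of OMP, together with the mutual-coherence quantity that the abstract contrasts it with, might look as follows. The random dictionary and sparsity level are illustrative assumptions; no claim is made about the paper's specific deconvolution operators:

```python
import numpy as np

def omp(A, y, k):
    # orthogonal matching pursuit: greedily add the column most
    # correlated with the residual, then re-fit by least squares
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

def coherence(A):
    # mutual coherence: largest normalized inner product between
    # two distinct columns (atoms) of A
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 80)) / np.sqrt(50)
x_true = np.zeros(80)
x_true[[5, 20, 41]] = 1.0
y = A @ x_true
print(omp(A, y, 3), coherence(A))
```

For a random dictionary like this the coherence is modest and OMP recovers the support; for an ill-posed operator the coherence approaches one, which is exactly why the paper replaces coherence-based conditions with structure-aware ones.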
Convergence rates of general regularization methods for statistical inverse problems and applications
In the past, convergence analysis for linear statistical inverse problems has mainly focused on spectral cut-off and Tikhonov-type estimators. Spectral cut-off estimators achieve minimax rates for a broad range of smoothness classes and operators, but their practical usefulness is limited by the fact that they require a complete spectral decomposition of the operator. Tikhonov estimators are simpler to compute, but still involve the inversion of an operator and achieve minimax rates only in restricted smoothness classes. In this paper we introduce a unifying technique to study the mean square error of a large class of regularization methods (spectral methods), including the aforementioned estimators as well as many iterative methods, such as ν-methods and the Landweber iteration. The latter estimators converge at the same rate as spectral cut-off, but only require matrix-vector products. Our results are applied to various problems; in particular we obtain precise convergence rates for satellite gradiometry, L2-boosting, and errors-in-variables problems. -- Keywords: statistical inverse problems, iterative regularization methods, Tikhonov regularization, nonparametric regression, minimax convergence rates, satellite gradiometry, Hilbert scales, boosting, errors in variables
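The Landweber iteration named in the abstract is gradient descent on the least-squares functional, with the iteration count playing the role of the regularization parameter (early stopping). A minimal sketch — the test operator and step size below are illustrative assumptions:

```python
import numpy as np

def landweber(A, y, omega, n_iter):
    # Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k);
    # converges for 0 < omega < 2 / ||A||_2^2, and stopping early
    # acts as regularization: fewer iterations, stronger smoothing
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * (A.T @ (y - A @ x))
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 40)) / np.sqrt(60)
x_true = rng.standard_normal(40)
y = A @ x_true
omega = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = landweber(A, y, omega, 2000)
```

Each step needs only the matrix-vector products `A @ x` and `A.T @ r`, which is precisely the computational advantage over spectral cut-off claimed in the abstract.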
RBF-Based Partition of Unity Methods for Elliptic PDEs: Adaptivity and Stability Issues Via Variably Scaled Kernels
We investigate adaptivity issues for the approximation of Poisson equations via radial basis
function-based partition of unity collocation. The adaptive residual subsampling approach
is performed with quasi-uniform node sequences leading to a flexible tool which however
might suffer from numerical instability due to ill-conditioning of the collocation matrices.
We thus develop a hybrid method which makes use of the so-called variably scaled kernels.
The proposed algorithm numerically ensures the convergence of the adaptive procedure
- …