An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
We consider linear inverse problems where the solution is assumed to have a
sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that
replacing the usual quadratic regularizing penalties by weighted ℓ^p-penalties
on the coefficients of such expansions, with 1 ≤ p ≤ 2, still
regularizes the problem. If p < 2, regularized solutions of such ℓ^p-penalized
problems will have sparser expansions, with respect to the basis under
consideration. To compute the corresponding regularized solutions we propose an
iterative algorithm that amounts to a Landweber iteration with thresholding (or
nonlinear shrinkage) applied at each iteration step. We prove that this
algorithm converges in norm. We also review some potential applications of this
method.

Comment: 30 pages, 3 figures; this is version 2. Changes with respect to v1: small correction in the proof (but not the statement) of Lemma 3.15; the description of Besov spaces in the introduction and Appendix A clarified (and corrected); smaller point size (making 30 instead of 38 pages).
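The proposed iteration, in the p = 1 case where the nonlinear shrinkage reduces to componentwise soft-thresholding, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation; the operator K, data y, and penalty weight lam are placeholder names:

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft-thresholding (nonlinear shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(K, y, lam, n_iter=500):
    """Landweber iteration with soft-thresholding at each step,
    approximately minimizing 0.5*||K x - y||^2 + lam*||x||_1
    (the p = 1 case of the weighted l^p penalty)."""
    # step size 1 / ||K||^2 keeps the Landweber step nonexpansive
    step = 1.0 / np.linalg.norm(K, 2) ** 2
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        # Landweber update followed by shrinkage of the coefficients
        x = soft_threshold(x + step * K.T @ (y - K @ x), step * lam)
    return x
```

For sparse ground truths and moderate noise, the iterates converge to a sparse minimizer; the soft-thresholding step is what promotes sparsity relative to plain Landweber iteration.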
Convex regularization in statistical inverse learning problems
We consider a statistical inverse learning problem, where the task is to estimate a function f based on noisy point evaluations of Af, where A is a linear operator. The function Af is evaluated at i.i.d. random design points, generated by an unknown general probability distribution. We consider Tikhonov regularization with general convex and homogeneous penalty functionals and derive concentration rates of the regularized solution to the ground truth, measured in the symmetric Bregman distance induced by the penalty functional. We derive concrete rates for Besov norm penalties and numerically demonstrate the correspondence with the observed rates in the context of X-ray tomography.
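For orientation only, the special case of a quadratic penalty (the simplest convex, homogeneous choice; the paper treats far more general penalties such as Besov norms) admits a closed-form Tikhonov estimator. The names A, y, and alpha below are assumptions for this sketch:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized least squares with the quadratic penalty
    alpha * ||f||^2, i.e. solving (A^T A + alpha * I) f = A^T y.
    Only the simplest instance of the convex-penalty setting."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
```

As alpha shrinks toward zero this approaches the least-squares solution; larger alpha trades data fidelity for stability, which is the trade-off the concentration rates quantify.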
Error analysis for filtered back projection reconstructions in Besov spaces
Filtered back projection (FBP) methods are the most widely used
reconstruction algorithms in computerized tomography (CT). The ill-posedness of
this inverse problem allows only an approximate reconstruction for given noisy
data. Studying the resulting reconstruction error has been a most active field
of research in the 1990s and has recently been revived in terms of optimal
filter design and estimating the FBP approximation errors in general Sobolev
spaces.
However, the choice of Sobolev spaces is suboptimal for characterizing typical CT reconstructions. A widely used model consists of sums of characteristic functions, which are better modelled in terms of Besov spaces $\mathrm{B}^{\alpha,p}_q(\mathbb{R}^2)$. In particular, such Besov spaces with a suitable choice of the parameters $\alpha$, $p$ and $q$ are a preferred model in image analysis for describing natural images.
In the case of noisy Radon data, the total FBP reconstruction error splits into an approximation error and a data error, where the bandwidth $L$ of the low-pass filter serves as regularization parameter. In this paper, we study the approximation error of FBP reconstructions for target functions $f \in \mathrm{B}^{\alpha,p}_q(\mathbb{R}^2)$ with positive smoothness $\alpha$. We prove that the $\mathrm{L}^p$-norm of the inherent FBP approximation error can be bounded above by
\begin{equation*} \|f - f_L\|_{\mathrm{L}^p(\mathbb{R}^2)} \leq c_{\alpha,q,W}
\, L^{-\alpha} \, |f|_{\mathrm{B}^{\alpha,p}_q(\mathbb{R}^2)} \end{equation*}
under suitable assumptions on the window function $W$ of the utilized low-pass filter. This then extends by classical methods to estimates for the total reconstruction error.

Comment: 32 pages, 8 figures
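The role of the window function $W$ and the bandwidth $L$ can be illustrated by constructing the band-limited ramp filter used in FBP. The two windows below (Ram-Lak and Shepp-Logan) are standard textbook choices given here only as examples, not necessarily those analyzed in the paper:

```python
import numpy as np

def windowed_ramp_filter(freqs, L, window):
    """Band-limited ramp filter |xi| * W(xi / L) applied to the Radon
    data in Fourier space; the bandwidth L acts as the regularization
    parameter: small L smooths more, large L resolves more detail."""
    xi = np.asarray(freqs, dtype=float)
    return np.abs(xi) * window(xi / L)

# Two classical window functions (standard definitions, illustrative only).
ram_lak = lambda s: (np.abs(s) <= 1).astype(float)          # ideal low-pass cutoff
shepp_logan = lambda s: np.sinc(s / 2) * (np.abs(s) <= 1)   # smoother roll-off
```

Frequencies beyond the bandwidth $L$ are suppressed entirely, which is the source of the $L^{-\alpha}$ approximation-error decay in the bound above.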
Wavelet regression estimation in nonparametric mixed effect models
Abstract: We show that a nonparametric estimator of a regression function, obtained as the solution of a specific regularization problem, is the best linear unbiased predictor in a certain nonparametric mixed effect model. Since this estimator is intractable from a numerical point of view, we propose a tight approximation of it that is easy and fast to implement. This second estimator achieves the usual optimal rate of convergence of the mean integrated squared error over a Sobolev class, both for equispaced and nonequispaced designs. Numerical experiments are presented on both simulated and real ERP data.
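A minimal numerical stand-in for a wavelet regression estimator on equispaced data is hard-thresholding of Haar detail coefficients. This sketch is not the paper's BLUP construction or its approximation; the transform, threshold, and function names are illustrative assumptions:

```python
import numpy as np

def haar_step(c):
    """One level of the orthonormal Haar transform: averages and details."""
    a = (c[0::2] + c[1::2]) / np.sqrt(2.0)
    d = (c[0::2] - c[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inverse_step(a, d):
    """Inverse of haar_step, interleaving the reconstructed samples."""
    c = np.empty(2 * a.size)
    c[0::2] = (a + d) / np.sqrt(2.0)
    c[1::2] = (a - d) / np.sqrt(2.0)
    return c

def wavelet_regression(y, threshold, levels=3):
    """Denoise equispaced observations (length divisible by 2**levels)
    by hard-thresholding Haar detail coefficients level by level."""
    a = np.asarray(y, dtype=float)
    details = []
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(np.where(np.abs(d) > threshold, d, 0.0))
    for d in reversed(details):
        a = haar_inverse_step(a, d)
    return a
```

Small detail coefficients, which mostly carry noise, are zeroed out, while large coefficients carrying jumps and edges in the regression function are kept.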
Learning Theory and Approximation
Learning theory studies data structures from samples and aims at understanding the unknown function relations behind them. This leads to interesting theoretical problems which can often be attacked with methods from approximation theory. This workshop, the second one of this type at the MFO, has concentrated on the following recent topics: learning of manifolds and the geometry of data; sparsity and dimension reduction; error analysis and algorithmic aspects, including kernel-based methods for regression and classification; application of multiscale aspects and of refinement algorithms to learning.