
    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

    We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted $\ell^p$-penalties on the coefficients of such expansions, with $1 \leq p \leq 2$, still regularizes the problem. If $p < 2$, regularized solutions of such $\ell^p$-penalized problems will have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions, we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
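
    The iteration described above is what is now widely known as iterative soft thresholding (ISTA). A minimal numpy sketch for the case $p = 1$, assuming the operator is given as a matrix A with norm below one and writing tau for the penalty weight (both names are illustrative):

        import numpy as np

        def soft_threshold(x, t):
            # Componentwise soft thresholding: sign(x) * max(|x| - t, 0).
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, y, tau, n_iter=500):
            # Landweber iteration with nonlinear shrinkage at each step:
            #     x_{k+1} = S_{tau/2}( x_k + A^T (y - A x_k) )
            # For ||A|| < 1 this converges in norm to a minimizer of
            #     ||A x - y||^2 + tau * ||x||_1   (the p = 1 penalty).
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x + A.T @ (y - A @ x), tau / 2)
            return x

    For $1 < p < 2$ the soft-thresholding step is replaced by the shrinkage function associated with the $\ell^p$ penalty, which is defined implicitly and in general has no equally simple closed form.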

    Convex regularization in statistical inverse learning problems

    We consider a statistical inverse learning problem, where the task is to estimate a function $f$ based on noisy point evaluations of $Af$, where $A$ is a linear operator. The function $Af$ is evaluated at i.i.d. random design points $u_n$, $n=1,\dots,N$, generated by an unknown general probability distribution. We consider Tikhonov regularization with general convex and $p$-homogeneous penalty functionals and derive concentration rates of the regularized solution to the ground truth, measured in the symmetric Bregman distance induced by the penalty functional. We derive concrete rates for Besov norm penalties and numerically demonstrate the correspondence with the observed rates in the context of X-ray tomography.
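
    For reference, the symmetric Bregman distance induced by a convex penalty functional $R$ is (this is the standard definition, not spelled out in the abstract)
    \begin{equation*} D^{\mathrm{sym}}_{R}(f_1, f_2) = \langle p_1 - p_2,\, f_1 - f_2 \rangle, \qquad p_i \in \partial R(f_i), \end{equation*}
    where $\partial R$ denotes the subdifferential of $R$; for the quadratic penalty $R(f) = \tfrac{1}{2}\|f\|^2$ it reduces to $\|f_1 - f_2\|^2$, so rates in the Bregman distance generalize classical norm rates.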

    Error analysis for filtered back projection reconstructions in Besov spaces

    Filtered back projection (FBP) methods are the most widely used reconstruction algorithms in computerized tomography (CT). The ill-posedness of this inverse problem allows only an approximate reconstruction for given noisy data. Studying the resulting reconstruction error was a most active field of research in the 1990s and has recently been revived in terms of optimal filter design and estimates of the FBP approximation error in general Sobolev spaces. However, the choice of Sobolev spaces is suboptimal for characterizing typical CT reconstructions. A widely used model consists of sums of characteristic functions, which are better captured by Besov spaces $\mathrm{B}^{\alpha,p}_q(\mathbb{R}^2)$. In particular, $\mathrm{B}^{\alpha,1}_1(\mathbb{R}^2)$ with $\alpha \approx 1$ is a preferred model in image analysis for describing natural images. In the case of noisy Radon data, the total FBP reconstruction error $\|f-f_L^\delta\| \le \|f-f_L\| + \|f_L - f_L^\delta\|$ splits into an approximation error and a data error, where $L$ serves as regularization parameter. In this paper, we study the approximation error of FBP reconstructions for target functions $f \in \mathrm{L}^1(\mathbb{R}^2) \cap \mathrm{B}^{\alpha,p}_q(\mathbb{R}^2)$ with positive $\alpha \not\in \mathbb{N}$ and $1 \leq p,q \leq \infty$. We prove that the $\mathrm{L}^p$-norm of the inherent FBP approximation error $f-f_L$ can be bounded above by \begin{equation*} \|f - f_L\|_{\mathrm{L}^p(\mathbb{R}^2)} \leq c_{\alpha,q,W} \, L^{-\alpha} \, |f|_{\mathrm{B}^{\alpha,p}_q(\mathbb{R}^2)} \end{equation*} under suitable assumptions on the window function $W$ of the utilized low-pass filter. This then extends by classical methods to estimates for the total reconstruction error.
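
    A compact numpy sketch of a band-limited FBP reconstruction, to make the role of the bandwidth $L$ as regularization parameter concrete (function names, the rectangular window, and all sampling conventions are illustrative assumptions, not taken from the paper):

        import numpy as np

        def filter_sinogram(sinogram, L):
            # sinogram: shape (n_angles, n_detectors). Apply the ramp filter
            # |omega|, cut off at bandwidth L (rectangular window W), to each
            # projection; L is in radians per detector sample.
            n = sinogram.shape[1]
            omega = 2.0 * np.pi * np.fft.fftfreq(n)
            ramp = np.abs(omega) * (np.abs(omega) <= L)
            return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

        def backproject(filtered, angles, size):
            # Discrete back projection: smear each filtered projection back
            # across the image along its angle. Angles are in radians and
            # assumed to sample [0, pi) uniformly; the overall scale factor
            # is convention-dependent.
            centre = (size - 1) / 2.0
            xs = np.arange(size) - centre
            X, Y = np.meshgrid(xs, xs)
            det = np.arange(filtered.shape[1]) - (filtered.shape[1] - 1) / 2.0
            recon = np.zeros((size, size))
            for proj, theta in zip(filtered, angles):
                t = X * np.cos(theta) + Y * np.sin(theta)  # detector coordinate of each pixel
                recon += np.interp(t, det, proj)
            return recon * np.pi / (2.0 * len(angles))

    Decreasing $L$ damps the amplified high-frequency noise in the data error $\|f_L - f_L^\delta\|$ at the price of a larger approximation error, which the bound above controls by $c_{\alpha,q,W} \, L^{-\alpha} \, |f|_{\mathrm{B}^{\alpha,p}_q(\mathbb{R}^2)}$.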

    Wavelet regression estimation in nonparametric mixed effect models

    We show that a nonparametric estimator of a regression function, obtained as the solution of a specific regularization problem, is the best linear unbiased predictor in some nonparametric mixed effect model. Since this estimator is intractable from a numerical point of view, we propose a tight approximation of it that is easy and fast to implement. This second estimator achieves the usual optimal rate of convergence of the mean integrated squared error over a Sobolev class, both for equispaced and non-equispaced designs. Numerical experiments are presented on both simulated and real ERP data.
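
    The abstract does not spell out the two estimators, but wavelet regression estimators of this kind are typically realized as shrinkage of empirical wavelet coefficients. A generic sketch with PyWavelets for equispaced design, using a level-dependent linear shrinkage weight as a placeholder (the weight, wavelet, and parameter names are illustrative, not the paper's choices):

        import numpy as np
        import pywt

        def wavelet_shrinkage_regression(y, wavelet="db4", level=4, lam=1.0):
            # Decompose noisy equispaced observations, shrink the detail
            # coefficients level by level, and reconstruct the estimate.
            coeffs = pywt.wavedec(y, wavelet, level=level)
            shrunk = [coeffs[0]]  # keep the coarse approximation coefficients
            for j, d in enumerate(coeffs[1:], start=1):
                w = 1.0 / (1.0 + lam * 4.0 ** j)  # placeholder weight: finer scales shrink more
                shrunk.append(w * d)
            return pywt.waverec(shrunk, wavelet)

        # Example: recover a smooth signal from noisy equispaced samples.
        t = np.linspace(0.0, 1.0, 256)
        y = np.sin(2.0 * np.pi * t) + 0.1 * np.random.randn(256)
        f_hat = wavelet_shrinkage_regression(y)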

    Learning Theory and Approximation

    Learning theory studies data structures from samples and aims at understanding the unknown functional relations behind them. This leads to interesting theoretical problems which can often be attacked with methods from Approximation Theory. This workshop - the second of this type at the MFO - concentrated on the following recent topics: learning of manifolds and the geometry of data; sparsity and dimension reduction; error analysis and algorithmic aspects, including kernel-based methods for regression and classification; application of multiscale aspects and of refinement algorithms to learning.