
    Optimal Convergence Rates for Tikhonov Regularization in Besov Scales

    In this paper we deal with linear inverse problems and convergence rates for Tikhonov regularization. We consider regularization in a scale of Banach spaces, namely the scale of Besov spaces. We show that regularization in Banach scales differs from regularization in Hilbert scales in the sense that stronger source conditions may lead to weaker convergence rates and vice versa. Moreover, we present optimal source conditions for regularization in Besov scales.
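
    As a worked sketch (the notation below is assumed here, not quoted from the paper), such results concern minimizers of a Tikhonov functional with a Besov-norm penalty,

        \min_{u}\; \|Ku - g^\delta\|_{L^2}^2 + \alpha\,\|u\|_{B^s_{p,p}}^p,
        \qquad \|g - g^\delta\|_{L^2} \le \delta,

    where $K$ is the linear forward operator, $g^\delta$ the noisy data with noise level $\delta$, and $\alpha > 0$ the regularization parameter. A source condition places the true solution in a smoother space of the scale, and the rate at which the minimizers converge as $\delta \to 0$ depends on the interplay between the penalty indices $(s, p)$ and the source condition; the indices shown are illustrative.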

    Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion

    The orthogonal matching pursuit (OMP) is an algorithm to solve sparse approximation problems. Sufficient conditions for exact recovery are known with and without noise. In this paper we investigate the applicability of the OMP for the solution of ill-posed inverse problems in general, and in particular for two deconvolution examples from mass spectrometry and digital holography. In sparse approximation problems one often has to deal with the redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal, and this is quantified by the so-called incoherence. This idea cannot be transferred to ill-posed inverse problems, since here the atoms are typically far from orthogonal: the ill-posedness of the operator can make the correlation of two distinct atoms very large, i.e. two atoms can look much alike. Therefore one needs conditions which take the structure of the problem into account and work without the concept of coherence. In this paper we develop results for exact recovery of the support of noisy signals. For the two examples from mass spectrometry and digital holography we show that our results lead to practically relevant estimates, so that one may check a priori whether the experimental setup guarantees exact deconvolution with OMP. Especially in the example from digital holography, our analysis may be regarded as a first step towards calculating the resolution power of droplet holography.
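
    For orientation, the following is a minimal, hypothetical OMP implementation in Python/NumPy; it is a sketch of the generic greedy algorithm, not the authors' code, and the dictionary A, sparsity level n_nonzero and tolerance tol are illustrative parameters.

        import numpy as np

        def omp(A, y, n_nonzero, tol=1e-10):
            """Greedy sparse approximation: repeatedly select the atom
            (column of A) most correlated with the residual, then re-fit
            the coefficients on the chosen support by least squares."""
            residual = y.astype(float).copy()
            support, coef = [], np.array([])
            for _ in range(n_nonzero):
                # atom with largest absolute correlation with the residual
                k = int(np.argmax(np.abs(A.T @ residual)))
                if k not in support:
                    support.append(k)
                # least-squares re-fit on the current support
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
                if np.linalg.norm(residual) < tol:
                    break
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x, sorted(support)

    In a deconvolution setting, the columns of A are shifted copies of the point spread function, so neighbouring atoms are strongly correlated; the paper's recovery conditions are precisely about when the selected support still equals the true one in this regime.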

    Sparse Regularization with $\ell^q$ Penalty Term

    We consider the stable approximation of sparse solutions to non-linear operator equations by means of Tikhonov regularization with a subquadratic penalty term. Imposing certain assumptions, which for a linear operator are equivalent to the standard range condition, we derive the usual convergence rate $O(\sqrt{\delta})$ of the regularized solutions in dependence of the noise level $\delta$. Particular emphasis lies on the case where the true solution is known to have a sparse representation in a given basis. In this case, if the differential of the operator satisfies a certain injectivity condition, we can show that the actual convergence rate improves up to $O(\delta)$.
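
    As a sketch of the setting (the notation below is assumed, not quoted from the paper): for a non-linear operator $F$, a basis $(\phi_k)$ and a subquadratic exponent $1 \le q < 2$, the regularized solutions are minimizers of

        T_\alpha(u) = \|F(u) - g^\delta\|^2 + \alpha \sum_k |\langle u, \phi_k \rangle|^q,

    and the rates refer to the error of these minimizers: $O(\sqrt{\delta})$ under the range-type condition, improving to $O(\delta)$ when the true solution is sparse in $(\phi_k)$ and the differential of $F$ satisfies the injectivity condition.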

    Sparsity and Compressed Sensing in Inverse Problems

    This chapter is concerned with two important topics in the context of sparse recovery in inverse and ill-posed problems. In the first part we elaborate conditions for exact recovery. In particular, we describe how both $\ell^1$-minimization and matching pursuit methods can be used to regularize ill-posed problems and, moreover, state conditions which guarantee exact recovery of the support in the sparse case. The focus of the second part is on the incomplete data scenario. We discuss extensions of compressed sensing for specific infinite-dimensional ill-posed measurement regimes. We are able to establish recovery error estimates when adequately relating the isometry constant of the sensing operator, the ill-posedness of the underlying model operator and the regularization parameter. Finally, we very briefly sketch how projected steepest descent iterations can be applied to retrieve the sparse solution.
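
    The $\ell^1$-regularization part can be made concrete with the classical iterated soft-thresholding (thresholded Landweber) iteration; the code below is a minimal sketch under standard assumptions (step size below $1/\|A\|_2^2$), not code from the chapter.

        import numpy as np

        def soft_threshold(x, t):
            # proximal map of t * ||.||_1: entrywise shrinkage towards zero
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, y, alpha, n_iter=500):
            """Minimize 0.5*||A x - y||^2 + alpha*||x||_1 by alternating a
            Landweber (gradient) step with soft-thresholding."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x - step * A.T @ (A @ x - y), step * alpha)
            return x

    The projected steepest descent iterations mentioned at the end have the same gradient-step structure, with the thresholding replaced, roughly, by a projection onto an $\ell^1$-ball.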

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low-rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
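
    To make the forward-backward scheme mentioned above concrete, here is a generic, illustrative sketch (not code from the review); the low-rank prior is shown via singular-value thresholding, the proximal map of the nuclear norm.

        import numpy as np

        def forward_backward(grad_f, prox_g, x0, step, n_iter=300):
            """Forward-backward splitting for min f(x) + g(x): an explicit
            gradient step on the smooth data-fidelity term f, followed by
            a proximal (backward) step on the non-smooth regularizer g."""
            x = x0.copy()
            for _ in range(n_iter):
                x = prox_g(x - step * grad_f(x), step)
            return x

        def prox_nuclear(X, t):
            # prox of t * ||X||_* : soft-threshold the singular values (low-rank prior)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ (np.maximum(s - t, 0.0)[:, None] * Vt)

    Swapping prox_nuclear for entrywise soft-thresholding, group-wise shrinkage, or a total-variation proximal operator recovers the sparsity, group sparsity and TV priors discussed in the review.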