    Elastic-Net Regularization: Error estimates and Active Set Methods

    This paper investigates theoretical properties and efficient numerical algorithms for the so-called elastic-net regularization originating from statistics, which enforces $\ell^1$ and $\ell^2$ regularization simultaneously. The stability of the minimizer and its consistency are studied, and convergence rates for both a priori and a posteriori parameter choice rules are established. Two iterative numerical algorithms of active set type are proposed, and their convergence properties are discussed. Numerical results are presented to illustrate the features of the functional and the algorithms.
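
    For a linear forward operator, the elastic-net functional described above can be minimized by a simple proximal-gradient iteration whose thresholding step handles the $\ell^1$ term. The following is a minimal sketch under assumed notation (the matrix A, data y, and weights alpha and beta are illustrative; the paper itself proposes active-set methods, not this iteration):

```python
import numpy as np

def soft_threshold(x, t):
    # Componentwise soft thresholding: the proximal map of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elastic_net(A, y, alpha, beta, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + alpha*||x||_1 + 0.5*beta*||x||^2
    by proximal gradient descent (ISTA); a sketch, not the paper's
    active-set algorithm."""
    L = np.linalg.norm(A, 2) ** 2 + beta  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + beta * x
        x = soft_threshold(x - grad / L, alpha / L)
    return x
```

    For a diagonal operator the minimizer is known in closed form (soft thresholding followed by shrinkage by 1/(1+beta)), which makes the iteration easy to sanity-check.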

    Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion

    Orthogonal matching pursuit (OMP) is an algorithm for solving sparse approximation problems. Sufficient conditions for exact recovery are known both with and without noise. In this paper we investigate the applicability of OMP to the solution of ill-posed inverse problems in general, and in particular to two deconvolution examples from mass spectrometry and digital holography. In sparse approximation problems one often has to deal with the redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal, and this is quantified by the so-called incoherence. This idea cannot be transferred to ill-posed inverse problems, since here the atoms are typically far from orthogonal: the ill-posedness of the operator means that the correlation of two distinct atoms can become very large, i.e. two atoms can look much alike. Therefore one needs conditions that take the structure of the problem into account and work without the concept of coherence. In this paper we develop results for exact recovery of the support of noisy signals. For the two examples from mass spectrometry and digital holography we show that our results lead to practically relevant estimates, so that one may check a priori whether the experimental setup guarantees exact deconvolution with OMP. In particular, for the example from digital holography our analysis may be regarded as a first step toward calculating the resolution power of droplet holography.
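
    The OMP algorithm analyzed above is short to state: greedily pick the dictionary atom most correlated with the current residual, then re-fit all selected atoms by least squares. A minimal sketch (names and the fixed sparsity level k are illustrative; columns of A are assumed normalized):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: select k columns of A greedily and
    re-fit by least squares on the selected support each step."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection step: least squares on the support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x, sorted(support)
```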

    Optimal Convergence Rates for Tikhonov Regularization in Besov Scales

    In this paper we deal with linear inverse problems and convergence rates for Tikhonov regularization. We consider regularization in a scale of Banach spaces, namely the scale of Besov spaces. We show that regularization in Banach scales differs from regularization in Hilbert scales in the sense that stronger source conditions may lead to weaker convergence rates and vice versa. Moreover, we present optimal source conditions for regularization in Besov scales.
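
    In notation commonly used for this setting (operator $K$, noisy data $g^\delta$, dimension $d$; the symbols are assumptions, not taken verbatim from the paper), Tikhonov regularization in a Besov scale penalizes a Besov norm, which is equivalent to a weighted $\ell^p$ norm of wavelet coefficients:

```latex
T_\alpha(f) \;=\; \|Kf - g^\delta\|^2 \;+\; \alpha\,\|f\|_{B^s_{p,p}}^p,
\qquad
\|f\|_{B^s_{p,p}}^p \;\asymp\; \sum_{j,k} 2^{jp\left(s + \tfrac{d}{2} - \tfrac{d}{p}\right)}
\,\bigl|\langle f, \psi_{j,k}\rangle\bigr|^p .
```

    The weighted-coefficient form makes plausible why the Banach-scale case behaves differently from Hilbert scales: the smoothness index $s$ and the integrability index $p$ interact in the exponent.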

    3D Reconstruction for Partial Data Electrical Impedance Tomography Using a Sparsity Prior

    In electrical impedance tomography the electrical conductivity inside a physical body is computed from electrostatic boundary measurements. The focus of this paper is to extend recent results for the 2D problem to 3D. Prior information about the sparsity and spatial distribution of the conductivity is used to improve reconstructions for the partial data problem, in which Cauchy data are measured only on a subset of the boundary. A sparsity prior is enforced using the $\ell_1$ norm in the penalty term of a Tikhonov functional, and spatial prior information is incorporated by applying a spatially distributed regularization parameter. The optimization problem is solved numerically using a generalized conditional gradient method with soft thresholding. Numerical examples show the effectiveness of the suggested method even for the partial data problem with measurements affected by noise.
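
    The spatially distributed regularization parameter mentioned above amounts to soft thresholding with a location-dependent threshold. A minimal sketch of one thresholded descent step on a linearized problem (the names J, r, tau are illustrative assumptions; this is the thresholding idea only, not the paper's full generalized conditional gradient method):

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding where tau may be a scalar or an array of the same
    shape as x, i.e. a spatially distributed regularization parameter."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sparse_step(J, r, x, tau, step):
    """One sparsity-promoting update: a gradient step on 0.5*||J x - r||^2
    followed by spatially weighted soft thresholding."""
    grad = J.T @ (J @ x - r)
    return soft_threshold(x - step * grad, step * tau)
```

    Setting tau large where the conductivity perturbation is known to be absent and small where it may occur encodes the spatial prior directly in the threshold.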

    Sparse Regularization with $\ell^q$ Penalty Term

    We consider the stable approximation of sparse solutions to non-linear operator equations by means of Tikhonov regularization with a subquadratic penalty term. Imposing certain assumptions, which for a linear operator are equivalent to the standard range condition, we derive the usual convergence rate $O(\sqrt{\delta})$ of the regularized solutions in dependence of the noise level $\delta$. Particular emphasis lies on the case where the true solution is known to have a sparse representation in a given basis. In this case, if the differential of the operator satisfies a certain injectivity condition, we can show that the actual convergence rate improves up to $O(\delta)$.
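
    In notation assumed for this setting (non-linear operator $F$, noisy data $y^\delta$, basis $(\phi_k)$; the symbols are illustrative, not taken verbatim from the paper), the subquadratic Tikhonov functional and the two rate regimes read:

```latex
T_\alpha(x) \;=\; \|F(x) - y^\delta\|^2
\;+\; \alpha \sum_k \bigl|\langle x, \phi_k\rangle\bigr|^q,
\qquad 0 < q < 2,
```

    with $\|x_\alpha^\delta - x^\dagger\| = O(\sqrt{\delta})$ under the range-type condition, improving to $O(\delta)$ when $x^\dagger$ is sparse and the differential of $F$ satisfies the injectivity condition.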

    Sparsity and Compressed Sensing in Inverse Problems

    This chapter is concerned with two important topics in the context of sparse recovery in inverse and ill-posed problems. In the first part we elaborate conditions for exact recovery. In particular, we describe how both $\ell^1$-minimization and matching pursuit methods can be used to regularize ill-posed problems and, moreover, state conditions which guarantee exact recovery of the support in the sparse case. The focus of the second part is on the incomplete data scenario. We discuss extensions of compressed sensing for specific infinite-dimensional ill-posed measurement regimes. We are able to establish recovery error estimates when adequately relating the isometry constant of the sensing operator, the ill-posedness of the underlying model operator, and the regularization parameter. Finally, we briefly sketch how projected steepest descent iterations can be applied to retrieve the sparse solution.
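
    The projected steepest descent iteration mentioned at the end can be sketched as a gradient step on the data misfit followed by a Euclidean projection onto an $\ell^1$-ball. All names, and the choice of a sorting-based projection, are illustrative assumptions rather than the chapter's exact scheme:

```python
import numpy as np

def project_l1(v, radius):
    """Euclidean projection onto the l1-ball of the given radius
    (standard sorting-based algorithm)."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    # Largest index rho with u[rho] * (rho+1) > css[rho] - radius.
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_steepest_descent(A, y, radius, n_iter=300):
    """Steepest descent on 0.5*||A x - y||^2, projected onto an l1-ball;
    a sketch of the projected-iteration idea for sparse recovery."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1(x - step * A.T @ (A @ x - y), radius)
    return x
```

    The $\ell^1$-ball constraint plays the role of the sparsity penalty in constrained form; the radius takes the place of the regularization parameter.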