
    Solution of linear ill-posed problems using overcomplete dictionaries

    In the present paper we consider the application of overcomplete dictionaries to the solution of general ill-posed linear inverse problems. The construction of an adaptive optimal solution for such problems usually relies either on a singular value decomposition or on a representation of the solution via an orthonormal basis. The shortcoming of both approaches is that, in many situations, neither the eigenbasis of the linear operator nor a standard orthonormal basis constitutes an appropriate collection of functions for a sparse representation of the unknown function. In the context of regression problems, an enormous amount of effort has gone into recovering an unknown function using an overcomplete dictionary. One of the most popular methods, Lasso, is based on minimizing the empirical likelihood and requires stringent assumptions on the dictionary, the so-called compatibility conditions. While these conditions may be satisfied for the original dictionary functions, they usually do not hold for their images because of the contraction imposed by the linear operator. In what follows, we bypass this difficulty by a novel approach based on inverting each of the dictionary functions and matching the resulting expansion to the true function, thus avoiding unrealistic assumptions on the dictionary and using Lasso in a predictive setting. We examine both the white-noise and the observational model formulations, and we also discuss how exact inverse images of the dictionary functions can be replaced by their approximate counterparts. Furthermore, we show how the suggested methodology can be extended to the problem of estimating a mixing density in a continuous mixture. For all the situations listed above, we provide oracle inequalities for the risk in a finite-sample setting. Simulation studies confirm the good computational properties of the Lasso-based technique.
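
    The general setup described in this abstract (an ill-posed linear operator, an overcomplete dictionary, and Lasso applied in a predictive fashion) can be illustrated with a short numerical sketch. The code below is not the paper's inversion-based construction; the smoothing operator, the Gaussian-bump dictionary, the noise level, and the Lasso tuning parameter are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact construction): recover f from noisy
# observations y = A f + noise using an overcomplete dictionary Phi and Lasso
# run on the operator images A @ Phi of the dictionary atoms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 400                        # n grid points, p atoms (overcomplete: p > n)
t = np.linspace(0.0, 1.0, n)

# Ill-posed (smoothing) forward operator: discretized integration A f(x) = int_0^x f(s) ds
A = np.tril(np.ones((n, n))) / n

# Overcomplete dictionary: Gaussian bumps with random centres and widths (assumed)
centers = rng.uniform(0.0, 1.0, p)
widths = rng.uniform(0.02, 0.2, p)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / widths[None, :]) ** 2)

# Sparse ground truth: the unknown function is a combination of a few atoms
theta_true = np.zeros(p)
theta_true[rng.choice(p, size=5, replace=False)] = rng.normal(0.0, 3.0, 5)
f_true = Phi @ theta_true

sigma = 0.01
y = A @ f_true + sigma * rng.normal(size=n)    # observational model

# Lasso in a predictive setting: regress the data on the images A @ Phi of the atoms
X = A @ Phi
lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50_000)
lasso.fit(X, y)

f_hat = Phi @ lasso.coef_                      # plug the estimated coefficients back into the dictionary
print("relative L2 error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

    Regressing on the images A @ Phi is exactly the setting in which the compatibility conditions discussed above become problematic; the paper's contribution is to avoid them via inverse images of the atoms, a step this sketch does not attempt.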

    Recovering edges in ill-posed inverse problems: optimality of curvelet frames

    We consider a model problem of recovering a function $f(x_1,x_2)$ from noisy Radon data. The function $f$ to be recovered is assumed smooth apart from a discontinuity along a $C^2$ curve, that is, an edge. We use the continuum white-noise model, with noise level $\varepsilon$. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, in our model they give mean squared errors (MSEs) that tend to zero with noise level $\varepsilon$ only as $O(\varepsilon^{1/2})$ as $\varepsilon \to 0$. A recent innovation, nonlinear shrinkage in the wavelet domain, visually improves edge sharpness and improves MSE convergence to $O(\varepsilon^{2/3})$. However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition of the Radon operator and build "curvelet shrinkage" estimators based on thresholding of the noisy curvelet coefficients. In effect, the estimator detects edges at certain locations and orientations in the Radon domain and automatically synthesizes edges at corresponding locations and directions in the original domain. We prove that the curvelet shrinkage can be tuned so that the estimator attains, within logarithmic factors, the MSE $O(\varepsilon^{4/5})$ as the noise level $\varepsilon \to 0$. This rate of convergence holds uniformly over a class of functions which are $C^2$ except for discontinuities along $C^2$ curves and, except for log terms, is the minimax rate for that class. Our approach is an instance of a general strategy which should apply in other inverse problems; we sketch a deconvolution example.
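
    The core estimation step, thresholding noisy frame coefficients at a level proportional to the noise, can be sketched in a few lines. The example below uses an orthonormal DCT as a stand-in for the curvelet frame and omits the Radon-domain biorthogonal decomposition entirely; the test signal with a jump, the noise level, and the universal-threshold choice are assumptions made purely for illustration.

```python
# Minimal sketch of coefficient shrinkage ("keep or kill" thresholding) in a
# white-noise model. An orthonormal DCT stands in for the curvelet frame; the
# paper's Radon-domain construction is not reproduced here.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n = 1024
x = np.linspace(0.0, 1.0, n)
# Smooth signal with a single jump (a one-dimensional "edge"), chosen for illustration
f = np.where(x < 0.4, np.sin(2 * np.pi * x), 0.3 + np.sin(2 * np.pi * x))

sigma = 0.1
y = f + sigma * rng.normal(size=n)             # discrete white-noise observations

c = dct(y, norm="ortho")                       # orthonormal transform keeps the noise white at level sigma
lam = sigma * np.sqrt(2.0 * np.log(n))         # universal threshold (assumed tuning)
c_hat = np.where(np.abs(c) > lam, c, 0.0)      # hard thresholding of the noisy coefficients

f_hat = idct(c_hat, norm="ortho")
print("MSE:", np.mean((f_hat - f) ** 2))
```

    The point of the curvelet construction is that, for two-dimensional functions with edges along $C^2$ curves, the analogous thresholding in the curvelet domain concentrates the signal in far fewer coefficients than a wavelet or Fourier-type basis, which is what drives the improvement from $O(\varepsilon^{2/3})$ to the near-minimax $O(\varepsilon^{4/5})$ rate.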