Solution of linear ill-posed problems using overcomplete dictionaries
In the present paper we consider the application of overcomplete dictionaries to
the solution of general ill-posed linear inverse problems. Construction of an
adaptive optimal solution for such problems usually relies either on a singular
value decomposition or representation of the solution via an orthonormal basis.
The shortcoming of both approaches lies in the fact that, in many situations,
neither the eigenbasis of the linear operator nor a standard orthonormal basis
constitutes an appropriate collection of functions for sparse representation of
the unknown function. In the context of regression problems, there has been an
enormous amount of effort to recover an unknown function using an overcomplete
dictionary. One of the most popular methods, Lasso, is based on minimizing the
empirical likelihood and requires stringent assumptions on the dictionary, the
so-called compatibility conditions. While these conditions may be satisfied
for the original dictionary functions, they usually do not hold for their
images due to the contraction imposed by the linear operator. In what follows, we
bypass this difficulty by a novel approach which is based on inverting each of
the dictionary functions and matching the resulting expansion to the true
function, thus avoiding unrealistic assumptions on the dictionary and using
Lasso in a predictive setting. We examine both the white noise and the
observational model formulations and also discuss how exact inverse images of
the dictionary functions can be replaced by their approximate counterparts.
Furthermore, we show how the suggested methodology can be extended to the
problem of estimation of a mixing density in a continuous mixture. For all the
situations listed above, we provide oracle inequalities for the risk in a
finite-sample setting. Simulation studies confirm the good computational properties
of the Lasso-based technique.
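To make the setup concrete, here is a minimal discretized sketch in Python of the general idea as described above: each dictionary function is "inverted" through the operator so that Lasso can be run in a predictive setting, rather than applied directly to the operator images of the dictionary. Everything specific here (the Gaussian smoothing operator, the bump dictionary, the truncated least-squares inverse images, and the tuning parameters) is a hypothetical stand-in for illustration; it is not the paper's exact estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 400                              # grid points, dictionary size (p > n: overcomplete)
t = np.linspace(0.0, 1.0, n)

# Hypothetical ill-conditioned linear operator A: Gaussian smoothing on the grid.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05**2))
A /= A.sum(axis=1, keepdims=True)

# Overcomplete dictionary Phi: Gaussian bumps with random centres and widths.
centres = rng.uniform(0.0, 1.0, p)
widths = rng.uniform(0.02, 0.10, p)
Phi = np.exp(-((t[:, None] - centres) ** 2) / (2 * widths**2))

# Sparse truth f = Phi @ theta, observed through A with white noise.
theta = np.zeros(p)
theta[rng.choice(p, 6, replace=False)] = rng.normal(0.0, 2.0, 6)
f = Phi @ theta
y = A @ f + 0.01 * rng.standard_normal(n)

# "Inverse images" psi_j with A.T @ psi_j ~ phi_j, via truncated least squares;
# the truncation plays the role of the approximate inverse images mentioned above.
Psi = np.linalg.lstsq(A.T, Phi, rcond=1e-8)[0]
b = Psi.T @ y                                # noisy estimates of the inner products <phi_j, f>

# Predictive Lasso step: match the expansion Phi @ theta to these estimates through
# the Gram matrix G = Phi.T @ Phi (an l1-penalized regression of b on G; the paper's
# criterion is related but not identical).
G = Phi.T @ Phi
theta_hat = Lasso(alpha=1e-3, max_iter=100_000).fit(G, b).coef_
f_hat = Phi @ theta_hat
print("relative L2 error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))
```

Note that the Lasso is never applied to the columns of A @ Phi, so no compatibility condition on the operator images is invoked; only the dictionary itself enters the penalized fit.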
Recovering edges in ill-posed inverse problems: optimality of curvelet frames
We consider a model problem of recovering a function from noisy Radon data. The function to be recovered is assumed smooth apart from a discontinuity along a curve, that is, an edge. We use the continuum white-noise model, with noise level ε.
Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model mean squared errors (MSEs) that tend to zero with noise level ε only as O(ε^{1/2}) as ε → 0. A recent innovation--nonlinear shrinkage in the wavelet domain--visually improves edge sharpness and improves MSE convergence to O(ε^{2/3}). However, as we show here, this rate is not optimal.
In fact, essentially optimal performance is obtained by deploying the recently-introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition of the Radon operator and build "curvelet shrinkage" estimators based on thresholding of the noisy curvelet coefficients. In effect, the estimator detects edges at certain locations and orientations in the Radon domain and automatically synthesizes edges at corresponding locations and directions in the original domain.
We prove that the curvelet shrinkage can be tuned so that the estimator will attain, within logarithmic factors, the MSE O(ε^{4/5}) as noise level ε → 0. This rate of convergence holds uniformly over a class of functions which are C^2 except for discontinuities along C^2 curves, and (except for log terms) is the minimax rate for that class. Our approach is an instance of a general strategy which should apply in other inverse problems; we sketch a deconvolution example.
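As a rough illustration of the shrinkage step only, the Python sketch below soft-thresholds frame coefficients of a noisy image containing a curved edge. A separable wavelet frame (via PyWavelets) stands in for the curvelet tight frame, and thresholding is done directly in the image domain rather than through the biorthogonal decomposition of the Radon operator, so this shows the mechanics of coefficient shrinkage, not the estimator or its rates; the function name and the threshold choice are hypothetical.

```python
import numpy as np
import pywt

def shrink_coefficients(img, noise_sigma, wavelet="db4", level=4, k=3.0):
    """Soft-threshold transform-domain coefficients of a noisy image.

    Illustration only: a wavelet frame stands in for the curvelet tight frame,
    and a single scale-independent threshold k * sigma is used for simplicity.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    thr = k * noise_sigma
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

# Hypothetical usage: an image that is smooth except along a curved edge, plus white noise.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
clean = (x**2 + y**2 < 0.5).astype(float)      # disc: the boundary circle is the edge
sigma = 0.2
noisy = clean + sigma * rng.standard_normal(clean.shape)
denoised = shrink_coefficients(noisy, sigma)
print("MSE noisy:   ", np.mean((noisy - clean) ** 2))
print("MSE denoised:", np.mean((denoised - clean) ** 2))
```

In the paper's setting, the analogous thresholding is applied to noisy curvelet coefficients obtained from the Radon-domain data through the curvelet-based biorthogonal decomposition of the Radon operator.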