Blockwise SVD with error in the operator and application to blind deconvolution
We consider linear inverse problems in a nonparametric statistical framework.
Both the signal and the operator are unknown and subject to measurement error.
We establish minimax rates of convergence under squared error loss when the
operator admits a blockwise singular value decomposition (blockwise SVD) and
the smoothness of the signal is measured in a Sobolev sense. We construct a
nonlinear procedure adapting simultaneously to the unknown smoothness of both
the signal and the operator and achieving the optimal rate of convergence to
within logarithmic terms. When the noise level in the operator is dominant, by
taking full advantage of the blockwise SVD property, we demonstrate that the
blockwise SVD procedure outperforms classical methods based on Galerkin projection
or nonlinear wavelet thresholding. We subsequently apply our abstract framework
to the specific case of blind deconvolution on the torus and on the sphere.
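On the torus, the Fourier basis diagonalizes periodic convolution, so the SVD of the convolution operator is available in closed form via the DFT. The following NumPy sketch illustrates that connection with a plain spectral-cutoff inversion; it is not the adaptive blockwise procedure of the paper, and the test signal, box kernel, and cutoff level are illustrative assumptions:

```python
import numpy as np

def deconvolve_cutoff(y, g, cutoff):
    """Spectral-cutoff deconvolution on the torus.

    y: noisy blurred samples; g: known kernel samples on the same grid.
    The DFT diagonalizes periodic convolution, so dividing by the kernel's
    Fourier coefficients inverts the operator; coefficients whose kernel
    response is below `cutoff` are zeroed to keep noise from blowing up.
    """
    Y, G = np.fft.fft(y), np.fft.fft(g)
    keep = np.abs(G) > cutoff
    F = np.where(keep, Y / np.where(keep, G, 1.0), 0.0)
    return np.real(np.fft.ifft(F))

# Toy example: recover a smooth signal blurred by a periodic box kernel.
n = 256
t = np.arange(n) / n
f = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
g = np.zeros(n)
g[:9] = 1.0 / 9.0                                  # box blur of width 9
y = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
y += 0.01 * np.random.default_rng(0).normal(size=n)  # observation noise
f_hat = deconvolve_cutoff(y, g, cutoff=0.05)
```

Frequencies where the kernel response falls below the cutoff are discarded rather than inverted, a crude global analogue of the regularization that the adaptive blockwise procedure performs block by block.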
Recovering edges in ill-posed inverse problems: optimality of curvelet frames
We consider a model problem of recovering a function from noisy Radon data. The function to be recovered is assumed smooth apart from a discontinuity along a curve, that is, an edge. We use the continuum white-noise model, with noise level ε.
Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model mean squared errors (MSEs) that tend to zero with the noise level only as O(ε^(1/2)) as ε → 0. A recent innovation--nonlinear shrinkage in the wavelet domain--visually improves edge sharpness and improves MSE convergence to O(ε^(2/3)). However, as we show here, this rate is not optimal.
In fact, essentially optimal performance is obtained by deploying the recently-introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition of the Radon operator and build "curvelet shrinkage" estimators based on thresholding of the noisy curvelet coefficients. In effect, the estimator detects edges at certain locations and orientations in the Radon domain and automatically synthesizes edges at corresponding locations and directions in the original domain.
We prove that the curvelet shrinkage can be tuned so that the estimator will attain, within logarithmic factors, the MSE O(ε^(4/5)) as the noise level ε → 0. This rate of convergence holds uniformly over a class of functions which are C^2 except for discontinuities along C^2 curves, and (except for log terms) is the minimax rate for that class. Our approach is an instance of a general strategy which should apply in other inverse problems; we sketch a deconvolution example.
Poisson inverse problems
In this paper we focus on nonparametric estimators in inverse problems for
Poisson processes involving the use of wavelet decompositions. Adopting an
adaptive wavelet Galerkin discretization, we find that our method combines the
well-known theoretical advantages of wavelet--vaguelette decompositions for
inverse problems in terms of optimally adapting to the unknown smoothness of
the solution, together with the remarkably simple closed-form expressions of
Galerkin inversion methods. Adapting the results of Barron and Sheu [Ann.
Statist. 19 (1991) 1347--1369] to the context of log-intensity functions
approximated by wavelet series with the use of the Kullback--Leibler distance
between two point processes, we also present an asymptotic analysis of
convergence rates that justifies our approach. In order to shed some light on
the theoretical results obtained and to examine the accuracy of our estimates
in finite samples, we illustrate our method by the analysis of some simulated
examples.
Comment: Published at http://dx.doi.org/10.1214/009053606000000687 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
Very High Dimensional Semiparametric Models
Very high-dimensional semiparametric models play a major role in many areas, in particular in signal detection problems where sparse signals or sparse events are hidden among high-dimensional noise. Concrete examples are genomic studies in biostatistics and imaging problems. In this broad context, a wide range of statistical inference and model selection problems has been discussed for high-dimensional data.
A SURE Approach for Digital Signal/Image Deconvolution Problems
In this paper, we are interested in the classical problem of restoring data
degraded by a convolution and the addition of a white Gaussian noise. The
originality of the proposed approach is twofold. First, we formulate the
restoration problem as a nonlinear estimation problem leading to the
minimization of a criterion derived from Stein's unbiased quadratic risk
estimate. Second, the deconvolution procedure is performed with arbitrary
analysis and synthesis frames, which may or may not be overcomplete. New
theoretical results concerning the calculation of the variance of Stein's risk
estimate are also provided in this work. Simulations carried out on natural
images show the good performance of our method compared with conventional
wavelet-based restoration methods.
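The criterion in question can be illustrated in its simplest setting: Stein's unbiased risk estimate (SURE) for soft thresholding of Gaussian-noise observations in an orthonormal (e.g. wavelet) basis, as in classical SureShrink. This is a deliberately simplified sketch, not the authors' frame-based deconvolution algorithm; the sparse test vector and noise level are invented for the demo:

```python
import numpy as np

def sure_soft_threshold(y, sigma):
    """Choose a soft-threshold level by minimizing Stein's unbiased risk
    estimate for coordinates y_i = theta_i + N(0, sigma^2):

        SURE(t) = n*sigma^2 + sum_i min(y_i^2, t^2)
                  - 2*sigma^2 * #{i : |y_i| <= t},

    an unbiased estimate of the l2 risk of soft thresholding at level t.
    """
    n = len(y)
    candidates = np.sort(np.abs(y))
    risks = [n * sigma**2
             + np.sum(np.minimum(y**2, t**2))
             - 2 * sigma**2 * np.sum(np.abs(y) <= t)
             for t in candidates]
    t_best = candidates[int(np.argmin(risks))]
    est = np.sign(y) * np.maximum(np.abs(y) - t_best, 0.0)
    return est, t_best

# Sparse mean vector observed in unit-variance Gaussian noise.
rng = np.random.default_rng(1)
theta = np.zeros(1000)
theta[:20] = 5.0
y = theta + rng.normal(size=1000)
est, t = sure_soft_threshold(y, sigma=1.0)
```

Minimizing SURE over the observed magnitudes gives a data-driven threshold without knowing theta, the same unbiased-risk principle the paper extends to overcomplete frames and deconvolution.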
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
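The forward-backward scheme mentioned in (iii) can be sketched for the most common low-complexity prior, the l1 norm: a gradient (forward) step on the smooth data-fidelity term followed by the proximal (backward) step of the regularizer, which for the l1 norm is soft thresholding. A minimal NumPy version, where the problem sizes, regularization weight, and iteration count are illustrative assumptions:

```python
import numpy as np

def forward_backward_l1(A, b, lam, n_iter=500):
    """Forward-backward (proximal gradient / ISTA) iteration for the
    sparsity-regularized least-squares problem

        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    """
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)       # forward step on the smooth term
        z = x - step * grad
        # backward step: prox of step*lam*||.||_1 is soft thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Toy sparse-recovery problem: 3-sparse vector, 60 noisy measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120)) / np.sqrt(60)
x_true = np.zeros(120)
x_true[[3, 40, 77]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = forward_backward_l1(A, b, lam=0.05)
```

The same iteration handles any of the priors surveyed in the chapter by swapping the soft-thresholding line for the proximal operator of the chosen regularizer (group thresholding, singular-value thresholding for low-rank, etc.).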