31 research outputs found
The degrees of freedom of the Lasso for general design matrix
In this paper, we investigate the degrees of freedom (DOF) of penalized
$\ell_1$ minimization (also known as the Lasso) for linear regression models.
We give a closed-form expression of the DOF of the Lasso response. Namely,
we show that for any given Lasso regularization parameter and any
observed data belonging to a set of full (Lebesgue) measure, the
cardinality of the support of a particular solution of the Lasso problem is an
unbiased estimator of the degrees of freedom. This is achieved without requiring
uniqueness of the Lasso solution. Thus, our result holds true for both the
underdetermined and the overdetermined case, where the latter was originally
studied in \cite{zou}. We also show, by providing a simple counterexample, that
although the DOF theorem of \cite{zou} is correct, their proof contains a
flaw since their divergence formula holds on a different set of full measure
than the one they claim. An effective estimator of the number of degrees
of freedom may have several applications including an objectively guided choice
of the regularization parameter in the Lasso through the SURE (Stein Unbiased
Risk Estimation) framework. Our theoretical findings are illustrated through
several numerical simulations.
Comment: A short version appeared in SPARS'11, June 2011. Previously entitled
"The degrees of freedom of penalized l1 minimization".
Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints
The Tikhonov regularization of linear ill-posed problems with an $\ell_1$
penalty is considered. We recall results for linear convergence rates and
results on exact recovery of the support. Moreover, we derive conditions for
exact support recovery which are especially applicable in the case of ill-posed
problems, where other conditions, e.g. those based on the so-called coherence or
the restricted isometry property, are usually not applicable. The obtained results
also show that the regularized solutions converge not only in the
$\ell_1$-norm but also in the vector space $\ell_0$ (when considered as the
strict inductive limit of the spaces $\mathbb{R}^n$ as $n$ tends to infinity).
Additionally, the relations between different conditions for exact support
recovery and linear convergence rates are investigated.
With an imaging example from digital holography, the applicability of the
obtained results is illustrated, i.e., one may check a priori whether the
experimental setup guarantees exact recovery with Tikhonov regularization with
sparsity constraints.
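To illustrate the kind of check the last paragraph refers to, here is a minimal Python sketch that solves the $\ell_1$-penalized Tikhonov problem for a mildly ill-posed convolution operator and compares the recovered support with the true one. The Gaussian kernel, the spike positions, the regularization weight, and the use of scikit-learn's Lasso as the solver are illustrative assumptions, not the paper's a priori criterion.

import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 200

# Mildly ill-posed forward operator: convolution with a Gaussian kernel.
t = np.arange(n)
kernel = np.exp(-0.5 * (t / 4.0) ** 2)
A = toeplitz(kernel)                 # symmetric Toeplitz blur matrix
A /= np.linalg.norm(A, axis=0)       # normalize the atoms (columns)

# Sparse ground truth with well separated spikes (illustrative choice).
x0 = np.zeros(n)
x0[[30, 90, 160]] = [1.0, -0.7, 1.3]
y = A @ x0 + 0.001 * rng.standard_normal(n)

# l1-penalized Tikhonov functional 1/2*||Ax - y||^2 + alpha_reg*||x||_1,
# solved with scikit-learn's Lasso (its alpha is rescaled by 1/n internally).
alpha_reg = 0.01
x_hat = Lasso(alpha=alpha_reg / n, fit_intercept=False,
              max_iter=100_000).fit(A, y).coef_

support_true = set(np.flatnonzero(x0))
support_rec = set(np.flatnonzero(np.abs(x_hat) > 1e-6))
print("exact support recovery:", support_rec == support_true)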
Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization
This paper shows that the solutions to various convex $\ell_1$ minimization
problems are \emph{unique} if and only if a common set of conditions is
satisfied. This result applies broadly to the basis pursuit model, basis
pursuit denoising model, Lasso model, as well as other $\ell_1$ models that
either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\leq\sigma$, where
$f$ is a strictly convex function. For these models, this paper proves that,
given a solution $x^*$ and defining $I=\supp(x^*)$ and $s=\sign(x^*_I)$,
$x^*$ is the unique solution if and only if $A_I$ has full column rank and
there exists $y$ such that $A_I^T y=s$ and $|a_i^T y|<1$ for $i\notin I$. This
condition is previously known to be sufficient for the basis pursuit model to
have a unique solution supported on $I$. Indeed, it is also necessary, and
applies to a variety of other models. The paper also discusses ways to
recognize unique solutions and verify the uniqueness conditions numerically.
Comment: 6 pages; revised version; submitted.
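The condition above lends itself to a direct numerical check. Below is a minimal Python sketch that tests the sufficient certificate obtained from the least-norm solution of A_I^T y = s; if that particular y fails the strict inequality, existence of another feasible y would have to be decided with a convex feasibility solver, which is not shown here. The function name and tolerances are illustrative choices.

import numpy as np

def certifies_uniqueness(A, x_star, tol=1e-10):
    """Sufficient numerical certificate for the uniqueness condition:
    A_I has full column rank and the least-norm y solving A_I^T y = s
    satisfies |a_i^T y| < 1 for every column a_i outside the support I."""
    I = np.flatnonzero(np.abs(x_star) > tol)
    s = np.sign(x_star[I])
    A_I = A[:, I]

    if np.linalg.matrix_rank(A_I) < len(I):
        return False                      # full-column-rank requirement fails

    # Least-norm candidate for the dual certificate y with A_I^T y = s.
    y, *_ = np.linalg.lstsq(A_I.T, s, rcond=None)
    if not np.allclose(A_I.T @ y, s, atol=1e-8):
        return False                      # the linear system itself is infeasible

    off = np.setdiff1d(np.arange(A.shape[1]), I)
    return bool(np.all(np.abs(A[:, off].T @ y) < 1.0))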
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of such
priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
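Item (iii) refers to the forward-backward proximal splitting scheme; for the sparsity ($\ell_1$) prior it reduces to iterative soft-thresholding. A minimal Python sketch under that assumption (dense operator, fixed step size) follows; it is not the general partly-smooth setting of the chapter.

import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 1/2*||Ax - y||^2 + lam*||x||_1.
    Step size 1/L with L = ||A||^2 guarantees convergence for this smooth term."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x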
Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion
The orthogonal matching pursuit (OMP) is an algorithm to solve sparse
approximation problems. Sufficient conditions for exact recovery are known with
and without noise. In this paper we investigate the applicability of OMP
for the solution of ill-posed inverse problems in general, and in particular
for two deconvolution examples from mass spectrometry and digital holography,
respectively.
In sparse approximation problems one often has to deal with the
redundancy of a dictionary, i.e. the atoms are not linearly independent.
However, one expects them to be approximately orthogonal, and this is
quantified by the so-called incoherence. This idea cannot be transferred to
ill-posed inverse problems since here the atoms are typically far from
orthogonal: the ill-posedness of the operator causes the correlation of
two distinct atoms to become large, i.e. two atoms can look very much alike.
Therefore one needs conditions which take the structure of the problem into
account and work without the concept of coherence. In this paper we develop
results for exact recovery of the support of noisy signals. In the two examples
from mass spectrometry and digital holography we show that our results lead to
practically relevant estimates such that one may check a priori whether the
experimental setup guarantees exact deconvolution with OMP. Especially in the
example from digital holography, our analysis may be regarded as a first step
toward calculating the resolution power of droplet holography.
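For reference, a textbook version of OMP in Python; this fixes the algorithm the abstract analyses but does not reproduce the paper's recovery conditions. The stopping rule (a fixed number of atoms or a small residual) is an illustrative choice.

import numpy as np

def omp(A, y, n_nonzero, tol=1e-10):
    """Orthogonal matching pursuit: greedily select the atom (column of A,
    assumed roughly normalized) most correlated with the current residual,
    then re-fit the coefficients by least squares on the selected atoms."""
    x = np.zeros(A.shape[1])
    support = []
    residual = y.astype(float).copy()
    coef = np.array([])
    for _ in range(n_nonzero):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0              # never pick the same atom twice
        support.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) <= tol:      # stop early on a tiny residual
            break
    x[support] = coef
    return x, support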
Matrix-free interior point method for compressed sensing problems
We consider a class of optimization problems for sparse signal reconstruction
which arise in the field of Compressed Sensing (CS). A plethora of approaches
and solvers exist for such problems, for example GPSR, FPC_AS, SPGL1, NESTA,
l1_ls, and PDCO, to mention a few. Compressed Sensing applications
lead to very well-conditioned optimization problems which can therefore be
solved easily by simple first-order methods. Interior point methods (IPMs)
rely on the Newton method and hence use second-order information. They have
numerous advantageous features and one clear drawback: being a second-order
approach, they need to solve systems of linear equations, and this operation
has (in the general dense case) an $\mathcal{O}(n^3)$ computational complexity.
Attempts have been made to specialize IPMs to sparse reconstruction problems,
and they have led to interesting developments implemented in the l1_ls and
PDCO software packages. We
go a few steps further. First, we use the matrix-free interior point method, an
approach which redesigns IPM to avoid the need to explicitly formulate (and
store) the Newton equation systems. Secondly, we exploit the special features
of the signal processing matrices within the matrix-free IPM. Two such features
are of particular interest: an excellent conditioning of these matrices and the
ability to perform inexpensive (low complexity) matrix-vector multiplications
with them. Computational experience with large-scale one-dimensional signals
confirms that the new approach is efficient and offers an attractive
alternative to other state-of-the-art solvers.
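The essence of the matrix-free approach is that the Newton systems are solved by an iterative method driven only by products with A and A^T, without ever forming A^T A. The Python sketch below illustrates this with SciPy's LinearOperator and conjugate gradients; the subsampled DCT operator and the diagonal term standing in for the barrier scaling are illustrative assumptions, not the solver described in the abstract.

import numpy as np
from scipy.fft import dct, idct
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, m = 4096, 1024
rows = np.sort(rng.choice(n, size=m, replace=False))   # random rows of the DCT

# Matrix-free sensing operator: A x = (DCT x)[rows]; A^T r scatters r back.
def A_mv(x):
    return dct(x, norm="ortho")[rows]

def At_mv(r):
    z = np.zeros(n)
    z[rows] = r
    return idct(z, norm="ortho")

# Schematic inner system of a Newton step, (A^T A + D) dx = rhs, with D a
# positive diagonal standing in for the barrier/scaling term. The operator
# is never formed explicitly; CG only needs its action on a vector.
D = 1e-2 * np.ones(n)
H = LinearOperator((n, n),
                   matvec=lambda v: At_mv(A_mv(np.ravel(v))) + D * np.ravel(v))

rhs = At_mv(rng.standard_normal(m))
dx, info = cg(H, rhs, maxiter=200)
print("CG converged:", info == 0)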
Une exploration numérique des performances de l'échantillonnage compressé
This article numerically explores the efficiency of $\ell_1$ minimization for the recovery of sparse signals from compressed measurements, in the noiseless case. We propose a greedy algorithm that computes sparse vectors which are difficult to recover by $\ell_1$ minimization. This algorithm is inspired by topological criteria for $\ell_1$-identifiability. We evaluate the theoretical analysis numerically without resorting to Monte-Carlo sampling, which tends to avoid pathological cases. This makes it possible to put to the test identifiability criteria based on polytope projections and restricted isometry properties.
A Numerical Exploration of Compressed Sampling Recovery
This paper explores numerically the efficiency of $\ell_1$ minimization for the recovery of sparse signals from compressed sampling measurements in the noiseless case. Inspired by topological criteria for $\ell_1$-identifiability, a greedy algorithm computes sparse vectors that are difficult to recover by $\ell_1$ minimization. We evaluate numerically the theoretical analysis without resorting to Monte-Carlo sampling, which tends to avoid worst-case scenarios. This allows one to challenge sparse recovery conditions based on polytope projection and on the restricted isometry property.
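The basic question in these experiments, whether $\ell_1$ minimization recovers a given sparse vector from its noiseless measurements, can be tested directly. A minimal Python sketch follows, casting basis pursuit as a linear program with SciPy; the random Gaussian matrix in the example is an illustrative choice, and the paper's greedy construction of hard-to-recover vectors is not reproduced here.

import numpy as np
from scipy.optimize import linprog

def l1_recovers(A, x0, tol=1e-6):
    """Solve basis pursuit  min ||x||_1  s.t.  A x = A x0  as a linear program
    and report whether the minimizer coincides with x0 (l1-identifiability)."""
    n = A.shape[1]
    b = A @ x0
    # Split x = xp - xm with xp, xm >= 0, so that ||x||_1 = sum(xp) + sum(xm).
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n),
                  method="highs")
    if not res.success:
        return False
    x_hat = res.x[:n] - res.x[n:]
    return bool(np.linalg.norm(x_hat - x0, ord=np.inf) < tol)

# Illustrative example: a random Gaussian matrix and a 2-sparse vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x0 = np.zeros(100)
x0[[3, 57]] = [1.0, -2.0]
print(l1_recovers(A, x0))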
Challenging Restricted Isometry Constants with Greedy Pursuit
This paper proposes greedy numerical schemes to compute lower bounds of the restricted isometry constants that are central in compressed sensing theory. Matrices with small restricted isometry constants enable stable recovery from a small set of random linear measurements. We challenge this compressed sampling recovery using greedy pursuit algorithms that detect ill-conditioned sub-matrices. It turns out that these sub-matrices have large isometry constants and hinder the performance of compressed sensing recovery.
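A minimal Python sketch of the idea: for any column subset S, the extreme singular values of A_S give the lower bound delta(S) = max(sigma_max^2 - 1, 1 - sigma_min^2) on the order-|S| restricted isometry constant, and a greedy search tries to grow an ill-conditioned subset. The seeding with the most coherent pair and the specific selection rule are illustrative assumptions, not necessarily the schemes of the paper; the columns of A are assumed to have unit norm.

import numpy as np

def ric_of_submatrix(A, S):
    """delta(S) = max(sigma_max^2 - 1, 1 - sigma_min^2) of the column
    submatrix A_S; any such value lower-bounds the order-|S| restricted
    isometry constant of A (columns of A assumed to have unit norm)."""
    sv = np.linalg.svd(A[:, S], compute_uv=False)
    return max(sv[0] ** 2 - 1.0, 1.0 - sv[-1] ** 2)

def greedy_ric_lower_bound(A, k):
    """Greedily grow an ill-conditioned subset of k columns. Selection rule
    (an illustrative choice): add the column that maximizes the bound of the
    enlarged submatrix, starting from the most coherent pair of columns."""
    n = A.shape[1]
    gram = np.abs(A.T @ A - np.eye(n))        # off-diagonal correlations
    i, j = np.unravel_index(np.argmax(gram), gram.shape)
    S = [int(i), int(j)]
    while len(S) < k:
        candidates = [c for c in range(n) if c not in S]
        S.append(max(candidates, key=lambda c: ric_of_submatrix(A, S + [c])))
    return ric_of_submatrix(A, S), S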