Model Selection with Low Complexity Priors
Regularization plays a pivotal role when facing the challenge of solving
ill-posed inverse problems, where the number of observations is smaller than
the ambient dimension of the object to be estimated. A line of recent work has
studied regularization models with various types of low-dimensional structures.
In such settings, the general approach is to solve a regularized optimization
problem, which combines a data fidelity term and some regularization penalty
that promotes the assumed low-dimensional/simple structure. This paper provides
a general framework to capture this low-dimensional structure through what we
coin partly smooth functions relative to a linear manifold. These are convex,
non-negative, closed and finite-valued functions that will promote objects
living on low-dimensional subspaces. This class of regularizers encompasses
many popular examples such as the ℓ1 norm, the ℓ1-ℓ2 norm (group sparsity), and
several others including the ℓ∞ norm. We also show that the set of
partly smooth functions relative to a linear manifold is closed under addition
and pre-composition by a linear operator, which makes it possible to cover mixed
regularization and the so-called analysis-type priors (e.g. total variation,
fused Lasso, finite-valued polyhedral gauges). Our main result presents a
unified sharp analysis of exact and robust recovery of the low-dimensional
subspace model associated with the object to be recovered from partial measurements.
This analysis is illustrated on a number of special and previously studied
cases, as well as on the performance of ℓ∞ regularization in a compressed
sensing scenario.
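To fix ideas, the generic recovery problem described in this abstract can be written as follows (the notation Φ, λ, J is ours, chosen for illustration, not quoted from the paper):

```latex
% Low-complexity regularized recovery from partial measurements:
% y = \Phi x_0 + w are m < n noisy observations of the unknown x_0,
% and J is a partly smooth (convex, non-negative, closed, finite-valued)
% regularizer promoting the assumed low-dimensional structure.
\min_{x \in \mathbb{R}^n} \ \frac{1}{2}\,\|y - \Phi x\|_2^2 + \lambda\, J(x)
% Typical instances of J covered by the framework:
%   J(x) = \|x\|_1                      (sparsity, the \ell^1 norm)
%   J(x) = \textstyle\sum_b \|x_b\|_2   (group sparsity, the \ell^1-\ell^2 norm)
%   J(x) = \|x\|_\infty                 (the \ell^\infty norm)
%   J(x) = \|D^\top x\|_1               (analysis prior, e.g. total variation)
```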
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass,
as popular examples, sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
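A minimal sketch of the forward-backward proximal splitting scheme mentioned in (iii), here specialized to the ℓ1 regularizer; all names, data, and step-size choices below are ours for illustration, not the paper's code:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """Solve min_x 0.5*||y - A x||^2 + lam*||x||_1 by forward-backward splitting.

    Each iteration takes an explicit gradient step on the smooth data-fidelity
    term, then an implicit (proximal) step on the nonsmooth regularizer.
    """
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

# Tiny compressed-sensing-style example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100); x0[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x0 + 0.01 * rng.standard_normal(40)
x_hat = forward_backward(A, y, lam=0.05)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```

Soft-thresholding is the proximal map of the ℓ1 norm; swapping in the proximal operator of another low-complexity regularizer covers the other priors discussed above.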
Model Consistency of Partly Smooth Regularizers
This paper studies least-squares regression penalized with partly smooth
convex regularizers. This class of functions is very large and versatile,
allowing one to promote solutions conforming to some notion of low complexity.
Indeed, they force solutions of variational problems to belong to a
low-dimensional manifold (the so-called model) which is stable under small
perturbations of the function. This property is crucial to make the underlying
low-complexity model robust to small noise. We show that a generalized
"irrepresentable condition" implies stable model selection under small noise
perturbations in the observations and the design matrix, when the
regularization parameter is tuned proportionally to the noise level. This
condition is shown to be almost necessary. We then show that this
condition implies model consistency of the regularized estimator. That is, with
a probability tending to one as the number of measurements increases, the
regularized estimator belongs to the correct low-dimensional model manifold.
This work unifies and generalizes several previous results, where model
consistency is known to hold for sparse, group sparse, total variation and
low-rank regularizations.
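As one concrete special case, the generalized condition reduces, for the Lasso (ℓ1 regularization with design matrix A and true support I), to the classical irrepresentable condition well known in the literature (notation ours):

```latex
% Classical Lasso irrepresentable condition: A_I collects the columns of
% the design matrix A indexed by the support I of the true vector x_0,
% and A_{I^c} the remaining columns.
\big\| A_{I^c}^\top A_I \,\big( A_I^\top A_I \big)^{-1}
       \operatorname{sign}\!\big( (x_0)_I \big) \big\|_\infty < 1
```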
Sensitivity Analysis for Mirror-Stratifiable Convex Functions
This paper provides a set of sensitivity analysis and activity identification
results for a class of convex functions with a strong geometric structure, that
we coined "mirror-stratifiable". These functions are such that there is a
bijection between a primal and a dual stratification of the space into
partitioning sets, called strata. This pairing is crucial to track the strata
that are identifiable by solutions of parametrized optimization problems or by
iterates of optimization algorithms. This class of functions encompasses all
regularizers routinely used in signal and image processing, machine learning,
and statistics. We show that this "mirror-stratifiable" structure enjoys a nice
sensitivity theory, allowing us to study stability of solutions of optimization
problems to small perturbations, as well as activity identification of
first-order proximal splitting-type algorithms. Existing results in the
literature typically assume that, under a non-degeneracy condition, the active
set associated to a minimizer is stable to small perturbations and is
identified in finite time by optimization schemes. In contrast, our results do
not require any non-degeneracy assumption: in consequence, the optimal active
set is not necessarily stable anymore, but we are able to track precisely the
set of identifiable strata. We show that these results have crucial implications
when solving challenging ill-posed inverse problems via regularization, a
typical scenario where the non-degeneracy condition is not fulfilled. Our
theoretical results, illustrated by numerical simulations, allow us to
characterize the instability behaviour of the regularized solutions by
locating the set of all low-dimensional strata that can potentially be
identified by these solutions.
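As a toy numerical illustration of activity identification (our own minimal example, not the paper's simulations), one can run a forward-backward scheme on a small Lasso problem and record the active set at each iterate; the support stabilizes after finitely many iterations:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Track the active set (nonzero coordinates) along forward-backward iterates.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x0 = np.zeros(60); x0[[5, 20]] = [2.0, -1.5]
y = A @ x0 + 0.02 * rng.standard_normal(30)

lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(60)
supports = []
for _ in range(300):
    x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    supports.append(frozenset(np.flatnonzero(x != 0.0)))

# Identification in finite time: the active set stops changing.
first_stable = next(k for k in range(len(supports))
                    if all(s == supports[k] for s in supports[k:]))
print("support identified at iteration", first_stable,
      "->", sorted(supports[-1]))
```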
A multi-level preconditioned Krylov method for the efficient solution of algebraic tomographic reconstruction problems
Classical iterative methods for tomographic reconstruction include the class
of Algebraic Reconstruction Techniques (ART). Convergence of these stationary
linear iterative methods is, however, notably slow. In this paper we propose the
use of Krylov solvers for tomographic linear inversion problems. These advanced
iterative methods feature fast convergence at the expense of a higher
computational cost per iteration, causing them to be generally uncompetitive
without the inclusion of a suitable preconditioner. Combining elements from
standard multigrid (MG) solvers and the theory of wavelets, a novel
wavelet-based multi-level (WMG) preconditioner is introduced, which is shown to
significantly speed up Krylov convergence. The performance of the
WMG-preconditioned Krylov method is analyzed through a spectral analysis, and
the approach is compared to existing methods like the classical Simultaneous
Iterative Reconstruction Technique (SIRT) and unpreconditioned Krylov methods
on a 2D tomographic benchmark problem. Numerical experiments are promising,
showing the method to be competitive with the classical Algebraic
Reconstruction Techniques in terms of convergence speed and overall performance
(CPU time), as well as precision of the reconstruction.
Comment: Journal of Computational and Applied Mathematics (2014), 26 pages, 13 figures, 3 tables
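A minimal sketch of where a preconditioner enters a Krylov solver for the tomographic normal equations. The paper's WMG preconditioner is wavelet-based and multi-level; the Jacobi (diagonal) preconditioner below is only a simple stand-in to show the plumbing, and the matrices and names are our own toy setup:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import cg, LinearOperator

# Stand-in for a sparse tomographic projection matrix W and sinogram p.
rng = np.random.default_rng(2)
W = sprandom(200, 100, density=0.05, random_state=2, format="csr")
x_true = rng.standard_normal(100)
p = W @ x_true

# Normal equations W^T W x = W^T p, solved with preconditioned CG.
WtW = (W.T @ W).tocsr()
rhs = W.T @ p

# Jacobi (diagonal) preconditioner wrapped as a LinearOperator; the
# wavelet-based multi-level cycle of the paper would replace this matvec.
d = WtW.diagonal()
d[d == 0.0] = 1.0                     # guard against empty columns
M = LinearOperator(WtW.shape, matvec=lambda v: v / d)

x_hat, info = cg(WtW, rhs, M=M, maxiter=500)
print("cg info:", info,
      " residual norm:", np.linalg.norm(WtW @ x_hat - rhs))
```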