Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of such
priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of L2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
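To make the forward-backward proximal splitting mentioned above concrete, here is a minimal Python sketch applying it to the Lasso (L1-regularized least squares); the problem sizes, step size and regularization level are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward_lasso(A, y, lam, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by forward-backward splitting:
    a gradient (forward) step on the smooth fidelity term, followed by the
    proximal (backward) step of the l1 regularizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny compressed-sensing-style instance: sparse x0, Gaussian measurements.
rng = np.random.default_rng(0)
n, p, s = 40, 100, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
x0 = np.zeros(p)
x0[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
y = A @ x0 + 0.01 * rng.standard_normal(n)
x_hat = forward_backward_lasso(A, y, lam=0.05)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))
```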
Model Selection with Low Complexity Priors
Regularization plays a pivotal role when facing the challenge of solving
ill-posed inverse problems, where the number of observations is smaller than
the ambient dimension of the object to be estimated. A line of recent work has
studied regularization models with various types of low-dimensional structures.
In such settings, the general approach is to solve a regularized optimization
problem, which combines a data fidelity term and some regularization penalty
that promotes the assumed low-dimensional/simple structure. This paper provides
a general framework to capture this low-dimensional structure through what we
coin partly smooth functions relative to a linear manifold. These are convex,
non-negative, closed and finite-valued functions that will promote objects
living on low-dimensional subspaces. This class of regularizers encompasses
many popular examples such as the L1 norm, L1-L2 norm (group sparsity), as well
as several others including the Linfty norm. We also show that the set of
partly smooth functions relative to a linear manifold is closed under addition
and pre-composition by a linear operator, which allows us to cover mixed
regularization, and the so-called analysis-type priors (e.g. total variation,
fused Lasso, finite-valued polyhedral gauges). Our main result presents a
unified sharp analysis of exact and robust recovery of the low-dimensional
subspace model associated with the object to be recovered from partial measurements.
This analysis is illustrated on a number of special and previously studied
cases, and on the performance of Linfty regularization in a compressed sensing
scenario.
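Since the Linfty norm is singled out above as a partly smooth regularizer, the following Python sketch (illustrative, with an assumed test vector) computes its proximal operator via the Moreau decomposition: the prox of lam*||.||_inf is the identity minus the Euclidean projection onto the L1 ball of radius lam, the L1 ball being the unit ball of the dual norm of Linfty.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius}, by the standard
    sort-and-threshold method."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                   # magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    k = np.nonzero(u * idx > css - radius)[0][-1]  # largest feasible index
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """prox of lam*||.||_inf via Moreau decomposition:
    v = prox_{lam*||.||_inf}(v) + projection of v onto the l1 ball of radius lam."""
    return v - project_l1_ball(v, lam)

v = np.array([3.0, -2.5, 0.4, 2.9])
print(prox_linf(v, 1.5))  # -> [ 2.3 -2.3  0.4  2.3]
```

Note how the largest entries are clipped to a common magnitude: the output lies on the low-dimensional model set associated with the Linfty norm, mirroring the subspace identification discussed in the abstract.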
Activity Identification and Local Linear Convergence of Forward-Backward-type Methods
In this paper, we consider a class of Forward-Backward (FB) splitting
methods that includes several variants (e.g. inertial schemes, FISTA) for
minimizing the sum of two proper convex and lower semi-continuous functions,
one of which has a Lipschitz continuous gradient and the other of which is
partly smooth relative to a smooth active manifold. We propose a
unified framework, under which we show that this class of FB-type algorithms
(i) correctly identifies the active manifolds in a finite number of iterations
(finite activity identification), and (ii) then enters a local linear
convergence regime, which we characterize precisely in terms of the structure
of the underlying active manifolds. For simpler problems involving polyhedral
functions, we show finite termination. We also establish and explain why FISTA
(with convergent sequences) locally oscillates and can be slower than FB. These
results may have numerous applications in areas such as signal/image processing,
sparse recovery and machine learning. Indeed, the obtained results explain the
typical behaviour that has been observed numerically for many problems in these
fields such as the Lasso, the group Lasso, the fused Lasso and the nuclear norm
regularization, to name only a few.
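As a numerical illustration of finite activity identification (not code from the paper), the sketch below runs FISTA on a Lasso instance and reports the iterations at which the support, i.e. the active manifold of the L1 norm, changes; after finitely many iterations it stabilizes, as the theory predicts. The problem sizes and lam are assumed for the example.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_lasso_with_identification(A, y, lam, n_iter=300):
    """FISTA for 0.5*||Ax - y||^2 + lam*||x||_1, reporting when the support
    (the active manifold of the l1 norm) stops changing."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    prev_support = None
    for k in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # inertial extrapolation
        x, t = x_new, t_new
        support = tuple(np.flatnonzero(np.abs(x) > 0))
        if support != prev_support:
            print(f"iter {k:4d}: support changed -> {len(support)} active entries")
            prev_support = support
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 120)) / np.sqrt(50)
x0 = np.zeros(120); x0[:4] = [1.0, -1.5, 2.0, 0.8]
y = A @ x0 + 0.01 * rng.standard_normal(50)
x_hat = fista_lasso_with_identification(A, y, lam=0.08)
```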
Statistical Properties of Convex Clustering
In this manuscript, we study the statistical properties of convex clustering.
We establish that convex clustering is closely related to single-linkage
hierarchical clustering and k-means clustering. In addition, we derive the
range of the tuning parameter for convex clustering that yields a non-trivial
solution. We also provide an unbiased estimate of the degrees of freedom, and a
finite-sample bound on the prediction error of convex clustering.
We compare convex clustering to some traditional clustering methods in
simulation studies.
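To fix ideas, convex clustering assigns each data point x_i its own centroid u_i and minimizes 0.5*sum_i ||x_i - u_i||^2 + lam * sum_{i<j} w_ij ||u_i - u_j||; as lam grows, centroids fuse, tracing a path from n singleton clusters down to a single cluster. Below is a minimal sketch of this problem with unit weights, assuming the cvxpy package is available; the toy data and lam are illustrative.

```python
import numpy as np
import cvxpy as cp

# Two well-separated point clouds in the plane.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.2, (5, 2)), rng.normal(3.0, 0.2, (5, 2))])
n = X.shape[0]

U = cp.Variable(X.shape)              # one centroid u_i per data point x_i
fidelity = 0.5 * cp.sum_squares(U - X)
fusion = sum(cp.norm(U[i] - U[j], 2)  # sum-of-norms penalty over all pairs
             for i in range(n) for j in range(i + 1, n))
lam = 0.5
cp.Problem(cp.Minimize(fidelity + lam * fusion)).solve()

# Centroids that (nearly) coincide indicate points assigned to the same
# cluster; the rounding tolerance here is purely illustrative.
print(np.unique(np.round(U.value, 2), axis=0))
```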