Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of
such priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell^2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
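
As a concrete illustration of these priors, the following minimal NumPy sketch
(an illustration, not code from the chapter) implements the proximal operators
of the $\ell^1$ norm, the $\ell^1/\ell^2$ group norm and the nuclear norm;
these closed-form maps are the elementary bricks of the forward-backward
scheme discussed in point (iii).

    import numpy as np

    def prox_l1(x, t):
        # Soft-thresholding: prox of t * ||.||_1 (sparsity prior).
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_group_l1(x, t, groups):
        # Block soft-thresholding: prox of t * sum_g ||x_g||_2 (group sparsity).
        out = np.zeros_like(x)
        for g in groups:                      # g is an index array for one block
            norm = np.linalg.norm(x[g])
            if norm > t:
                out[g] = (1.0 - t / norm) * x[g]
        return out

    def prox_nuclear(X, t):
        # Singular-value soft-thresholding: prox of t * ||.||_* (low-rank prior).
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - t, 0.0)) @ Vt

All three operators shrink their input toward the corresponding low-complexity
model set (a sparse support, a union of active groups, a low-rank matrix
manifold), which is exactly the partial-smoothness structure the chapter exploits.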
Barycentric Subspace Analysis on Manifolds
This paper investigates the generalization of Principal Component Analysis
(PCA) to Riemannian manifolds. We first propose a new and general family of
subspaces in manifolds that we call barycentric subspaces. They are
implicitly defined as the locus of points which are weighted means of
reference points. As this definition relies on points and not on tangent
vectors, it can also be extended to geodesic spaces which are not Riemannian.
For instance, in stratified spaces, it naturally allows principal subspaces
that span several strata, which is impossible in previous generalizations of
PCA. We show that barycentric subspaces locally define a submanifold of
dimension k which generalizes geodesic subspaces. Second, we rephrase PCA in
Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy
of properly embedded linear subspaces of increasing dimension). We show that
the Euclidean PCA minimizes the Accumulated Unexplained Variance (AUV) by all
the subspaces of the flag. Barycentric subspaces are naturally nested,
allowing the construction of hierarchically nested subspaces. Optimizing the
AUV criterion to optimally approximate data points with flags of affine spans
in Riemannian manifolds leads to a particularly appealing generalization of PCA
on manifolds called Barycentric Subspace Analysis (BSA).
Comment: Annals of Statistics, Institute of Mathematical Statistics, to appear.
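
To make the AUV criterion concrete in the Euclidean case, here is a short NumPy
sketch (my illustration, not the paper's code): the variance left unexplained by
the subspace spanned by the top-k principal components is the sum of the
remaining eigenvalues of the covariance matrix, and AUV accumulates this
quantity over the whole flag.

    import numpy as np

    def auv_of_pca_flag(X):
        # Accumulated Unexplained Variance of the PCA flag: for each k,
        # the top-k principal subspace leaves sum_{j>k} lambda_j of the
        # variance unexplained; AUV sums this over k = 1, ..., d-1.
        Xc = X - X.mean(axis=0)                         # center the data
        evals = np.linalg.eigvalsh(np.cov(Xc.T))[::-1]  # descending eigenvalues
        tails = np.cumsum(evals[::-1])[::-1]            # tails[k] = sum_{j>=k} lambda_j
        return tails[1:].sum()

    X = np.random.default_rng(0).standard_normal((200, 5))
    print(auv_of_pca_flag(X))

Minimizing this criterion over flags of affine spans, rather than over a single
subspace, is what singles out the BSA generalization on manifolds.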
Activity Identification and Local Linear Convergence of Forward--Backward-type methods
In this paper, we consider a class of Forward--Backward (FB) splitting
methods that includes several variants (e.g. inertial schemes, FISTA) for
minimizing the sum of two proper convex and lower semi-continuous functions,
one of which has a Lipschitz continuous gradient, and the other is partly
smooth relative to a smooth active manifold. We propose a
unified framework under which we show that this class of FB-type algorithms
(i) correctly identifies the active manifolds in a finite number of iterations
(finite activity identification), and (ii) then enters a local linear
convergence regime, which we characterize precisely in terms of the structure
of the underlying active manifolds. For simpler problems involving polyhedral
functions, we show finite termination. We also establish and explain why FISTA
(with convergent sequences) locally oscillates and can be slower than FB. These
results may have numerous applications including in signal/image processing,
sparse recovery and machine learning. Indeed, the obtained results explain the
typical behaviour that has been observed numerically for many problems in these
fields such as the Lasso, the group Lasso, the fused Lasso and the nuclear norm
regularization, to name only a few.
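
The finite identification claim (i) is easy to observe numerically on the
Lasso. The sketch below (synthetic data, my own illustration) runs plain
forward-backward (ISTA) and records the last iteration at which the support of
the iterate, i.e. the active manifold of the $\ell^1$ norm, changes; after that
the iterates stay on the identified manifold and converge linearly.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, lam = 50, 100, 0.5
    A = rng.standard_normal((n, p))
    x_true = np.zeros(p); x_true[:5] = rng.standard_normal(5)
    y = A @ x_true + 0.01 * rng.standard_normal(n)

    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x, support, last_change = np.zeros(p), None, 0
    for k in range(2000):
        x = soft(x - step * (A.T @ (A @ x - y)), step * lam)  # one FB step
        s = frozenset(np.flatnonzero(x))
        if s != support:
            support, last_change = s, k
    print("support stabilized at iteration", last_change, "->", sorted(support))

Replacing the plain step with an inertial (FISTA-type) update identifies the
same manifold; the local oscillation of the inertial sequence around it is the
phenomenon the paper analyzes.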
Gradient Young measures generated by quasiconformal maps in the plane
In this contribution, we completely and explicitly characterize Young
measures generated by gradients of quasiconformal maps in the plane. By doing
so, we generalize the results of Astala and Faraco \cite{AstalaFaraco} who
provided a similar result for quasiregular maps, and of Benešová and
Kružík \cite{bbmk2013} who characterized Young measures generated by
gradients of bi-Lipschitz maps. Our results are motivated by non-linear
elasticity where injectivity of the functions in the generating sequence is
essential in order to ensure non-interpenetration of matter.
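
For orientation, recall the standard planar definition (background, not this
paper's contribution): a homeomorphism $f \in W^{1,2}_{\mathrm{loc}}(\Omega;\mathbb{R}^2)$
is $K$-quasiconformal if
$$ |Df(x)|^2 \le K \, \det Df(x) \quad \text{for a.e. } x \in \Omega, $$
where $|Df(x)|$ denotes the operator norm; quasiregular maps satisfy the same
distortion bound without the injectivity requirement, which is why injectivity
is the delicate extra constraint in the elasticity setting.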
A simple proof of the invariant torus theorem
We give a simple proof of Kolmogorov's theorem on the persistence of a
quasiperiodic invariant torus in Hamiltonian systems. The theorem is first
reduced to a well-posed inversion problem (Herman's normal form) by switching
the frequency obstruction from one side of the conjugacy to the other. Then the
proof consists in applying a simple, well-suited inverse function theorem in
the analytic category, which itself relies on the Newton algorithm and on
interpolation inequalities. A comparison with other proofs is included in the
appendix.
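
For reference, the quantitative hypothesis behind such persistence results
(standard KAM background, not the paper's specific notation) is that the
frequency vector $\omega \in \mathbb{R}^n$ of the unperturbed torus is
Diophantine:
$$ |k \cdot \omega| \ge \frac{\gamma}{|k|^{\tau}} \quad \text{for all } k \in \mathbb{Z}^n \setminus \{0\}, $$
for some $\gamma > 0$ and $\tau \ge n - 1$; these small-divisor bounds are what
allow the Newton iteration in the proof to converge despite the loss of
derivatives at each step.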