Online Automated Synthesis of Compact Normative Systems
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
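The forward-backward proximal splitting scheme discussed in this abstract can be illustrated on the classic sparsity prior (the lasso problem). The sketch below is a minimal, assumed implementation for that special case (ISTA), not code from the chapter; the step size rule and problem dimensions are illustrative.

```python
import numpy as np

def ista(A, y, lam, step, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Forward step: explicit gradient descent on the smooth fidelity term.
        g = x - step * A.T @ (A @ x - y)
        # Backward step: proximal map of the l1 norm, i.e. soft-thresholding.
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x
```

For the l1 prior the backward (proximal) step is available in closed form, which is what makes this scheme practical at large scale; for other low-complexity priors (group sparsity, total variation, nuclear norm) only the proximal map changes.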
A quasi-Newton proximal splitting method
A new result in convex analysis on the calculation of proximity operators in
certain scaled norms is derived. We describe efficient implementations of the
proximity calculation for a useful class of functions; the implementations
exploit the piece-wise linear nature of the dual problem. The second part of
the paper applies the previous result to acceleration of convex minimization
problems, and leads to an elegant quasi-Newton method. The optimization method
compares favorably against state-of-the-art alternatives. The algorithm has
extensive applications including signal processing, sparse recovery, machine
learning, and classification.
On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence
We introduce a framework for quasi-Newton forward--backward splitting
algorithms (proximal quasi-Newton methods) with a metric induced by diagonal
± rank-r symmetric positive definite matrices. This special type of
metric allows for a highly efficient evaluation of the proximal mapping. The
key to this efficiency is a general proximal calculus in the new metric. By
using duality, formulas are derived that relate the proximal mapping in a
rank-r modified metric to the original metric. We also describe efficient
implementations of the proximity calculation for a large class of functions;
the implementations exploit the piece-wise linear nature of the dual problem.
Then, we apply these results to acceleration of composite convex minimization
problems, which leads to elegant quasi-Newton methods for which we prove
convergence. The algorithm is tested on several numerical examples and compared
to a comprehensive list of alternatives in the literature. Our quasi-Newton
splitting algorithm with the prescribed metric compares favorably against the
state of the art. The algorithm has extensive applications including signal
processing, sparse recovery, machine learning, and classification, to name a few.
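The abstract's proximal calculus handles metrics that are diagonal plus a low-rank modification via duality; that general case is beyond a short sketch, but the purely diagonal special case is already separable and reduces to coordinate-wise soft-thresholding. The function below is an illustrative construction under that assumption, not code from the paper.

```python
import numpy as np

def prox_l1_diag(g, lam, d):
    """Proximal map of lam*||x||_1 in the metric induced by diag(d):
        argmin_x  lam*||x||_1 + 0.5*(x - g)^T diag(d) (x - g).
    The problem is separable across coordinates, so it reduces to
    soft-thresholding with coordinate-dependent thresholds lam/d_i."""
    return np.sign(g) * np.maximum(np.abs(g) - lam / d, 0.0)
```

Note how a large metric weight d_i shrinks the effective threshold lam/d_i, so coordinates the metric deems well-conditioned are thresholded less aggressively; the rank-r correction in the paper perturbs exactly this picture, which is why a dedicated dual calculus is needed there.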
Improving compressed sensing with the diamond norm
In low-rank matrix recovery, one aims to reconstruct a low-rank matrix from a
minimal number of linear measurements. Within the paradigm of compressed
sensing, this is made computationally efficient by minimizing the nuclear norm
as a convex surrogate for rank.
In this work, we identify an improved regularizer based on the so-called
diamond norm, a concept imported from quantum information theory. We show that,
for a class of matrices saturating a certain norm inequality, the descent cone
of the diamond norm is contained in that of the nuclear norm. This suggests
superior reconstruction properties for these matrices. We explicitly
characterize this set of matrices. Moreover, we demonstrate numerically that
the diamond norm indeed outperforms the nuclear norm in a number of relevant
applications: These include signal analysis tasks such as blind matrix
deconvolution or the retrieval of certain unitary basis changes, as well as the
quantum information problem of process tomography with random measurements.
The diamond norm is defined for matrices that can be interpreted as order-4
tensors and it turns out that the above condition depends crucially on that
tensorial structure. In this sense, this work touches on an aspect of the
notoriously difficult tensor completion problem.
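For the baseline nuclear-norm minimization against which the diamond norm is compared, the key computational primitive is the proximal map of the nuclear norm, namely singular value soft-thresholding. The sketch below is the standard construction, included for context and not taken from the paper.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the proximal map of tau*||.||_*,
    the nuclear norm used as the convex surrogate for rank."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    # Soft-threshold the singular values; small ones are set exactly to
    # zero, which is what promotes low-rank solutions.
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

An analogous proximal or projection step for the diamond norm must respect the order-4 tensorial structure the abstract highlights, and is correspondingly more involved (it is typically handled via semidefinite programming).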