A totally unimodular view of structured sparsity
This paper describes a simple framework for structured sparse recovery based
on convex optimization. We show that many structured sparsity models can be
naturally represented by linear matrix inequalities on the support of the
unknown parameters, where the constraint matrix has a totally unimodular (TU)
structure. For such structured models, tight convex relaxations can be obtained
in polynomial time via linear programming. Our modeling framework unifies the
prevalent structured sparsity norms in the literature, introduces interesting
new ones, and renders their tightness and tractability arguments transparent.
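The LP relaxation the abstract refers to is easy to sketch. Below is a minimal, hypothetical Python illustration (the "at most b active coordinates per length-w window" dispersive model and all names are my own choices, not the paper's notation): the sliding-window constraint matrix has the consecutive-ones property, hence is TU, so the LP relaxation of the 0/1 support-selection problem has an integral optimal vertex.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical structured support selection: score each coordinate,
# then pick the best support subject to "at most b active coordinates
# in every length-w window". The interval constraint matrix M has the
# consecutive-ones property, hence is totally unimodular, so the LP
# relaxation over [0, 1]^n has an integral optimal vertex.
n, w, b = 12, 4, 2
rng = np.random.default_rng(0)
scores = np.abs(rng.standard_normal(n))  # stand-in coordinate scores

# One row per sliding window of length w.
M = np.array([[1.0 if j0 <= j < j0 + w else 0.0 for j in range(n)]
              for j0 in range(n - w + 1)])

# Maximize scores @ s  <=>  minimize -scores @ s, with M s <= b, 0 <= s <= 1.
res = linprog(c=-scores, A_ub=M, b_ub=np.full(M.shape[0], float(b)),
              bounds=[(0.0, 1.0)] * n, method="highs-ds")
support = np.rint(res.x).astype(int)  # integral up to solver tolerance
print(support)
```

A simplex-type method (here HiGHS dual simplex) returns a vertex solution, which TU-ness guarantees is integral, so the rounding step only removes floating-point noise.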
Structured Sparsity: Discrete and Convex Approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: while the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many existing methods do not leverage the true
underlying structure. Recent results in CS extend the simple sparsity idea to
more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, thereby increasing
the interpretability of the results and leading to better recovery
performance. To better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.

Comment: 30 pages, 18 figures
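As a concrete illustration of the convex side of the group sparse model, here is a minimal sketch (a toy example of mine, not code from the chapter) of the proximal operator of the non-overlapping group-lasso norm, the basic building block of the proximal solvers such chapters survey:

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Prox of lam * sum_g ||x_g||_2 for non-overlapping groups:
    each block is shrunk toward zero and is zeroed out entirely
    once its Euclidean norm drops below lam."""
    out = np.zeros_like(x)
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * x[g]
    return out

# Toy usage: the strong group survives (shrunk), the weak one is removed.
x = np.array([3.0, 4.0, 0.1, -0.2])
print(group_soft_threshold(x, [[0, 1], [2, 3]], lam=1.0))
```

Zeroing whole blocks at once is exactly how the group norm encodes the interdependency between nonzero components: coordinates enter or leave the support group by group rather than one at a time.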
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
The topic of recovery of a structured model given a small number of linear
observations has been well-studied in recent years. Examples include recovering
sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and
low-rank matrices, among others. In various applications in signal processing
and machine learning, the model of interest is known to be structured in
several ways at the same time, for example, a matrix that is simultaneously
sparse and low-rank.
Often norms that promote each individual structure are known, and allow for
recovery using an order-wise optimal number of measurements (e.g., the $\ell_1$
norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to
minimize a combination of such norms. We show that, surprisingly, if we use
multi-objective optimization with these norms, then we can do no better,
order-wise, than an algorithm that exploits only one of the present structures.
This result suggests that to fully exploit the multiple structures, we need an
entirely new convex relaxation, i.e., one that is not a function of the convex
relaxations used for each structure. We then specialize our results to the case
of sparse and low-rank matrices. We show that a nonconvex formulation of the
problem can recover the model from very few measurements, which is on the order
of the degrees of freedom of the matrix, whereas the convex problem obtained
from a combination of the $\ell_1$ and nuclear norms requires many more
measurements. This proves an order-wise gap between the performance of the
convex and nonconvex recovery problems in this case. Our framework applies to
arbitrary structure-inducing norms as well as to a wide range of measurement
ensembles. This allows us to give performance bounds for problems such as
sparse phase retrieval and low-rank tensor completion.

Comment: 38 pages, 9 figures
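To make the combined-norm relaxation concrete, here is a minimal sketch (hypothetical sizes and data; cvxpy is my choice of tool, not the paper's) of minimizing an $\ell_1$ plus nuclear norm objective subject to random linear measurements of a matrix that is simultaneously sparse and low-rank:

```python
import numpy as np
import cvxpy as cp

# Hypothetical instance: X0 is rank-1 with nonzeros confined to a
# 3x3 block, observed through m random linear measurements.
rng = np.random.default_rng(0)
n, m = 8, 40
u = np.zeros(n); u[:3] = rng.standard_normal(3)
X0 = np.outer(u, u)                              # sparse and low-rank
A = rng.standard_normal((m, n, n))               # measurement matrices A_i
y = np.tensordot(A, X0, axes=([1, 2], [0, 1]))   # y_i = <A_i, X0>

# Combined relaxation: entrywise l1 plus nuclear norm, subject to
# consistency with the measurements.
X = cp.Variable((n, n))
lam = 1.0
objective = cp.Minimize(cp.sum(cp.abs(X)) + lam * cp.norm(X, "nuc"))
constraints = [cp.sum(cp.multiply(A[i], X)) == y[i] for i in range(m)]
cp.Problem(objective, constraints).solve()
print("recovery error:", np.linalg.norm(X.value - X0))
```

Per the abstract's main result, no weighting lam can make this combination succeed, order-wise, with fewer measurements than the better of the two individual norms alone.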
Measure What Should be Measured: Progress and Challenges in Compressive Sensing
Is compressive sensing overrated? Or can it live up to our expectations? What
will come after compressive sensing and sparsity? And what has Galileo Galilei
got to do with it? Compressive sensing has taken the signal processing
community by storm. A large corpus of research devoted to the theory and
numerics of compressive sensing has been published in the last few years.
Moreover, compressive sensing has inspired and initiated intriguing new
research directions, such as matrix completion. Potential new applications
emerge at a dazzling rate. Yet some important theoretical questions remain
open, and seemingly obvious applications keep escaping the grip of compressive
sensing. In this paper I discuss some of the recent progress in compressive
sensing and point out key challenges and opportunities as the area of
compressive sensing and sparse representations keeps evolving. I also attempt
to assess the long-term impact of compressive sensing.