Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of such
priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum, including: (i) recovery
guarantees and stability to noise, both in terms of $\ell^2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
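To make the forward-backward proximal splitting scheme concrete, here is a minimal sketch for the simplest low-complexity prior, the $\ell^1$ (sparsity) regularizer, where the backward step reduces to soft-thresholding. The problem sizes, noise level, and regularization parameter below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1: component-wise shrinkage toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    # Minimize 0.5*||A x - y||^2 + lam*||x||_1.
    # Forward step: explicit gradient descent on the smooth data-fidelity term.
    # Backward step: proximal map of the nonsmooth regularizer.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative use: recover an 8-sparse vector from 64 noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, size=8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = forward_backward(A, y, lam=0.05)
```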
Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization
In this work, we consider multitask learning problems where clusters of nodes
are interested in estimating their own parameter vector. Cooperation among
clusters is beneficial when the optimal models of adjacent clusters share a
large number of similar entries. We propose a fully distributed algorithm for solving
this problem. The approach relies on minimizing a global mean-square error
criterion regularized by non-differentiable terms to promote cooperation among
neighboring clusters. A general diffusion forward-backward splitting strategy
is introduced. Then, it is specialized to the case of sparsity-promoting
regularizers. A closed-form expression for the proximal operator of a weighted
sum of $\ell_1$-norms is derived to achieve higher efficiency. We also provide
conditions on the step-sizes that ensure convergence of the algorithm in the
mean and mean-square error sense. Simulations are conducted to illustrate the
effectiveness of the strategy.
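The closed-form prox derived in the paper concerns a coregularizer coupling neighboring clusters; as a simpler illustration of the separable building block, the sketch below implements the prox of a weighted $\ell_1$-norm (entry-wise soft-thresholding with per-entry weights) inside a single-agent forward-backward update. All matrices, step sizes, and weights are hypothetical.

```python
import numpy as np

def prox_weighted_l1(v, weights, step):
    # Prox of f(x) = sum_i weights[i] * |x_i|: the penalty is separable, so the
    # prox decouples into soft-thresholding with an entry-dependent threshold.
    tau = step * np.asarray(weights)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Hypothetical single-agent update: gradient step on a local mean-square-error
# cost, followed by the proximal (thresholding) step.
rng = np.random.default_rng(1)
H = rng.standard_normal((50, 10))       # local regression data (illustrative)
d = H @ np.ones(10) + 0.1 * rng.standard_normal(50)
w = np.zeros(10)
mu = 0.5 / np.linalg.norm(H, 2) ** 2    # small step size for stability
weights = np.full(10, 0.1)              # per-entry regularization weights
for _ in range(200):
    w = prox_weighted_l1(w - mu * H.T @ (H @ w - d), weights, mu)
```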
Structured Sparse Approximation via Generalized Regularizers: With Application to V2V Channel Estimation
In this paper, we consider the estimation of a signal that has both group- and element-wise sparsity (joint sparsity), motivated by channel estimation in vehicle-to-vehicle (V2V) channels. A general approach for the design of separable regularizing functions is proposed to adaptively induce sparsity in the estimation. A joint sparse signal estimation problem is formulated via these regularizers, and its optimal solution is computed based on proximity operations. Our optimization results are quite general and can be applied in the context of hierarchical sparsity models as well. The proposed recovery algorithm is a nested iterative method based on the alternating direction method of multipliers (ADMM). Due to regularizer separability, key operations can be performed in parallel. V2V channels are estimated by exploiting the joint (group/element-wise) sparsity exhibited in the delay-Doppler domain. Simulation results reveal that the proposed method can achieve as much as a 10 dB gain over previously examined methods.
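One well-known separable regularizer that induces exactly this joint group- and element-wise sparsity is the sparse-group penalty $\lambda_1\|x\|_1 + \lambda_2\|x\|_2$ per group, whose proximity operator has a closed form: element-wise soft-thresholding followed by group shrinkage. The sketch below shows this building block as an illustrative instance, not necessarily the paper's exact family of generalized regularizers.

```python
import numpy as np

def prox_sparse_group(v, lam1, lam2):
    # Closed-form prox of lam1*||x||_1 + lam2*||x||_2 for a single group:
    # element-wise soft-thresholding followed by group-level shrinkage.
    u = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)
    norm = np.linalg.norm(u)
    if norm <= lam2:
        return np.zeros_like(v)          # the whole group is switched off
    return (1.0 - lam2 / norm) * u       # otherwise shrink the group as a unit

# Separability across groups means the full prox is this map applied to each
# group independently, which is what allows parallel ADMM sub-steps.
v = np.array([0.05, -2.0, 1.5, -0.02])
print(prox_sparse_group(v, lam1=0.1, lam2=0.5))
```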
Structured Sparsity: Discrete and Convex Approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: while the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
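As a concrete instance of a convex relaxation in the non-overlapping group-sparse model, the sketch below implements block soft-thresholding, the proximity operator of the group $\ell_{1,2}$-norm; the dispersive and hierarchical models discussed in the chapter require more elaborate machinery. The vector and group partition are illustrative.

```python
import numpy as np

def prox_group_l2(v, groups, tau):
    # Block soft-thresholding: prox of tau * sum_g ||x_g||_2 over disjoint
    # groups, the standard convex proxy for the discrete group-sparse model.
    x = np.array(v, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        x[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * x[g]
    return x

# Illustrative partition of a length-6 vector into three groups; the weak
# middle group is zeroed as a block rather than entry by entry.
v = np.array([3.0, -2.0, 0.1, 0.05, 1.5, -1.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(prox_group_l2(v, groups, tau=0.5))
```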