Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday, August 27th, to Friday, August 29th,
2014. The workshop was conveniently located in "The Arsenal" building, within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
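As a concrete illustration of point (iii), the following minimal NumPy sketch
applies forward-backward splitting to the prototypical sparsity-regularized
problem min_x 0.5*||Ax - y||^2 + lam*||x||_1 (the Lasso); the names A, y, lam
and the iteration budget are illustrative, not taken from the chapter.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=500):
    # Forward-backward (proximal gradient) iterations for
    #   min_x 0.5*||A x - y||^2 + lam*||x||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                         # forward: gradient step on the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # backward: proximal step on the regularizer
    return x

The same template covers the other priors discussed in the chapter by swapping
the proximal operator, e.g. blockwise shrinkage for group sparsity or
singular-value thresholding for the low-rank prior.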
A Riemannian ADMM
We consider a class of Riemannian optimization problems where the objective
is the sum of a smooth function and a nonsmooth function, considered in the
ambient space. This class of problems finds important applications in machine
learning and statistics, such as sparse principal component analysis, sparse
spectral clustering, and orthogonal dictionary learning. We propose a
Riemannian alternating direction method of multipliers (ADMM) to solve this
class of problems. Our algorithm adopts easily computable steps in each
iteration. The iteration complexity of the proposed algorithm for obtaining an
$\epsilon$-stationary point is analyzed under mild assumptions. To the best of
our knowledge, this is the first Riemannian ADMM with a provable convergence
guarantee for solving Riemannian optimization problems with a nonsmooth
objective.
Numerical experiments are conducted to demonstrate the advantage of the
proposed method.
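To make the splitting concrete, the sketch below applies an ADMM of this
flavor to sparse PCA, min_{X in St(n,p)} -trace(X^T S X) + lam*||X||_1,
keeping X on the Stiefel manifold via a polar retraction while the nonsmooth
l1 term is handled by a Euclidean proximal step. The step sizes and the names
S, lam, rho are illustrative simplifications, not the paper's exact updates.

import numpy as np

def polar_retract(M):
    # Map a matrix to the Stiefel manifold via its polar factor.
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def riemannian_admm_sparse_pca(S, p, lam, rho=1.0, step=0.01, n_iter=300):
    # Split min f(X) + g(Y) s.t. X = Y, with f(X) = -trace(X'SX) smooth
    # on the manifold and g = lam*||.||_1 nonsmooth in the ambient space.
    n = S.shape[0]
    X = polar_retract(np.random.randn(n, p))
    Y, U = X.copy(), np.zeros((n, p))
    for _ in range(n_iter):
        # X-update: gradient step on the augmented Lagrangian, then retract.
        G = -2.0 * S @ X + rho * (X - Y + U)
        X = polar_retract(X - step * G)
        # Y-update: proximal (soft-thresholding) step on lam*||.||_1.
        V = X + U
        Y = np.sign(V) * np.maximum(np.abs(V) - lam / rho, 0.0)
        # Dual ascent on the consensus constraint X = Y.
        U = U + X - Y
    return X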
Decentralized Weakly Convex Optimization Over the Stiefel Manifold
We focus on a class of non-smooth optimization problems over the Stiefel
manifold in the decentralized setting, where a connected network of agents
cooperatively minimize a finite-sum objective function with each component
being weakly convex in the ambient Euclidean space. Such optimization problems,
albeit frequently encountered in applications, are quite challenging due to
their non-smoothness and non-convexity. To tackle them, we propose an iterative
method called the decentralized Riemannian subgradient method (DRSM). The
global convergence and an iteration complexity of $\mathcal{O}(\varepsilon^{-4})$
for forcing a natural stationarity measure below $\varepsilon$ are established
via the powerful tool of proximal smoothness from variational analysis, which
could be of independent interest. Besides, we show
the local linear convergence of the DRSM using geometrically diminishing
stepsizes when the problem at hand further possesses a sharpness property.
Numerical experiments are conducted to corroborate our theoretical findings.
Comment: 27 pages, 6 figures, 1 table
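For intuition, one round of such a decentralized Riemannian subgradient scheme
can be sketched as follows: each agent mixes its neighbors' iterates through a
doubly stochastic matrix W, takes a subgradient step on its local weakly
convex cost, and retracts back to the Stiefel manifold (here via the polar
retraction). The names and the exact update order are an illustrative
simplification of the DRSM, not a verbatim transcription.

import numpy as np

def polar_retract(M):
    # Retraction onto the Stiefel manifold via the polar decomposition.
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def drsm_round(X_list, subgrad_list, W, alpha):
    # One synchronous round over all agents: consensus mixing,
    # a local subgradient step with stepsize alpha, then retraction.
    new_iterates = []
    for i, g_i in enumerate(subgrad_list):
        mixed = sum(W[i, j] * X_j for j, X_j in enumerate(X_list))
        new_iterates.append(polar_retract(mixed - alpha * g_i))
    return new_iterates

Geometrically diminishing stepsizes, as in the local linear convergence
result, correspond to shrinking alpha by a fixed factor between rounds.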
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection, that is,
automatically selecting a simple model from a large collection of candidates. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, and computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision
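As a concrete instance of sparse coding with a dictionary D (learned or
fixed), the sketch below uses Orthogonal Matching Pursuit, one standard solver
among several, to represent a signal y with at most k atoms; the names D, y, k
are illustrative, and nothing here is specific to the monograph's presentation.

import numpy as np

def omp(D, y, k):
    # Orthogonal Matching Pursuit: greedily pick up to k atoms
    # (columns of D, assumed unit-norm), refitting on the support.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef         # residual after orthogonal refit
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code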