Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of
such priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all the
low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum, including: (i) recovery
guarantees and stability to noise, both in terms of L2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
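To make the prototype concrete, here is a minimal sketch of the forward-backward scheme on the L1-regularized least-squares problem min_x 0.5*||y - Phi x||^2 + lam*||x||_1; the names (Phi, y, lam, n_iter) and the fixed step size are illustrative assumptions, not the chapter's notation.
```python
# Minimal, illustrative forward-backward (proximal gradient) splitting for
# min_x 0.5*||y - Phi x||^2 + lam*||x||_1; names are ours, not the chapter's.
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1; shrinks entries toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(Phi, y, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2             # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                     # forward (explicit gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x
```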
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low-dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Stochastic forward-backward and primal-dual approximation algorithms with application to online image restoration
Stochastic approximation techniques have been used in various contexts in
data science. We propose a stochastic version of the forward-backward algorithm
for minimizing the sum of two convex functions, one of which is not necessarily
smooth. Our framework can handle stochastic approximations of the gradient of
the smooth function and allows for stochastic errors in the evaluation of the
proximity operator of the nonsmooth function. The almost sure convergence of
the iterates generated by the algorithm to a minimizer is established under
relatively mild assumptions. We also propose a stochastic version of a popular
primal-dual proximal splitting algorithm, establish its convergence, and apply
it to an online image restoration problem.
Comment: 5 figures
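A hedged sketch of such a stochastic forward-backward iteration, again for the L1-regularized least-squares case: an unbiased mini-batch gradient replaces the exact one, while the proximity operator of the nonsmooth term is evaluated exactly. The decaying step-size rule and all names are illustrative choices, not the paper's conditions.
```python
# Sketch of stochastic forward-backward (proximal SGD) for
# min_x 0.5/m*||A x - b||^2 + lam*||x||_1; batching and step decay are assumptions.
import numpy as np

def stochastic_forward_backward(A, b, lam, n_epochs=50, batch=8, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    k = 0
    for _ in range(n_epochs):
        for idx in np.array_split(rng.permutation(m), max(m // batch, 1)):
            k += 1
            gamma = 1.0 / (1.0 + 0.01 * k)                      # slowly decaying step size
            grad = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)  # unbiased gradient estimate
            z = x - gamma * grad                                # forward step (noisy gradient)
            x = np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)  # exact proximal step
    return x
```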
Fast and easy blind deblurring using an inverse filter and PROBE
PROBE (Progressive Removal of Blur Residual) is a recursive framework for
blind deblurring. Using the elementary modified inverse filter at its core,
PROBE's experimental performance meets or exceeds the state of the art, both
visually and quantitatively. Remarkably, PROBE lends itself to analysis that
reveals its convergence properties. PROBE is motivated by recent ideas on
progressive blind deblurring, but breaks away from previous research by its
simplicity, speed, performance and potential for analysis. PROBE is neither a
functional minimization approach, nor an open-loop sequential method (blur
kernel estimation followed by non-blind deblurring). PROBE is a feedback
scheme, deriving its unique strength from the closed-loop architecture rather
than from the accuracy of its algorithmic components.
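The abstract does not spell out the elementary modified inverse filter; a regularized Fourier-domain inverse of the kind such filters are typically built on might look as follows. The damping constant eps and the recursion around this step are our assumptions, not the authors' algorithm.
```python
# Speculative sketch of a regularized (modified) inverse filter that a recursive
# deblurring scheme could build on; eps and the surrounding recursion are assumptions.
import numpy as np

def modified_inverse_filter(blurred, kernel, eps=1e-2):
    """Deconvolve `blurred` by `kernel`, damping frequencies with weak response."""
    H = np.fft.fft2(kernel, s=blurred.shape)       # blur frequency response
    G = np.fft.fft2(blurred)
    X = np.conj(H) * G / (np.abs(H) ** 2 + eps)    # Tikhonov-like regularized inverse
    return np.real(np.fft.ifft2(X))
```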
A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
Stochastic approximation techniques play an important role in solving many
problems encountered in machine learning or adaptive signal processing. In
these contexts, the statistics of the data are often unknown a priori, or their
direct computation is too intensive, so they have to be estimated online
from the observed signals. For batch optimization of an objective function
being the sum of a data fidelity term and a penalization (e.g. a sparsity
promoting function), Majorize-Minimize (MM) methods have recently attracted
much interest since they are fast, highly flexible, and effective in ensuring
convergence. The goal of this paper is to show how these methods can be
successfully extended to the case when the data fidelity term corresponds to a
least squares criterion and the cost function is replaced by a sequence of
stochastic approximations of it. In this context, we propose an online version
of an MM subspace algorithm and we study its convergence by using suitable
probabilistic tools. Simulation results illustrate the good practical
performance of the proposed algorithm associated with a memory gradient
subspace, when applied to both non-adaptive and adaptive filter identification
problems.
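A stripped-down sketch of the idea, without the subspace (memory gradient) acceleration the paper develops: running second-order statistics stand in for the unknown ones, and a smooth sparsity-promoting penalty sqrt(u^2 + delta) is majorized by a quadratic at the current iterate. The per-sample full linear solve and all names are our simplifications.
```python
# Stripped-down stochastic MM for online penalized least squares: online
# second-order statistics plus a half-quadratic majorization of the smooth
# penalty phi(u) = sqrt(u^2 + delta). The full solve below is a simplification;
# the paper's subspace steps avoid it.
import numpy as np

def online_mm_penalized_ls(stream, n, lam=0.1, delta=1e-3):
    R = np.zeros((n, n))     # running estimate of E[a a^T]
    r = np.zeros(n)          # running estimate of E[b a]
    x = np.zeros(n)
    for k, (a, b) in enumerate(stream, start=1):
        R += (np.outer(a, a) - R) / k                  # online averaging of the statistics
        r += (b * a - r) / k
        w = lam / np.sqrt(x ** 2 + delta)              # quadratic majorant weights at x
        x = np.linalg.solve(R + np.diag(w) + 1e-8 * np.eye(n), r)  # minimize the surrogate
    return x
```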
Undersampled Phase Retrieval with Outliers
We propose a general framework for reconstructing transform-sparse images
from undersampled (squared)-magnitude data corrupted with outliers. This
framework is implemented using a multi-layered approach, combining multiple
initializations (to address the nonconvexity of the phase retrieval problem),
repeated minimization of a convex majorizer (surrogate for a nonconvex
objective function), and iterative optimization using the alternating
directions method of multipliers. Exploiting the generality of this framework,
we investigate using a Laplace measurement noise model better adapted to
outliers present in the data than the conventional Gaussian noise model. Using
simulations, we explore the sensitivity of the method to both the
regularization and penalty parameters. We include 1D Monte Carlo and 2D image
reconstruction comparisons with alternative phase retrieval algorithms. The
results suggest the proposed method, with the Laplace noise model, both
increases the likelihood of correct support recovery and reduces the mean
squared error from measurements containing outliers. We also describe exciting
extensions made possible by the generality of the proposed framework, including
regularization using analysis-form sparsity priors that are incompatible with
many existing approaches.
Comment: 11 pages, 9 figures
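The modeling choice at the heart of the paper is easy to state: with measurements y close to |Ax|^2, the Gaussian model yields a least-squares data fit, while the Laplace model yields an l1 fit in which a gross outlier costs only linearly. A toy comparison follows, with all names illustrative.
```python
# Toy comparison of the two data-fidelity terms for squared-magnitude
# measurements y ~ |A x|^2 + noise; names are illustrative.
import numpy as np

def data_fit(x, A, y, model="laplace"):
    resid = np.abs(A @ x) ** 2 - y          # residual on the squared magnitudes
    if model == "gaussian":
        return 0.5 * np.sum(resid ** 2)     # quadratic: outliers dominate the fit
    return np.sum(np.abs(resid))            # Laplace/l1: outliers cost only linearly
```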
Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation
In this paper, we aim at recovering an unknown signal x0 from noisy
measurements y = Phi*x0 + w, where Phi is an ill-conditioned or singular linear
operator and w accounts for some noise. To regularize such an ill-posed inverse
problem, we impose an analysis sparsity prior. More precisely, the recovery is
cast as a convex optimization program where the objective is the sum of a
quadratic data fidelity term and a regularization term formed of the L1-norm of
the correlations between the sought after signal and atoms in a given
(generally overcomplete) dictionary. The L1-sparsity analysis prior is weighted
by a regularization parameter lambda>0. In this paper, we prove that any
minimizer of this problem is a piecewise-affine function of the observations y
and the regularization parameter lambda. As a byproduct, we exploit these
properties to get an objectively guided choice of lambda. In particular, we
develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE)
and show that it is an unbiased and reliable estimator of an appropriately
defined risk. The latter encompasses special cases such as the prediction risk,
the projection risk and the estimation risk. We apply these risk estimators to
the special case of L1-sparsity analysis regularization. We also discuss
implementation issues and propose fast algorithms to solve the L1 analysis
minimization problem and to compute the associated GSURE. We finally illustrate
the applicability of our framework to parameter(s) selection on several imaging
problems.
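The piecewise-affine dependence on y is what makes divergence-based risk estimation well posed. As a simplified illustration (classical SURE for the denoising case Phi = Id, rather than the paper's full GSURE; `solver` is a placeholder for any routine returning the L1-analysis minimizer), the divergence of the solution map can be probed by a Monte Carlo finite difference.
```python
# Simplified illustration: Monte Carlo SURE for a denoising map y -> solver(y)
# with noise level sigma (classical SURE, Phi = Id, not the paper's full GSURE).
import numpy as np

def mc_sure(solver, y, sigma, eps=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    x = solver(y)
    delta = rng.standard_normal(y.shape)
    # A directional finite difference probes the divergence of y -> x(y),
    # well defined a.e. since the map is piecewise affine in y.
    div = delta @ (solver(y + eps * delta) - x) / eps
    return np.sum((x - y) ** 2) - y.size * sigma ** 2 + 2 * sigma ** 2 * div
```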