Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are central themes in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
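To make this concrete, below is a minimal sketch of the forward-backward scheme for the $\ell_1$-regularized least-squares problem, one instance of the sparsity priors discussed above; the names (A, y, lam, n_iter) and the fixed step-size choice are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of forward-backward proximal splitting (ISTA) for
#   min_x  0.5 * ||y - A x||_2^2 + lam * ||x||_1.
# All names and parameter choices are illustrative assumptions.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """Alternate a forward (gradient) step on the smooth data fidelity
    with a backward (proximal) step on the non-smooth sparsity prior."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # forward step: gradient of 0.5*||y - Ax||^2
        x = soft_threshold(x - gamma * grad, gamma * lam)  # backward (proximal) step
    return x
```

The same template covers the other regularizers mentioned above by swapping in the corresponding proximal operator (e.g. block soft-thresholding for group sparsity, singular-value thresholding for the low-rank/nuclear-norm prior).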
Set optimization - a rather short introduction
Recent developments in set optimization are surveyed and extended, including
various set relations as well as fundamental constructions of a convex analysis
for set- and vector-valued functions, and duality for set optimization
problems. Extensive sections with bibliographical comments summarize the state
of the art. Applications to vector optimization and financial risk measures are
discussed, along with algorithmic approaches to set optimization problems.
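For concreteness, two of the most common set relations appearing in this literature are the lower and upper set less relations; the notation below is one standard convention and may differ from the survey's.

```latex
% Lower and upper set less relations for subsets A, B of a vector space Z
% preordered by a convex cone C (one common convention; authors' notation varies).
A \preceq_{l} B \iff B \subseteq A + C,
\qquad
A \preceq_{u} B \iff A \subseteq B - C.
```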
Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
We study the problem of learning-to-learn: inferring a learning algorithm
that works well on tasks sampled from an unknown distribution. As a class of
algorithms, we consider Stochastic Gradient Descent on the true risk regularized
by the squared Euclidean distance to a bias vector. We present an average excess
risk bound for such a learning algorithm. This result quantifies the potential
benefit of using a bias vector with respect to the unbiased case. We then
address the problem of estimating the bias from a sequence of tasks. We propose
a meta-algorithm which incrementally updates the bias, as new tasks are
observed. The low space and time complexity of this approach makes it appealing
in practice. We provide guarantees on the learning ability of the
meta-algorithm. A key feature of our results is that, when the number of tasks
grows and their variance is relatively small, our learning-to-learn approach
has a significant advantage over learning each task in isolation by Stochastic
Gradient Descent without a bias term. We report on numerical experiments which
demonstrate the effectiveness of our approach.
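As a rough illustration of the meta-algorithm's structure, here is a minimal sketch in which the inner solver is SGD on a least-squares risk regularized by the squared distance to the bias, and the bias is updated incrementally as an average of per-task solutions; this update rule and all names (inner_sgd, meta_learn, lam, eta) are assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch of learning-to-learn with biased regularization: each task is solved by
# SGD on  risk(w) + (lam/2) * ||w - bias||^2, and the bias is refined across tasks.
# The incremental-average meta-update is an illustrative assumption.
import numpy as np

def inner_sgd(X, y, bias, lam=0.1, eta=0.01, epochs=5):
    """SGD on the least-squares risk plus (lam/2) * ||w - bias||^2."""
    w = bias.copy()
    for _ in range(epochs):
        for i in np.random.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i] + lam * (w - bias)
            w -= eta * grad
    return w

def meta_learn(tasks):
    """Observe tasks one at a time, keeping a running average of their
    solutions as the bias; space and time cost per task stay low."""
    bias, n = None, 0
    for X, y in tasks:
        if bias is None:
            bias = np.zeros(X.shape[1])
        w = inner_sgd(X, y, bias)   # solve the current task around the bias
        n += 1
        bias += (w - bias) / n      # incremental bias update
    return bias
```

When tasks cluster around a common solution (small task variance), the learned bias pulls each within-task SGD toward that cluster, which is precisely the regime where the abstract's comparison favors learning-to-learn over solving each task in isolation.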
Capital allocation for set-valued risk measures
We introduce the notion of a set-valued capital allocation rule and study capital allocation principles for multivariate set-valued coherent and convex risk measures. We compare these rules with some of those most commonly used for univariate (single-valued) risk measures.
A proximal iteration for deconvolving Poisson noisy images using sparse representations
We propose an image deconvolution algorithm for data contaminated by
Poisson noise. The image to restore is assumed to be sparsely represented in a
dictionary of waveforms such as the wavelet or curvelet transforms. Our key
contributions are: First, we handle the Poisson noise properly by using the
Anscombe variance stabilizing transform, leading to a non-linear
degradation equation with additive Gaussian noise. Second, the deconvolution
problem is formulated as the minimization of a convex functional with a
data-fidelity term reflecting the noise properties, and a non-smooth
sparsity-promoting penalty over the image representation coefficients (e.g. the
$\ell_1$-norm). Third, a fast iterative backward-forward splitting algorithm is
proposed to solve the minimization problem. We derive existence and uniqueness
conditions of the solution, and establish convergence of the iterative
algorithm. Finally, a GCV-based model selection procedure is proposed to
objectively select the regularization parameter. Experimental results are
carried out to show the striking benefits gained from taking into account the
Poisson statistics of the noise. These results also suggest that using
sparse-domain regularization may be tractable in many deconvolution
applications with Poisson noise, such as astronomy and microscopy.
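The variance-stabilizing step can be illustrated directly; the sketch below applies the Anscombe transform to simulated Poisson counts and checks that the transformed noise has approximately unit variance. The simple algebraic inverse is an assumption for illustration (refined unbiased inverses exist).

```python
# Sketch of the Anscombe variance-stabilizing transform (VST): Poisson counts
# are mapped so the noise becomes approximately Gaussian with unit variance.
import numpy as np

def anscombe(x):
    """Anscombe VST: A(x) = 2 * sqrt(x + 3/8)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inv_anscombe(z):
    """Direct algebraic inverse (illustrative; unbiased variants exist)."""
    return (z / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=100_000)
print(np.var(counts))            # ~ 20: Poisson variance grows with intensity
print(np.var(anscombe(counts)))  # ~ 1: variance stabilized after the VST
```

After stabilization the data-fidelity term can be treated as Gaussian (up to the non-linearity the VST introduces into the degradation equation), which is what makes the convex formulation above applicable.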