A Primal-Dual Algorithmic Framework for Constrained Convex Minimization
We present a primal-dual algorithmic framework to obtain approximate
solutions to a prototypical constrained convex optimization problem, and
rigorously characterize how common structural assumptions affect the numerical
efficiency. Our main analysis technique provides a fresh perspective on
Nesterov's excessive gap technique in a structured fashion and unifies it with
smoothing and primal-dual methods. For instance, through the choices of a dual
smoothing strategy and a center point, our framework subsumes decomposition
algorithms, augmented Lagrangian as well as the alternating direction
method-of-multipliers methods as its special cases, and provides optimal
convergence rates on the primal objective residual as well as the primal
feasibility gap of the iterates for all.
Comment: This paper consists of 54 pages with 7 tables and 12 figures.
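One of the special cases named above, the augmented Lagrangian (method of multipliers), can be sketched for a simple equality-constrained quadratic program; the problem data, penalty parameter, and iteration count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Augmented-Lagrangian sketch (one special case the framework subsumes):
# minimize f(x) = 0.5*||x||^2  subject to  A x = b.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)

rho = 1.0          # penalty parameter (assumed)
lam = np.zeros(3)  # dual multipliers
x = np.zeros(6)

for _ in range(200):
    # x-update: minimize 0.5*||x||^2 + lam^T (Ax - b) + (rho/2)*||Ax - b||^2,
    # which has the closed form (I + rho A^T A) x = A^T (rho b - lam)
    x = np.linalg.solve(np.eye(6) + rho * A.T @ A, A.T @ (rho * b - lam))
    # dual ascent on the multipliers
    lam = lam + rho * (A @ x - b)

# primal feasibility gap, one of the quantities the paper's rates control
feasibility_gap = np.linalg.norm(A @ x - b)
```

With an exact x-minimization, this iteration is proximal-point on the dual and drives the feasibility gap to zero for any rho > 0.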
Shape Constrained Regularisation by Statistical Multiresolution for Inverse Problems: Asymptotic Analysis
This paper is concerned with a novel regularisation technique for solving
linear ill-posed operator equations in Hilbert spaces from data that is
corrupted by white noise. We combine convex penalty functionals with
extreme-value statistics of projections of the residuals on a given set of
subspaces in the image space of the operator. We prove general consistency and
convergence rate results in the framework of Bregman divergences, which allows
for a vast range of penalty functionals. Various examples that indicate the
applicability of our approach will be discussed. We will illustrate in the
context of signal and image processing that the presented method constitutes a
locally adaptive reconstruction method.
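The projection-plus-extreme-value idea can be sketched numerically: below, a maximum of normalized residual sums over dyadic intervals stands in for the statistic. The dyadic-indicator subspace system and the data are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

# Multiresolution-type statistic sketch: extreme-value statistic of
# residual projections onto a system of subspaces -- here, indicators
# of dyadic intervals (an illustrative choice).
rng = np.random.default_rng(2)
n = 256
residuals = rng.standard_normal(n)  # white-noise residuals

stat = -np.inf
length = n
while length >= 1:
    for start in range(0, n, length):
        block = residuals[start:start + length]
        # normalized projection onto the indicator of this interval
        stat = max(stat, abs(block.sum()) / np.sqrt(len(block)))
    length //= 2
```

A reconstruction would then be constrained so that this statistic, evaluated on its residuals, stays below a quantile of its distribution under pure noise.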
Entropic Wasserstein Gradient Flows
This article details a novel numerical scheme to approximate gradient flows
for optimal transport (i.e. Wasserstein) metrics. These flows have proved
useful for tackling, both theoretically and numerically, non-linear diffusion equations
that model for instance porous media or crowd evolutions. These gradient flows
define a suitable notion of weak solutions for these evolutions and they can be
approximated in a stable way using discrete flows. These discrete flows are
implicit Euler time steps with respect to the Wasserstein metric. A bottleneck
of these approaches is the high computational cost of each step. Indeed, each
step requires solving a convex optimization
problem involving a Wasserstein distance to the previous iterate. Following
several recent works on the approximation of Wasserstein distances, we consider
a discrete flow induced by an entropic regularization of the transportation
coupling. This entropic regularization allows one to trade the initial
Wasserstein fidelity term for a Kullback-Leibler divergence, which is easier to
deal with numerically. We show how KL proximal schemes, and in particular
Dykstra's algorithm, can be used to compute each step of the regularized flow.
The resulting algorithm is fast, parallelizable, and versatile, because it
only requires multiplications by a Gibbs kernel. On Euclidean domains
discretized on a uniform grid, this corresponds to a linear filtering (for
instance a Gaussian filtering when the ground cost is the squared Euclidean distance) which
can be computed in nearly linear time. On more general domains, such as
(possibly non-convex) shapes or on manifolds discretized by a triangular mesh,
following a recently proposed numerical scheme for optimal transport, this
Gibbs kernel multiplication is approximated by a short-time heat diffusion.
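To see why each step reduces to multiplications by a Gibbs kernel, here is a minimal Sinkhorn-type sketch for a single entropic optimal-transport problem between two 1-D histograms. The grid, marginals, regularization strength, and iteration count are illustrative assumptions; the paper applies such KL-proximal steps inside the discrete gradient flow rather than to a standalone transport problem.

```python
import numpy as np

# Entropic OT between two 1-D histograms: every iteration is just a
# multiplication by the Gibbs kernel K = exp(-C / eps).
n = 50
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2    # squared Euclidean ground cost
eps = 0.1                             # entropic regularization (assumed)
K = np.exp(-C / eps)                  # Gibbs kernel (a Gaussian filter here)

# two normalized histograms
p = np.exp(-((x - 0.3) ** 2) / 1e-2); p /= p.sum()
q = np.exp(-((x - 0.7) ** 2) / 1e-2); q /= q.sum()

a = np.ones(n)
b = np.ones(n)
for _ in range(2000):
    b = q / (K.T @ a)                 # rescale to match the column marginal
    a = p / (K @ b)                   # rescale to match the row marginal

P = a[:, None] * K * b[None, :]       # regularized transport plan
```

On a uniform grid the products `K @ b` and `K.T @ a` are Gaussian filterings, which is what makes the scheme nearly linear-time; on meshes they are replaced by short-time heat diffusions as described above.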
First order algorithms in variational image processing
Variational methods in imaging have nowadays developed into a quite
universal and flexible tool, allowing for highly successful approaches on tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such
approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \to \min_u$,
where the functional $\mathcal{D}$ is a data fidelity term also depending on
some input data $f$ and measuring the deviation of $Ku$ from such, and
$\mathcal{R}$ is a regularization functional. Moreover, $K$ is a (often linear)
forward operator modeling the dependence of data on an underlying image $u$,
and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often
smooth and (strictly) convex, the current practice almost exclusively uses
nonsmooth regularization functionals. The majority of successful techniques
uses nonsmooth and convex functionals like the total variation and
generalizations thereof or $\ell_1$-norms of coefficients arising from scalar
products with some frame system. The efficient solution of such variational
problems in imaging calls for appropriate algorithms. Taking into account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications.
Comment: 60 pages, 33 figures.
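A minimal forward-backward splitting sketch for the model D(Ku) + alpha*R(u), with a least-squares data term and an l1 regularizer standing in for the nonsmooth functionals discussed above; the problem data, alpha, and iteration count are illustrative assumptions.

```python
import numpy as np

# Forward-backward splitting for D(Ku) + alpha*R(u) with
# D(Ku) = 0.5*||K u - f||^2 (smooth) and R(u) = ||u||_1 (nonsmooth).
rng = np.random.default_rng(1)
K = rng.standard_normal((20, 40))          # (often linear) forward operator
u_true = np.zeros(40)
u_true[[3, 17, 31]] = [1.5, -2.0, 1.0]     # sparse ground-truth image
f = K @ u_true                             # input data

alpha = 0.1                                # regularization parameter (assumed)
tau = 1.0 / np.linalg.norm(K, 2) ** 2      # step size <= 1/L, L = ||K||^2

def soft_threshold(v, t):
    """Proximal map of t*||.||_1 (the shrinkage operator)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

u = np.zeros(40)
for _ in range(2000):
    grad = K.T @ (K @ u - f)               # gradient step on the smooth term
    u = soft_threshold(u - tau * grad, tau * alpha)  # prox step on alpha*R

objective = 0.5 * np.linalg.norm(K @ u - f) ** 2 + alpha * np.abs(u).sum()
```

The split exploits exactly the structure mentioned above: the smooth fidelity term is handled by an explicit gradient step and the nonsmooth regularizer by its (here closed-form) proximal map.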