A stochastic preconditioned Douglas-Rachford splitting method for saddle-point problems
In this article, we propose and study a stochastic preconditioned
Douglas-Rachford splitting method to solve saddle-point problems which have
separable dual variables. We prove the almost sure convergence of the iteration
sequences in Hilbert spaces for a class of convex-concave and nonsmooth
saddle-point problems. We also provide the sublinear convergence rate for the
ergodic sequence with respect to the expectation of the restricted primal-dual
gap functions. Numerical experiments show the high efficiency of the proposed
stochastic preconditioned Douglas-Rachford splitting methods.
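To make the splitting structure concrete, here is a minimal sketch of the classical (deterministic, unpreconditioned) Douglas-Rachford iteration on a simple composite problem; the test problem, parameter values, and function names are illustrative assumptions, not the stochastic preconditioned method of the article.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, b):
    # Proximal operator of t * 0.5 * ||. - b||^2.
    return (v + t * b) / (1.0 + t)

def douglas_rachford(b, lam, t=1.0, iters=200):
    # Solve min_x 0.5*||x - b||^2 + lam*||x||_1 by Douglas-Rachford splitting.
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_quad(z, t, b)            # resolvent of the quadratic term
        y = prox_l1(2 * x - z, t * lam)   # resolvent of the l1 term at the reflection
        z = z + y - x                     # Douglas-Rachford fixed-point update
    return prox_quad(z, t, b)

b = np.array([3.0, -0.2, 1.5])
x = douglas_rachford(b, lam=1.0)
# Closed-form solution is soft-thresholding of b: [2.0, 0.0, 0.5]
```

The point of the splitting is that each step touches only one of the two terms through its proximal map, so neither term's structure needs to be smooth or simple jointly with the other.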
An inertial forward-backward algorithm for monotone inclusions
In this paper, we propose an inertial forward backward splitting algorithm to
compute a zero of the sum of two monotone operators, with one of the two
operators being co-coercive. The algorithm is inspired by the accelerated
gradient method of Nesterov, but can be applied to a much larger class of
problems including convex-concave saddle point problems and general monotone
inclusions. We prove convergence of the algorithm in a Hilbert space setting
and show that several recently proposed first-order methods can be obtained as
special cases of the general algorithm. Numerical results show that the
proposed algorithm converges faster than existing methods, while keeping the
computational cost of each iteration essentially unchanged.
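As an illustration of the forward-backward structure with inertia, the following is a minimal FISTA-style sketch for a smooth-plus-nonsmooth sum, a special case of the monotone inclusion in which the cocoercive operator is a gradient; the inertial weight, test problem, and names are illustrative assumptions, not the paper's general scheme.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_fb(A, b, lam, iters=500):
    # Inertial forward-backward for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    t = 1.0 / L                          # step size (gradient is 1/L-cocoercive)
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, iters + 1):
        alpha = (k - 1) / (k + 2)        # Nesterov-style inertial weight
        y = x + alpha * (x - x_prev)     # inertial (extrapolation) step
        grad = A.T @ (A @ y - b)         # forward step: gradient of the smooth part
        x_prev, x = x, prox_l1(y - t * grad, t * lam)  # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
x = inertial_fb(A, b, lam=0.1)
```

With alpha fixed at 0 this reduces to the plain forward-backward (proximal gradient) method; the extrapolation step is what gives the accelerated behavior observed in the numerical results.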
Operator Splitting Methods for Convex and Nonconvex Optimization
This dissertation focuses on a family of optimization methods called operator splitting methods. They solve complicated problems by decomposing the problem structure into simpler pieces and making progress on each of them separately. Over the past two decades, there has been a resurgence of interest in these methods as the demand for solving structured large-scale problems grew. One of the major challenges for splitting methods is their sensitivity to ill-conditioning, which often makes them struggle to achieve a high order of accuracy. Furthermore, their classical analyses are restricted to the nice settings where solutions do exist and everything is convex. Much less is known when either of these assumptions breaks down.

This work aims to address the issues above. Specifically, we propose a novel acceleration technique called inexact preconditioning, which exploits second-order information at relatively low computational cost. We also show that certain splitting methods still work on problems without solutions, in the sense that their iterates provide information on what goes wrong and how to fix it. Finally, for nonconvex problems with saddle points, we show that, under certain assumptions, splitting methods almost surely converge only to local minima.
First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite
universal and flexible tool, allowing for highly successful approaches on tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such
approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \to \min_u$,
where the functional $\mathcal{D}$ is a data fidelity term also
depending on some input data $f$ and measuring the deviation of $Ku$ from such $f$,
and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is a (often linear)
forward operator modeling the dependence of data on an underlying image $u$, and
$\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often
smooth and (strictly) convex, the current practice almost exclusively uses
nonsmooth regularization functionals. The majority of successful techniques
uses nonsmooth and convex functionals like the total variation and
generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar
products with some frame system. The efficient solution of such variational
problems in imaging demands for appropriate algorithms. Taking into account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications.
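As a tiny worked instance of such a variational problem, the sketch below solves a one-dimensional ROF-type total variation denoising model, with a quadratic data fidelity term and a total variation regularizer, by projected gradient on the dual (a Chambolle-type scheme); the discretization, signal, and parameter values are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(f, alpha, iters=500):
    # 1-D ROF model: min_u 0.5*||u - f||^2 + alpha * ||Du||_1,
    # solved on the dual by projected gradient ascent.
    n = len(f)
    D = np.diff(np.eye(n), axis=0)       # forward-difference operator, (n-1) x n
    p = np.zeros(n - 1)                  # dual variable
    tau = 0.25                           # step size, <= 1 / ||D||^2
    for _ in range(iters):
        u = f - D.T @ p                  # primal variable recovered from the dual
        p = np.clip(p + tau * (D @ u), -alpha, alpha)  # project onto the l_inf ball
    return f - D.T @ p

f = np.array([0.0, 0.1, -0.1, 5.0, 4.9, 5.1])  # noisy two-level signal
u = tv_denoise_1d(f, alpha=0.2)
```

The dual formulation replaces the nonsmooth total variation term by a simple box constraint, which is exactly the kind of structural decomposition that the splitting algorithms surveyed here exploit.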
First-order primal-dual methods for nonsmooth nonconvex optimisation
We provide an overview of primal-dual algorithms for nonsmooth and
non-convex-concave saddle-point problems. The overview is built around a new
analysis of such methods, using Bregman divergences to formulate simplified
conditions for convergence.
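For concreteness, here is a minimal sketch of one well-known first-order primal-dual method, the Chambolle-Pock primal-dual hybrid gradient iteration, applied to a small convex model problem; it illustrates only the basic primal-dual structure, and the step sizes, test problem, and names are illustrative assumptions rather than the nonconvex analysis of this overview.

```python
import numpy as np

def pdhg(K, b, lam, iters=10000):
    # Primal-dual hybrid gradient (Chambolle-Pock) for
    #   min_x 0.5*||x - b||^2 + lam * ||K x||_1.
    L = np.linalg.norm(K, 2)
    tau = sigma = 0.9 / L                # step sizes with tau * sigma * L^2 < 1
    x = np.zeros(K.shape[1])
    x_bar = x.copy()
    y = np.zeros(K.shape[0])
    for _ in range(iters):
        # Dual ascent: prox of the conjugate of lam*||.||_1 is projection
        # onto the l_inf ball of radius lam.
        y = np.clip(y + sigma * (K @ x_bar), -lam, lam)
        # Primal descent: prox of 0.5*||. - b||^2 in closed form.
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2 * x_new - x            # extrapolation (overrelaxation) step
        x = x_new
    return x, y

n = 5
K = np.diff(np.eye(n), axis=0)           # finite-difference operator
b = np.array([1.0, 1.2, 0.9, -1.0, -1.1])
x, y = pdhg(K, b, lam=0.2)
```

Each iteration alternates a proximal step on the dual variable with one on the primal variable, coupled only through applications of $K$ and $K^T$; this is the structure whose convergence conditions the Bregman-divergence analysis simplifies.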