Generalized Forward-Backward Splitting
This paper introduces the generalized forward-backward splitting algorithm
for minimizing convex functions of the form $F = f + \sum_{i=1}^n g_i$, where
$f$ has a Lipschitz-continuous gradient and the $g_i$'s are simple in the sense
that their Moreau proximity operators are easy to compute. While the
forward-backward algorithm cannot deal with more than $n = 1$ non-smooth
function, our method generalizes it to the case of arbitrary $n$. Our method
makes explicit use of the regularity of $f$ in the forward step, and the
proximity operators of the $g_i$'s are applied in parallel in the backward
step. This allows the generalized forward-backward to efficiently address an
important class of convex problems. We prove its convergence in infinite
dimension, and its robustness to errors in the computation of the proximity
operators and of the gradient of $f$. Examples on inverse problems in imaging
demonstrate the advantage of the proposed method in comparison to other
splitting algorithms.
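The iteration described above admits a compact sketch. Below is a minimal Python rendering of the generalized forward-backward update, assuming user-supplied callables `grad_f` (gradient of the smooth term $f$) and `proxes[i](v, tau)` computing $\mathrm{prox}_{\tau g_i}(v)$; the uniform weights, fixed stepsize, and unit relaxation are simplifying assumptions, not the paper's general setting.

```python
import numpy as np

def generalized_forward_backward(grad_f, proxes, x0, step, n_iter=100):
    """Sketch of a generalized forward-backward iteration.

    grad_f : callable returning the gradient of the smooth term f
    proxes : list of callables, proxes[i](v, tau) = prox_{tau * g_i}(v)
    """
    n = len(proxes)
    w = np.full(n, 1.0 / n)            # uniform weights summing to one
    z = [x0.copy() for _ in range(n)]  # one auxiliary variable per g_i
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)                  # forward (explicit) step on f
        for i in range(n):             # backward steps, parallelizable
            z[i] = z[i] + proxes[i](2 * x - z[i] - step * g, step / w[i]) - x
        x = sum(w[i] * z[i] for i in range(n))
    return x
```

For instance, with `grad_f = lambda x: A.T @ (A @ x - b)` and each prox a soft-thresholding operator, this sketch addresses a least-squares problem with several $\ell_1$-type penalties.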
Introduction to Nonsmooth Analysis and Optimization
This book aims to give an introduction to generalized derivative concepts
useful in deriving necessary optimality conditions and numerical algorithms for
infinite-dimensional nondifferentiable optimization problems that arise in
inverse problems, imaging, and PDE-constrained optimization. It covers convex
subdifferentials, Fenchel duality, monotone operators and resolvents,
Moreau–Yosida regularization, as well as Clarke and (briefly) limiting
subdifferentials. Both first-order (proximal point and splitting) methods and
second-order (semismooth Newton) methods are treated. In addition,
differentiation of set-valued mappings is discussed and used for deriving
second-order optimality conditions for, as well as Lipschitz stability
properties of, minimizers. The required background from functional analysis and
calculus of variations is also briefly summarized.
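As a concrete taste of the proximal toolbox such a book develops, here is a small illustrative sketch (not taken from the book): the proximity operator of the $\ell_1$ norm, and the Moreau envelope evaluated through it. The function names are hypothetical.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximity operator of tau * ||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def moreau_envelope_l1(v, gamma):
    """Moreau envelope of ||.||_1 with parameter gamma, evaluated via its
    prox: e_gamma(v) = ||p||_1 + ||v - p||^2 / (2*gamma), p = prox_l1(v, gamma).
    This smooth function coincides with the Huber function."""
    p = prox_l1(v, gamma)
    return np.abs(p).sum() + np.sum((v - p) ** 2) / (2.0 * gamma)
```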
Distributed Convex Optimisation using the Alternating Direction Method of Multipliers (ADMM) in Lossy Scenarios
The Alternating Direction Method of Multipliers (ADMM) is an extensively studied algorithm suitable for solving convex distributed optimisation problems. This thesis presents a formulation of the ADMM that is guaranteed to converge even when the communications among agents are faulty and the agents perform their updates asynchronously. With strongly convex costs, the proposed algorithm is shown to converge exponentially fast. A further extension to partition-based problems is also presented.
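For reference, here is a minimal sketch of standard synchronous consensus ADMM on a toy quadratic problem; the lossy, asynchronous mechanics that the thesis actually analyzes are not modeled here, and all names and parameters are illustrative.

```python
import numpy as np

def consensus_admm(a, rho=1.0, n_iter=50):
    """Sketch of lossless, synchronous consensus ADMM for
    min_x sum_i 0.5 * ||x - a_i||^2, where row i of `a` is agent i's data.
    The thesis modifies this kind of scheme to tolerate dropped messages
    and asynchronous updates, which this sketch does not model."""
    n, d = a.shape
    x = np.zeros((n, d))   # local copies, one per agent
    u = np.zeros((n, d))   # scaled dual variables
    z = np.zeros(d)        # consensus variable
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local closed-form updates
        z = (x + u).mean(axis=0)                # gather/average step
        u = u + x - z                           # dual ascent step
    return z
```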
Adaptive proximal algorithms for convex optimization under local Lipschitz continuity of the gradient
Backtracking linesearch is the de facto approach for minimizing continuously
differentiable functions with locally Lipschitz gradient. In recent years, it
has been shown that in the convex setting it is possible to avoid linesearch
altogether, and to allow the stepsize to adapt based on a local smoothness
estimate without any backtracks or evaluations of the function value. In this
work we propose an adaptive proximal gradient method, adaPG, that uses novel
estimates of the local smoothness modulus, leading to less conservative
stepsize updates, and that can additionally cope with nonsmooth terms. This idea
is extended to the primal-dual setting, where an adaptive three-term primal-dual
algorithm, adaPD, is proposed that can be viewed as an extension of the PDHG
method. Moreover, in this setting the "essentially" fully adaptive variant
adaPD$^+$ is proposed, which avoids evaluating the linear operator norm by
invoking a backtracking procedure that, remarkably, does not require extra
gradient evaluations. Numerical simulations demonstrate the effectiveness of
the proposed algorithms compared to the state of the art.
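The linesearch-free idea can be sketched as follows: drive the stepsize with a local smoothness estimate built from successive gradients, with no function evaluations. This follows the general adaptive-stepsize pattern, not adaPG's exact update rule; `prox_g` and the safeguard constants are assumptions.

```python
import numpy as np

def adaptive_prox_grad(grad_f, prox_g, x0, gamma0=1e-3, n_iter=200):
    """Hedged sketch of an adaptive proximal gradient method: the stepsize
    adapts to a local Lipschitz estimate from successive gradients, without
    linesearch. A simplified scheme in the spirit of adaPG, not the paper's
    exact update rule."""
    x_prev, g_prev, gamma_prev = x0, grad_f(x0), gamma0
    gamma = gamma0
    x = prox_g(x_prev - gamma * g_prev, gamma)
    for _ in range(n_iter):
        g = grad_f(x)
        # local smoothness modulus on the last segment
        L = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), 1e-16)
        # grow the stepsize cautiously, but never beyond 1/(2L)
        gamma_next = min(gamma * np.sqrt(1.0 + gamma / gamma_prev),
                         1.0 / (2.0 * L) if L > 0 else np.inf)
        x_prev, g_prev = x, g
        gamma_prev, gamma = gamma, gamma_next
        x = prox_g(x - gamma * g, gamma)  # forward-backward step
    return x
```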
New strong convergence method for the sum of two maximal monotone operators
This paper aims to obtain a strong convergence result for a Douglas–Rachford splitting method with an inertial extrapolation step for finding a zero of the sum of two set-valued maximal monotone operators, without any further assumption of uniform monotonicity on either of the involved operators. Furthermore, our proposed method is easy to implement, and the inertial factor in our method is a natural choice. Our method of proof is of independent interest. Finally, some numerical implementations are given to confirm the theoretical analysis.
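A minimal sketch of a Douglas–Rachford iteration with a simple inertial extrapolation, assuming access to the resolvents $J_A = (I+A)^{-1}$ and $J_B = (I+B)^{-1}$; the placement of the inertial step and the factor `theta` are illustrative choices, not the paper's scheme or its convergence conditions.

```python
def douglas_rachford_inertial(J_A, J_B, z0, theta=0.3, n_iter=100):
    """Sketch of Douglas-Rachford splitting with inertial extrapolation for
    finding a zero of A + B, given resolvent callables J_A and J_B."""
    z_old = z0
    z = z0
    for _ in range(n_iter):
        w = z + theta * (z - z_old)      # inertial extrapolation
        x = J_A(w)
        y = J_B(2 * x - w)               # reflected resolvent step
        z_old, z = z, w + y - x          # Douglas-Rachford update
    return J_A(z)                        # approximates a zero of A + B
```

When $A = \partial g$ and $B = \partial f$ for convex $f, g$, the resolvents are simply the proximity operators of $g$ and $f$.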
Numerical splitting methods for nonsmooth convex optimization problems
In this thesis, we develop and investigate numerical methods for solving nonsmooth convex optimization problems in real Hilbert spaces. We construct algorithms that handle the terms in the objective function and the constraints of the minimization problems separately, which makes these methods simpler to compute.

In the first part of the thesis, we extend the well-known AMA method of Tseng to the Proximal AMA algorithm by introducing variable metrics in the subproblems of the primal-dual algorithm. For a special choice of metrics, the subproblems become proximal steps. Thus, for objectives in many important applications, such as signal and image processing, machine learning, or statistics, the iteration process consists of closed-form expressions that are easy to calculate.

In the further course of the thesis, we intensify the investigation of this algorithm by considering and studying a dynamical system. Through explicit time discretization of this system, we obtain Proximal AMA. We show the existence and uniqueness of strong global solutions of the dynamical system and prove that its trajectories converge to the primal-dual solution of the considered optimization problem.

In the last part of this thesis, we minimize a sum of finitely many nonsmooth convex functions (each possibly composed with a linear operator) over a nonempty, closed and convex set by smoothing these functions. We consider a stochastic algorithm in which we take gradient steps of the smoothed functions (which are proximal steps if we smooth by the Moreau envelope), and use a mirror map to 'mirror' the iterates onto the feasible set. In numerical applications, we compare these methods to similar ones and discuss the advantages and practical usability of these new algorithms.
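The smoothing idea in the last part can be sketched using the identity $\nabla e_{\gamma} g(x) = (x - \mathrm{prox}_{\gamma g}(x))/\gamma$ for the Moreau envelope; choosing the Euclidean mirror map, so that 'mirroring' becomes plain projection, is a simplifying assumption, and `proxes` and `project` are hypothetical user-supplied callables.

```python
import numpy as np

def stochastic_smoothed_projection(proxes, project, x0, gamma=0.1,
                                   step=0.05, n_iter=500, seed=0):
    """Hedged sketch: smooth each nonsmooth g_i by its Moreau envelope,
    take stochastic gradient steps using
        grad e_gamma g_i (x) = (x - prox_{gamma * g_i}(x)) / gamma,
    and map iterates back onto the feasible set via `project`."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iter):
        i = rng.integers(len(proxes))                 # sample one term
        grad = (x - proxes[i](x, gamma)) / gamma      # envelope gradient
        x = project(x - step * grad)                  # 'mirror' onto the set
    return x
```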