Introduction to Nonsmooth Analysis and Optimization
This book aims to give an introduction to generalized derivative concepts
useful in deriving necessary optimality conditions and numerical algorithms for
infinite-dimensional nondifferentiable optimization problems that arise in
inverse problems, imaging, and PDE-constrained optimization. It covers convex
subdifferentials, Fenchel duality, monotone operators and resolvents,
Moreau--Yosida regularization as well as Clarke and (briefly) limiting
subdifferentials. Both first-order (proximal point and splitting) methods and
second-order (semismooth Newton) methods are treated. In addition,
differentiation of set-valued mappings is discussed and used to derive
second-order optimality conditions for minimizers as well as their Lipschitz
stability properties. The required background from functional analysis and
calculus of variations is also briefly summarized.

Comment: arXiv admin note: substantial text overlap with arXiv:1708.0418
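The proximal point method mentioned above can be made concrete with a small sketch. The following is not taken from the book; the choice of the l1 norm (whose proximal map is componentwise soft-thresholding) and the step length are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, lam):
    # Proximal map of f(x) = lam * ||x||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(x0, lam=0.5, iters=50):
    # Proximal point iteration x_{k+1} = prox_{lam f}(x_k) for f = ||.||_1;
    # each step shrinks every component toward the minimizer x* = 0.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_l1(x, lam)
    return x
```

For instance, `prox_l1(np.array([2.0, -0.3, 0.7]), 0.5)` shrinks the large entries by 0.5 and sets the small one to zero, and the iteration drives every starting point to the minimizer of the l1 norm.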
Forward-Half-Reflected-Partial inverse-Backward Splitting Algorithm for Solving Monotone Inclusions
In this article, we propose a method for numerically solving monotone
inclusions in real Hilbert spaces that involve the sum of a maximally monotone
operator, a monotone-Lipschitzian operator, a cocoercive operator, and a normal
cone to a vector subspace. Our algorithm splits and exploits the intrinsic
properties of each operator involved in the inclusion. The proposed method is
derived by combining partial inverse techniques and the {\it
forward-half-reflected-backward} (FHRB) splitting method proposed by Malitsky
and Tam (2020). Our method inherits the advantages of FHRB, requiring only one
activation of the Lipschitzian operator, one activation of the cocoercive
operator, two projections onto the closed vector subspace, and one calculation
of the resolvent of the maximally monotone operator. Furthermore, we develop a
method for solving primal-dual inclusions involving a mixture of sums, linear
compositions, parallel sums, Lipschitzian operators, cocoercive operators, and
normal cones. We apply our method to constrained composite convex optimization
problems as a specific example. Finally, in order to compare our proposed
method with existing methods in the literature, we provide numerical
experiments on constrained total variation least-squares optimization problems.
The numerical results are promising.
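As a rough illustration of the FHRB template the method builds on (one resolvent, one reflected evaluation of the Lipschitzian operator, one plain forward evaluation of the cocoercive operator; the partial-inverse component for the subspace constraint is omitted), here is a toy instance in Python. The box constraint, skew-symmetric operator, and quadratic term are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: monotone, 1-Lipschitz
b = np.array([2.0, 0.5])

def B(x):
    # Monotone Lipschitzian operator (handled by the reflected forward step).
    return M @ x

def C(x):
    # Cocoercive operator: gradient of 0.5 * ||x - b||^2 (plain forward step).
    return x - b

def proj_box(x, lo=-1.0, hi=1.0):
    # Resolvent of the normal cone to the box [lo, hi]^2 is the projection.
    return np.clip(x, lo, hi)

def fhrb(x0, lam=0.2, iters=5000):
    # Forward-half-reflected-backward iteration for 0 in N_box(x) + Bx + Cx:
    # x_{n+1} = proj_box( x_n - lam*(B x_n + C x_n) - lam*(B x_n - B x_{n-1}) )
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        step = x - lam * (B(x) + C(x)) - lam * (B(x) - B(x_prev))
        x_prev, x = x, proj_box(step)
    return x
```

For this data the inclusion can be solved by hand (the solution is the corner (1, 1) of the box), so convergence is easy to check via the projected fixed-point residual. The step size is chosen conservatively below the usual bound dictated by the Lipschitz and cocoercivity constants.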