A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction
We develop an implementable stochastic proximal point (SPP) method for a
class of weakly convex, composite optimization problems. The proposed
stochastic proximal point algorithm incorporates a variance reduction mechanism
and the resulting SPP updates are solved using an inexact semismooth Newton
framework. We establish detailed convergence results that take the inexactness
of the SPP steps into account and that are in accordance with existing
convergence guarantees of (proximal) stochastic variance-reduced gradient
methods. Numerical experiments show that the proposed algorithm competes
favorably with other state-of-the-art methods and achieves higher robustness
with respect to the step size selection.
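As a rough sketch of the flavor of update described above (not the authors' algorithm: the quadratic loss, every name, and all parameter choices are invented for illustration), the Python snippet below runs SVRG-style variance-reduced stochastic proximal point steps on a least-squares objective; each component is quadratic, so the implicit proximal equation is linear and solvable in closed form via Sherman-Morrison, sidestepping the inexact semismooth Newton inner solve needed in the general weakly convex composite setting:

    import numpy as np

    def vr_spp_step(x_k, x_tilde, g_full, A, b, i, gamma):
        # One variance-reduced SPP step for f_i(x) = 0.5 * (a_i @ x - b_i)**2:
        #   x+ = argmin_x  f_i(x) + <grad f(x_tilde) - grad f_i(x_tilde), x>
        #                  + 1/(2*gamma) * ||x - x_k||^2.
        # The optimality condition (I/gamma + a_i a_i^T) x = rhs is linear,
        # so Sherman-Morrison gives x+ in closed form.
        a = A[i]
        c = g_full - a * (a @ x_tilde - b[i])      # SVRG-style correction
        rhs = x_k / gamma - c + b[i] * a
        return gamma * (rhs - gamma * a * (a @ rhs) / (1.0 + gamma * (a @ a)))

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
    x, gamma = np.zeros(10), 0.1
    for epoch in range(30):
        x_tilde = x.copy()
        g_full = A.T @ (A @ x_tilde - b) / len(b)  # full gradient at the anchor
        for _ in range(len(b)):
            x = vr_spp_step(x, x_tilde, g_full, A, b, rng.integers(len(b)), gamma)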
On the local convergence of the semismooth Newton method for composite optimization
Existing superlinear convergence results for the semismooth Newton method rely
on the nonsingularity of the B-Jacobian. This is a restrictive condition, since
it implies that the stationary point being sought is isolated. In this paper, we
consider a large class of nonlinear equations derived from first-order-type
methods for solving composite optimization problems. We first present some
equivalent characterizations of the invertibility of the associated B-Jacobian,
providing easy-to-check criteria for the traditional condition. Secondly, we
prove that strict complementarity and a local error bound condition together
guarantee a local superlinear convergence rate. The analysis consists of two steps:
showing local smoothness based on partial smoothness or closedness of the set
of nondifferentiable points of the proximal map, and applying the local error
bound condition to the locally smooth nonlinear equations. Concrete examples
satisfying the required assumptions are presented. The main novelty of the
proposed condition is that it also applies to nonisolated stationary points.
Comment: 25 pages
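As a self-contained illustration of such a nonlinear equation (our example, not the paper's: the lasso instance and all names are invented), the sketch below applies a locally convergent semismooth Newton iteration, without globalization, to the natural residual of proximal gradient for the lasso; an element of the B-Jacobian is available in closed form because soft-thresholding is piecewise linear:

    import numpy as np

    def soft(z, tau):
        # proximal map of tau * ||.||_1 (soft-thresholding)
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def ssn_lasso(A, b, lam, tol=1e-10, maxit=50):
        # Semismooth Newton on the natural residual of proximal gradient,
        #   F(x) = x - S_{t*lam}(x - t * A^T (A x - b)),
        # for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.  Soft-thresholding
        # is piecewise linear, so J = I - D (I - t * A^T A) with
        # D = diag(1{|z_i| > t*lam}) is an element of the B-Jacobian of F.
        n = A.shape[1]
        t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L
        x = np.zeros(n)
        H = np.eye(n) - t * (A.T @ A)
        for _ in range(maxit):
            z = x - t * (A.T @ (A @ x - b))
            F = x - soft(z, t * lam)
            if np.linalg.norm(F) < tol:
                break
            d = (np.abs(z) > t * lam).astype(float)  # active smooth pieces
            J = np.eye(n) - d[:, None] * H           # one B-Jacobian element
            x = x - np.linalg.solve(J, F)
        return x

    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((40, 15)), rng.standard_normal(40)
    x_hat = ssn_lasso(A, b, lam=0.1)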
Deflation for semismooth equations
Variational inequalities can, in general, admit multiple distinct solutions. In this
paper we study an algorithm for computing distinct solutions of a variational
inequality, without varying the initial guess supplied to the solver. The
central idea is the combination of a semismooth Newton method with a deflation
operator that eliminates known solutions from consideration. Given one root of
a semismooth residual, deflation constructs a new problem for which a
semismooth Newton method will not converge to the known root, even from the
same initial guess. This enables the discovery of other roots. We prove the
effectiveness of the deflation technique under the same assumptions that
guarantee locally superlinear convergence of a semismooth Newton method. We
demonstrate its utility on various finite- and infinite-dimensional examples
drawn from constrained optimization, game theory, economics and solid
mechanics.
Comment: 24 pages, 3 figures
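To make the mechanism concrete, here is a one-variable sketch (our illustration, not the paper's implementation; the residual, the shift sigma, and the power p are illustrative choices) that applies a shifted deflation operator M(x; r) = 1/|x - r|^p + sigma to a semismooth residual and reruns Newton from the unchanged initial guess:

    import numpy as np

    def deflated_newton(F, dF, x0, roots, sigma=1.0, p=2, tol=1e-10, maxit=100):
        # Newton on the deflated residual G(x) = prod_r M(x; r) * F(x),
        # with shifted deflation operator M(x; r) = 1/|x - r|^p + sigma.
        # G does not vanish at any known root r, so Newton is pushed toward
        # a different solution even from the same initial guess.
        x = x0
        for _ in range(maxit):
            Fx, dFx = F(x), dF(x)
            if abs(Fx) < tol:
                return x
            M, dM = 1.0, 0.0
            for r in roots:
                m = 1.0 / abs(x - r) ** p + sigma
                dm = -p * (x - r) / abs(x - r) ** (p + 2)
                dM = dM * m + M * dm               # product rule over roots
                M *= m
            G, dG = M * Fx, dM * Fx + M * dFx
            x -= G / dG
        return x

    # F(x) = |x| - 1 is semismooth with roots +1 and -1; away from the
    # kink, sign(x) is an element of its B-subdifferential.
    F, dF = lambda x: abs(x) - 1.0, lambda x: float(np.sign(x))
    r1 = deflated_newton(F, dF, 0.5, roots=[])     # plain Newton finds  1.0
    r2 = deflated_newton(F, dF, 0.5, roots=[r1])   # deflated run finds -1.0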
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program, both in survey form and in full detail, together with information on the social program, the venue, special meetings, and more.
A Nonsmooth Augmented Lagrangian Method and its Application to Poisson Denoising and Sparse Control
In this paper, fully nonsmooth optimization problems in Banach spaces with
finitely many inequality constraints, an equality constraint within a Hilbert
space framework, and an additional abstract constraint are considered. First,
we suggest a (safeguarded) augmented Lagrangian method for the numerical
solution of such problems and provide a derivative-free global convergence
theory that applies in situations where the arising subproblems can be
solved to approximate global minimality. This is possible, for example, in a
fully convex setting. As we do not rely on any tool of generalized
differentiation, the results are obtained under minimal continuity assumptions
on the data functions. We then consider two prominent and difficult
applications from image denoising and sparse optimal control where these
findings can be applied in a beneficial way. These two applications are
discussed and investigated in some detail. Due to the different nature of the
two applications, their numerical solution by the (safeguarded) augmented
Lagrangian approach requires problem-tailored techniques to compute approximate
minima of the resulting subproblems. The corresponding methods are discussed,
and numerical results illustrate our theoretical findings.
Comment: 36 pages, 4 figures, 1 table
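To make the outer loop concrete, here is a minimal Python sketch of a safeguarded augmented Lagrangian iteration, ours rather than the paper's: it handles a single smooth equality constraint in R^n, uses a generic BFGS inner solve via scipy in place of the problem-tailored nonsmooth subproblem solvers, and all names, bounds, and update rules are illustrative choices.

    import numpy as np
    from scipy.optimize import minimize

    def safeguarded_alm(f, c, x0, lam_bound=1e4, rho=10.0, tol=1e-8, maxit=50):
        # Safeguarded augmented Lagrangian loop for
        #   min f(x)  subject to  c(x) = 0  (one scalar equality constraint).
        # The multiplier used in each subproblem is projected onto a bounded
        # interval (the "safeguard"); the penalty parameter is increased
        # between outer iterations.
        x, lam = np.asarray(x0, dtype=float), 0.0
        for _ in range(maxit):
            lam_safe = float(np.clip(lam, -lam_bound, lam_bound))
            aug = lambda y: f(y) + lam_safe * c(y) + 0.5 * rho * c(y) ** 2
            x = minimize(aug, x, method="BFGS").x       # inner (smooth) solve
            if abs(c(x)) < tol:
                break
            lam = lam_safe + rho * c(x)                 # multiplier update
            rho *= 2.0                                  # tighten the penalty
        return x, lam

    # Example: min ||x||^2 subject to x_1 + x_2 = 1; solution (0.5, 0.5).
    f = lambda y: y @ y
    c = lambda y: y[0] + y[1] - 1.0
    x_opt, lam_opt = safeguarded_alm(f, c, np.zeros(2))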