Lagrangian-dual functions and Moreau-Yosida regularization
DOI: 10.1137/060673746. SIAM Journal on Optimization, 19(1), 39-6
Lagrange optimality system for a class of nonsmooth convex optimization
In this paper, we revisit the augmented Lagrangian method for a class of
nonsmooth convex optimization problems. We present the Lagrange optimality
system of the augmented Lagrangian associated with these problems, and
establish its connections with the standard optimality condition and the
saddle-point condition of the augmented Lagrangian, which provides a powerful
tool for developing numerical algorithms. We apply a linear Newton method to
the Lagrange optimality system to obtain a novel algorithm applicable to a
variety of nonsmooth convex optimization problems arising in practical
applications. Under suitable conditions, we prove the nonsingularity of the
Newton system and the local convergence of the algorithm.
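The augmented Lagrangian framework described above can be made concrete on a small model problem. Below is a minimal sketch (my own illustration, not the paper's linear Newton method) for the nonsmooth linearly constrained problem min ||x||_1 s.t. Ax = b, where the nonsmooth multiplier subproblem is solved approximately by inner proximal gradient steps:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def augmented_lagrangian_l1(A, b, rho=1.0, outer=100, inner=200):
    """Classical augmented Lagrangian method for  min ||x||_1  s.t.  Ax = b.
    The nonsmooth x-subproblem is solved approximately by proximal
    gradient (ISTA) inner iterations; an illustrative sketch only."""
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)                        # multiplier estimate
    L = rho * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    for _ in range(outer):
        # inner loop: minimize ||x||_1 + lam^T(Ax - b) + (rho/2)||Ax - b||^2
        for _ in range(inner):
            grad = A.T @ (lam + rho * (A @ x - b))
            x = soft_threshold(x - grad / L, 1.0 / L)
        lam = lam + rho * (A @ x - b)        # multiplier update
    return x
```

For a tiny instance with A = [[1, 1]] and b = [1], the iterates settle on a feasible point with minimal 1-norm, such as x = [0.5, 0.5].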
Optimality conditions and Moreau--Yosida regularization for almost sure state constraints
We analyze a potentially risk-averse convex stochastic optimization problem, where the control is deterministic and the state is a Banach-valued essentially bounded random variable. We obtain strong forms of necessary and sufficient optimality conditions for problems subject to equality and conical constraints. We propose a Moreau--Yosida regularization for the conical constraint and show consistency of the optimality conditions for the regularized problem as the regularization parameter is taken to infinity.
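The Moreau--Yosida regularization used above smooths a nonsmooth function by infimal convolution with a quadratic. As a one-dimensional sketch (a standard textbook example, not the paper's Banach-space setting), the envelope of the absolute value is the Huber function:

```python
import numpy as np

def prox_abs(x, gamma):
    # proximal operator of |.| with parameter gamma (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def moreau_envelope_abs(x, gamma):
    """Moreau-Yosida envelope  e_gamma(x) = min_u |u| + (u - x)^2 / (2*gamma),
    evaluated via the prox point; it coincides with the Huber function:
    x^2/(2*gamma) for |x| <= gamma, and |x| - gamma/2 otherwise."""
    u = prox_abs(x, gamma)
    return np.abs(u) + (u - x) ** 2 / (2.0 * gamma)
```

For gamma = 1 this gives e_1(0.5) = 0.125 (quadratic regime) and e_1(3.0) = 2.5 (linear regime), a smooth lower approximation of |x| that improves as gamma shrinks.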
Projection methods in conic optimization
There exist efficient algorithms to project a point onto the intersection of
a convex cone and an affine subspace. Those conic projections are in turn the
workhorse of a range of algorithms in conic optimization, with a variety of
applications in science, finance, and engineering. This chapter reviews some of
these algorithms, emphasizing the so-called regularization algorithms for
linear conic optimization, and applications in polynomial optimization. This is
a presentation of the material of several recent research articles; we aim here
at clarifying the ideas, presenting them in a general framework, and pointing
out important techniques.
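Such a conic projection can be sketched with Dykstra's alternating-projection scheme. The example below is an assumed setup, not taken from the chapter: it projects onto the intersection of the nonnegative orthant (a convex cone) with the hyperplane sum(x) = 1:

```python
import numpy as np

def project_cone_affine(y, iters=500):
    """Dykstra's alternating projections onto the intersection of the
    nonnegative orthant and the hyperplane sum(x) = 1. Illustrative
    sketch: the result is the projection of y onto the probability
    simplex."""
    x = y.astype(float).copy()
    p = np.zeros_like(x)   # correction term for the cone projection
    q = np.zeros_like(x)   # correction term for the affine projection
    for _ in range(iters):
        u = np.maximum(x + p, 0.0)                      # project onto the cone
        p = x + p - u
        v = u + q
        x = v - (np.sum(v) - 1.0) / len(x)              # project onto the hyperplane
        q = v - x
    return x
```

Unlike plain alternating projections, the correction terms p and q make the limit the exact Euclidean projection onto the intersection, e.g. project_cone_affine([0.5, 0.5, -1.0]) returns [0.5, 0.5, 0.0].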
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one combines the global efficiency
estimates of the corresponding first-order methods with fast asymptotic
convergence rates. Furthermore, they are computationally attractive since each
Newton iteration requires the approximate solution of a linear system of
usually small dimension.
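To make the forward-backward envelope concrete, here is a sketch of evaluating it for the composite objective 0.5*||Ax - b||^2 + ||x||_1 (an illustrative instance I have chosen, not the paper's notation). The FBE is the value of the linearized model at the forward-backward (proximal gradient) step, and it never exceeds the original objective:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbe_value(x, A, b, gamma):
    """Forward-backward envelope of  F(x) = 0.5*||Ax - b||^2 + ||x||_1:
    FBE(x) = f(x) + <grad f(x), z - x> + ||z||_1 + ||z - x||^2 / (2*gamma),
    where z is the forward-backward step from x and gamma < 1/L."""
    r = A @ x - b
    f = 0.5 * (r @ r)
    g = A.T @ r                                  # gradient of the smooth part
    z = soft_threshold(x - gamma * g, gamma)     # forward-backward step
    return f + g @ (z - x) + np.abs(z).sum() + (z - x) @ (z - x) / (2.0 * gamma)
```

At a fixed point of the forward-backward step the envelope agrees with F, which is what makes minimizing the smooth FBE equivalent to solving the original nonsmooth problem.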
Sufficient optimality conditions for the Moreau-Yosida type regularization concept applied to semilinear elliptic optimal control problems with pointwise state constraints
We develop sufficient optimality conditions for a Moreau-Yosida regularized optimal control problem governed by a semilinear elliptic PDE with pointwise constraints on the state and the control. We make use of the equivalence of a setting of Moreau-Yosida regularization to a special setting of the virtual control concept, for which standard second order sufficient conditions have been shown. Moreover, we present a numerical example, solving a Moreau-Yosida regularized model problem with an SQP method.
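The Moreau-Yosida regularization concept for a pointwise state constraint can be sketched in a finite-dimensional toy setting (my own simplification of the PDE-constrained problem): the indicator of the constraint y <= psi is replaced by the smooth penalty (gamma/2)*||max(0, y - psi)||^2, and the penalized objective is minimized by gradient descent:

```python
import numpy as np

def penalized_objective(y, y_target, psi, gamma):
    """Tracking objective plus Moreau-Yosida type penalization of the
    pointwise constraint y <= psi (discretized, illustrative setting)."""
    track = 0.5 * np.sum((y - y_target) ** 2)
    viol = np.maximum(y - psi, 0.0)
    return track + 0.5 * gamma * np.sum(viol ** 2)

def solve_penalized(y_target, psi, gamma, steps=2000):
    # gradient descent on the smooth penalized objective;
    # the gradient is Lipschitz with constant 1 + gamma
    lr = 1.0 / (1.0 + gamma)
    y = np.zeros_like(y_target)
    for _ in range(steps):
        grad = (y - y_target) + gamma * np.maximum(y - psi, 0.0)
        y = y - lr * grad
    return y
```

Componentwise, the minimizer is y_target where the constraint is inactive and (y_target + gamma*psi)/(1 + gamma) where it is active, so the constraint violation vanishes as gamma grows, mirroring the consistency results discussed above.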