A second derivative SQP method: local convergence
In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm.

Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk—a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

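As a rough illustration of the increasing-penalty strategy described above (this is a generic sketch with hypothetical names, not the paper's algorithm; the sign convention c(x) ≥ 0 and the feasibility test are assumptions):

```python
import numpy as np

def l1_penalty(f, c, x, rho):
    """Exact l1-penalty phi(x; rho) = f(x) + rho * sum_i max(0, -c_i(x))
    for inequality constraints c(x) >= 0 (one common convention; the
    paper's exact formulation may differ)."""
    return f(x) + rho * np.maximum(0.0, -c(x)).sum()

def increase_penalty_until_feasible(minimize_phi, c, x0, rho0=1.0,
                                    factor=10.0, tol=1e-8, max_updates=20):
    """Approximately minimize phi(.; rho) over an increasing sequence of
    penalty parameters, stopping once the inner minimizer is (near) feasible."""
    x, rho = x0, rho0
    for _ in range(max_updates):
        x = minimize_phi(x, rho)                 # inner approximate minimization
        if np.maximum(0.0, -c(x)).max() <= tol:  # feasible: rho is large enough
            break
        rho *= factor
    return x, rho
```

For a sufficiently large (finite) penalty parameter, minimizers of the exact ℓ1-penalty coincide with solutions of the constrained problem, which is why the loop can stop as soon as the inner minimizer is feasible.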
Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, attains local superlinear convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
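A nonmonotone acceptance test of the kind commonly used to sidestep the Maratos effect can be sketched as follows (hypothetical names; not the paper's exact rule): instead of requiring descent relative to the most recent merit value, the trial point is compared against the worst value over a short history, so a full SQP step near a solution is not rejected by a small merit increase.

```python
def nonmonotone_accept(merit_history, trial_merit, predicted_decrease,
                       eta=1e-4, memory=5):
    """Nonmonotone (watchdog-style) acceptance test: accept the trial point
    if its merit value improves on the maximum of the last `memory`
    accepted merit values by a fraction eta of the predicted decrease."""
    reference = max(merit_history[-memory:])
    return trial_merit <= reference - eta * predicted_decrease
```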
A feasible sequential linear equation method for inequality constrained optimization
Exact penalty method for D-stationary point of nonlinear optimization
We consider the nonlinear optimization problem with a least-norm
measure of constraint violations and introduce the concepts of the D-stationary
point, the DL-stationary point and the DZ-stationary point with the help of
exact penalty function. If the stationary point is feasible, they correspond to
the Fritz-John stationary point, the KKT stationary point and the singular
stationary point, respectively. In order to show the usefulness of the new
stationary points, we propose a new exact penalty sequential quadratic
programming (SQP) method with inner and outer iterations and analyze its global
and local convergence. The proposed method admits convergence to a D-stationary
point and rapid infeasibility detection without driving the penalty parameter
to zero, which substantiates the commentary given in [SIAM J. Optim., 20 (2010),
2281--2299] and can be regarded as a supplement to the theory of rapid
infeasibility detection in nonlinear optimization. Some illustrative examples
and preliminary numerical results demonstrate that the proposed method is
robust and efficient in solving infeasible nonlinear problems and a degenerate
problem without LICQ from the literature.
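For reference, the constraint-violation measure underlying this kind of infeasibility analysis is standard in the infeasibility-detection literature (the notation below is assumed, not necessarily the paper's): for constraints h(x) = 0 and g(x) ≤ 0,

```latex
% Least-norm measure of constraint violations (assumed notation):
\[
  v(x) \;=\; \bigl\| \bigl( h(x),\; \max\{0,\, g(x)\} \bigr) \bigr\| .
\]
```

An infeasible stationary point is then a point with v(x) > 0 that is stationary for the problem of minimizing v, i.e. a point where the constraint violation is locally minimal.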
Improved analysis of algorithms based on supporting halfspaces and quadratic programming for the convex intersection and feasibility problems
This paper improves the algorithms based on supporting halfspaces and
quadratic programming for convex set intersection problems in our earlier paper
in several directions. First, we give conditions so that much smaller quadratic
programs (QPs) and approximate projections arising from partially solving the
QPs are sufficient for multiple-term superlinear convergence for nonsmooth
problems. Second, we identify additional regularity, which we call the second
order supporting hyperplane property (SOSH), that gives multiple-term quadratic
convergence. Third, we show that these fast convergence results carry over for
the convex inequality problem. Fourth, we show that infeasibility can be
detected in finitely many operations. Lastly, we explain how we can use the
dual active set QP algorithm of Goldfarb and Idnani to get useful iterates by
solving the QPs partially, overcoming the problem of solving large QPs in our
algorithms.
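A toy sketch of the supporting-halfspace construction (hypothetical names; the cyclic-projection loop below stands in for the exact QP solve, e.g. by the Goldfarb-Idnani dual active set method, that the paper uses):

```python
import numpy as np

def supporting_halfspace(x, proj):
    """Halfspace {y : a.y <= b} containing the convex set, generated at
    the projection p = proj(x) of x onto the set: a = x - p, b = a.p."""
    p = proj(x)
    a = x - p
    return a, a @ p

def project_onto_halfspaces(x, halfspaces):
    """Approximately project x onto the intersection of the collected
    halfspaces by cyclically projecting onto each violated halfspace.
    (A real implementation would solve this QP exactly or partially.)"""
    y = x.copy()
    for _ in range(200):
        for a, b in halfspaces:
            viol = a @ y - b
            if viol > 0:
                y = y - (viol / (a @ a)) * a  # exact projection onto {a.y <= b}
    return y
```

One step of the overall method projects the current iterate onto each convex set, collects the resulting supporting halfspaces into a small polyhedral outer approximation of the intersection, and projects onto that polyhedron.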
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second combines the global efficiency
estimates of the corresponding first-order methods with fast asymptotic
convergence rates. Furthermore, they are computationally attractive
since each Newton iteration requires the approximate solution of a linear
system that is usually of small dimension.
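For the special case F = f + λ‖·‖1, the FBE is cheap to evaluate once a gradient of f is available. The sketch below (assumed notation, not the paper's code) uses the identity φ_γ(x) = f(x) + ⟨∇f(x), z − x⟩ + ‖z − x‖²/(2γ) + g(z), where z is the forward-backward step from x:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fbe(x, grad_f, f_val, gamma, lam):
    """Forward-backward envelope of F = f + lam * ||.||_1 at x:
    phi_gamma(x) = f(x) + <grad f(x), z - x> + ||z - x||^2 / (2*gamma) + g(z),
    with z = prox_{gamma*g}(x - gamma*grad f(x)) the forward-backward step."""
    z = soft_threshold(x - gamma * grad_f, gamma * lam)
    d = z - x
    return f_val + grad_f @ d + (d @ d) / (2.0 * gamma) + lam * np.abs(z).sum()
```

A useful sanity check is that φ_γ(x) ≤ F(x) everywhere, with equality exactly at fixed points of the forward-backward map, which is what makes minimizing the smooth envelope equivalent to minimizing the original nonsmooth objective.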