A second derivative SQP method: local convergence
In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps intended to improve the efficiency of the algorithm.

Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.
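The updating strategy above rests on the exactness property of the ℓ1-penalty function: the constrained minimizer is a minimizer of the penalty function only once the penalty parameter exceeds the magnitude of the optimal multiplier, which is why the parameter is driven through an increasing sequence. A minimal sketch on a toy equality-constrained problem (the objective, constraint, and parameter values are my own illustrative choices, not taken from the paper):

```python
# Hedged sketch of the exact l1-penalty function
#   phi(x; mu) = f(x) + mu * ||c(x)||_1
# for min f(x) s.t. c(x) = 0. The toy problem below is illustrative only;
# it shows why mu must eventually be large enough, which motivates
# approximately minimizing phi over an increasing sequence of penalty values.

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2   # toy objective

def c(x):
    return [x[0] + x[1] - 3.0]                     # toy equality constraint

def l1_penalty(x, mu):
    return f(x) + mu * sum(abs(ci) for ci in c(x))

x_star = (0.75, 2.25)   # constrained minimizer (optimal multiplier = -0.5)
x_unc  = (1.0, 2.5)     # unconstrained minimizer of f (infeasible)

# For mu > 0.5 the penalty is exact: x_star beats the infeasible point.
print(l1_penalty(x_star, 1.0))   # 0.125
print(l1_penalty(x_unc, 1.0))    # 0.5   -> x_star wins
# For mu too small, the infeasible point has the lower penalty value:
print(l1_penalty(x_unc, 0.2))    # 0.1 < 0.125 -> exactness fails
```

Since a suitable threshold for the penalty parameter is not known in advance, increasing it along a sequence of approximate minimizations is a natural way to reach the exact regime.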
Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and therefore achieves asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
A Partially Feasible Distributed SQO Method for Two-block General Linearly Constrained Smooth Optimization
This paper discusses a class of two-block smooth large-scale optimization problems with both linear equality and linear inequality constraints, which have a wide range of applications such as economic power dispatch, data mining, and signal processing. Our goal is to develop a novel partially feasible distributed (PFD) sequential quadratic optimization (SQO) method (PFD-SQO method) for this class of problems. The design of the method is based on the ideas of the SQO method, an augmented Lagrangian Jacobian splitting scheme, and a feasible direction method; it decomposes the quadratic optimization (QO) subproblem into two small-scale QOs that can be solved independently and in parallel. A novel disturbance contraction term that can be suitably adjusted is introduced into the inequality constraints so that the feasible step size along the search direction can be increased to 1. The new iteration points are generated by an Armijo line search using the partially augmented Lagrangian function, which contains only the equality constraints, as the merit function. The iteration points always satisfy all the inequality constraints of the problem. The theoretical properties of the proposed PFD-SQO method, such as global convergence, iteration complexity, and superlinear and quadratic rates of convergence, are analyzed under appropriate assumptions. Finally, the numerical effectiveness of the method is tested on a class of academic examples and an economic power dispatch problem, which shows that the proposed method is quite promising.
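The Armijo acceptance rule referred to above can be sketched generically. The merit function, iterate, and direction below are illustrative stand-ins; the paper's actual merit function is the partially augmented Lagrangian, which also carries equality-constraint and multiplier terms.

```python
# Hedged sketch of Armijo backtracking: shrink the trial step t by the
# factor beta until the sufficient-decrease condition
#   merit(x + t*d) <= merit(x) + sigma * t * <grad merit(x), d>
# holds. sigma, beta, and the toy merit function are illustrative choices.

def armijo(merit, grad_dot_d, x, d, sigma=1e-4, beta=0.5, t0=1.0):
    """Return the first step t in {t0, beta*t0, ...} passing the Armijo test."""
    t = t0
    m0 = merit(x)
    while merit([xi + t * di for xi, di in zip(x, d)]) > m0 + sigma * t * grad_dot_d:
        t *= beta
    return t

# Toy usage: merit(x) = x^2 on R, with the steepest-descent direction d = -2x.
merit = lambda x: x[0] ** 2
x, d = [1.0], [-2.0]
grad_dot_d = 2 * x[0] * d[0]      # = -4 < 0, so d is a descent direction
t = armijo(merit, grad_dot_d, x, d)
```

In the PFD setting described above, the same acceptance test would be applied with the partially augmented Lagrangian as `merit`, while feasibility of the inequality constraints is maintained separately by the disturbance contraction term.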
On the constant positive linear dependence condition and its application to SQP methods
2000-2001 > Academic research: refereed > Publication in refereed journal. Version of Record published.
Kontinuierliche Optimierung und Industrieanwendungen (Continuous Optimization and Industrial Applications)
[no abstract available]
- …