A recursively feasible and convergent Sequential Convex Programming procedure to solve non-convex problems with linear equality constraints
A computationally efficient method to solve non-convex programming problems
with linear equality constraints is presented. The proposed method is based on
a recursively feasible and descending sequential convex programming procedure
proven to converge to a locally optimal solution. Assuming that the first
convex problem in the sequence is feasible, these properties are obtained by
convexifying the non-convex cost and inequality constraints with inner-convex
approximations. Additionally, a computationally efficient method is introduced
to obtain inner-convex approximations based on Taylor series expansions. These
Taylor-based inner-convex approximations provide the overall algorithm with a
quadratic rate of convergence. The proposed method is capable of solving
problems of practical interest in real time. This is illustrated with a
numerical simulation of an aerial vehicle trajectory optimization problem on
commercial off-the-shelf embedded computers.
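The core idea of the abstract above, replacing nonconvex terms by inner-convex (majorizing) Taylor approximations and re-solving, can be sketched on a toy problem. The problem, the functions, and the iteration count below are illustrative assumptions, not the paper's aerial-vehicle application; the linear equality constraint is eliminated by substitution so each convex subproblem is one-dimensional.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy nonconvex problem (hypothetical, not from the paper): minimize
#   f(x1, x2) = x1**4 - x1**2 + x2**2   subject to   x1 + x2 = 1.
# The linear equality lets us substitute x2 = 1 - x1, leaving the
# nonconvex 1-D function  phi(x) = x**4 - x**2 + (1 - x)**2.
# The only nonconvex (concave) term is -x**2; its first-order Taylor
# expansion at x_k is an upper bound, so replacing it gives an
# inner-convex surrogate whose minimizer decreases phi at every step.

def phi(x):
    return x**4 - x**2 + (1.0 - x)**2

def surrogate(x, xk):
    # Taylor-linearize the concave term -x**2 around xk (upper bound).
    concave_lin = -xk**2 - 2.0 * xk * (x - xk)
    return x**4 + concave_lin + (1.0 - x)**2

xk = 1.0                      # feasible starting point (x2 = 1 - x1)
for _ in range(40):           # each subproblem is strictly convex
    xk = minimize_scalar(lambda x: surrogate(x, xk)).x

# Stationarity of phi: phi'(x) = 4x**3 - 2x - 2(1 - x) = 4x**3 - 2,
# so the method should approach x* = (1/2)**(1/3) ~ 0.7937.
print(xk, abs(4.0 * xk**3 - 2.0))
```

Every surrogate upper-bounds the true objective and touches it at the current iterate, so the iterates are recursively feasible and monotonically descending, mirroring the properties claimed in the abstract.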
A Partially Feasible Distributed SQO Method for Two-block General Linearly Constrained Smooth Optimization
This paper discusses a class of two-block smooth large-scale optimization
problems with both linear equality and linear inequality constraints, which
have a wide range of applications, such as economic power dispatch, data
mining, and signal processing. Our goal is to develop a novel partially
feasible distributed (PFD) sequential quadratic optimization (SQO) method
(the PFD-SQO method) for this class of problems. The design of the method is
based on the ideas of the SQO method, an augmented Lagrangian Jacobian
splitting scheme, and a feasible direction method, which decomposes the
quadratic optimization (QO) subproblem into two small-scale QOs that can be
solved independently and in parallel. A novel disturbance contraction term that can be
suitably adjusted is introduced into the inequality constraints so that the
feasible step size along the search direction can be increased to 1. The new
iteration points are generated by the Armijo line search and the partially
augmented Lagrangian function that only contains equality constraints as the
merit function. The iteration points always satisfy all the inequality
constraints of the problem. The theoretical properties of the proposed
PFD-SQO method, such as global convergence, iteration complexity, and
superlinear and quadratic rates of convergence, are analyzed under
appropriate assumptions. Finally, the numerical effectiveness of the method
is tested on a class of academic examples and an economic power dispatch
problem, which shows that the proposed method is quite promising.
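The Armijo line search on a merit function containing only the equality constraints, mentioned in the abstract above, can be sketched in isolation. The objective, constraint data, multiplier estimate, and penalty weight below are toy assumptions (this is plain steepest descent on the merit function, not the paper's two-block PFD-SQO scheme).

```python
import numpy as np

# Hypothetical toy data: a quadratic objective and one linear
# equality constraint c(x) = A @ x - b.
target = np.array([2.0, -1.0, 0.5])

def f(x):                      # smooth objective
    return 0.5 * np.sum((x - target)**2)

def grad_f(x):
    return x - target

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
mu, rho = np.zeros(1), 10.0    # multiplier estimate and penalty weight

def merit(x):
    # Partially augmented Lagrangian: only equality constraints appear.
    c = A @ x - b
    return f(x) + mu @ c + 0.5 * rho * (c @ c)

def merit_grad(x):
    c = A @ x - b
    return grad_f(x) + A.T @ (mu + rho * c)

def armijo_step(x, d, sigma=1e-4, beta=0.5, t=1.0):
    """Backtrack until merit(x + t d) <= merit(x) + sigma t grad.d."""
    g_dot_d = merit_grad(x) @ d
    while merit(x + t * d) > merit(x) + sigma * t * g_dot_d and t > 1e-12:
        t *= beta
    return t

x = np.zeros(3)
for _ in range(500):
    d = -merit_grad(x)         # steepest-descent direction on the merit
    x = x + armijo_step(x, d) * d
print(x, np.abs(A @ x - b))
```

For a fixed multiplier estimate the minimizer of this merit function is only approximately feasible; in a full method the multipliers (and possibly the penalty weight) would be updated between outer iterations.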
An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow
A novel trust region method for solving linearly constrained nonlinear
programs is presented. The proposed technique is amenable to a distributed
implementation, as its salient ingredient is an alternating projected gradient
sweep in place of the Cauchy point computation. It is proven that the algorithm
yields a sequence that globally converges to a critical point. As a result of
some changes to the standard trust region method, namely a proximal
regularisation of the trust region subproblem, it is shown that the local
convergence rate is linear with an arbitrarily small ratio. Thus, convergence
is locally almost superlinear, under standard regularity assumptions. The
proposed method is successfully applied to compute local solutions to
alternating current optimal power flow problems in transmission and
distribution networks. Moreover, the new mechanism for computing a Cauchy
point compares favourably against the standard projected search in terms of
its activity-detection properties.
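The projected-gradient Cauchy point and its activity detection, referenced in the abstract above, can be sketched with the generic textbook construction (the model data, box bounds, and radius below are toy assumptions, not the paper's alternating sweep).

```python
import numpy as np

# Hypothetical quadratic model m(p) = g.p + 0.5 p.H.p with simple
# bounds and an infinity-norm trust region of radius delta.
H = np.array([[4.0, 1.0], [1.0, 3.0]])   # model Hessian (SPD here)
g = np.array([-8.0, -6.0])               # model gradient at the iterate
lo, hi = np.array([0.0, 0.0]), np.array([1.5, 0.4])  # simple bounds
delta = 1.0                              # trust-region radius

def model(p):
    return g @ p + 0.5 * p @ H @ p

def project(x):
    # Clip to the trust region, then to the box (valid here because
    # the two boxes overlap componentwise).
    return np.clip(np.clip(x, -delta, delta), lo, hi)

# Backtracking projected-gradient search along t -> P(-t g): accept
# the first step in a geometric grid with sufficient model decrease.
t = 1.0
cauchy = project(-t * g)
while model(cauchy) > 1e-4 * (g @ cauchy) and t > 1e-12:
    t *= 0.5
    cauchy = project(-t * g)

active = (cauchy <= lo) | (cauchy >= hi)  # activity detection
print(cauchy, active)
```

The bounds that the Cauchy point hits (here the upper bound on the second variable) are taken as the active-set estimate; the trust-region subproblem is then solved over the remaining free variables.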
A second derivative SQP method: local convergence
In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm.

Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
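The exactness property behind the penalty-parameter strategy described above, namely that minimizing the ℓ1-penalty function recovers the constrained solution once the parameter is large enough, can be illustrated in one dimension. The problem and the parameter schedule below are toy assumptions, not the paper's update rule.

```python
from scipy.optimize import minimize_scalar

# Hypothetical 1-D illustration: exact l1 penalty
#   P(x; rho) = f(x) + rho * |c(x)|
# for f(x) = (x - 2)**2 subject to c(x) = x = 0. Exactness: once rho
# exceeds the optimal multiplier (here |f'(0)| = 4), the unconstrained
# minimizer of P coincides with the constrained solution x = 0.

def penalty(x, rho):
    return (x - 2.0)**2 + rho * abs(x)

x, rho = 2.0, 1.0
for _ in range(6):                    # rho = 1, 2, 4, 8, 16, 32
    res = minimize_scalar(lambda x: penalty(x, rho),
                          bounds=(-5.0, 5.0), method="bounded")
    x = res.x
    rho *= 2.0

# For rho < 4 the minimizer is x = 2 - rho/2 > 0 (still infeasible);
# for rho >= 4 it sits at the kink x = 0, the constrained solution.
print(x)
```

Because the minimizer sits at the nondifferentiable kink, a practical method needs a mechanism such as the nonmonotone variant mentioned above to retain fast local convergence near the solution.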
On the Local and Global Convergence of a Reduced Quasi-Newton Method
In optimization in R^n with m nonlinear equality constraints, we study the local convergence of reduced quasi-Newton methods, in which the updated matrix is of order n-m. In particular, we give necessary and sufficient conditions for q-superlinear convergence (in one step). We introduce a device to globalize the local algorithm, which consists of determining a step along an arc in order to decrease an exact penalty function. We give conditions ensuring that the step is asymptotically equal to one.
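The reduced idea above, a quasi-Newton matrix of order n-m rather than n, is easiest to see for a linear equality constraint, where the feasible set can be parametrized exactly by a null-space basis. The data below are toy assumptions, and BFGS stands in for the paper's specific reduced update.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

# Hypothetical linear-equality simplification: with A @ x = b,
# parametrize the feasible set as x = x_p + Z @ y, where Z spans the
# null space of A, and run a quasi-Newton method (BFGS) on the reduced
# variable y of dimension n - m, so the updated matrix has order n - m.

A = np.array([[1.0, 1.0, 1.0]])              # m = 1 constraint, n = 3
b = np.array([3.0])
x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution
Z = null_space(A)                            # 3 x 2 basis, n - m = 2

def f(x):                                    # smooth objective
    return (x[0] - 1.0)**2 + 2.0 * (x[1] - 2.0)**2 + 3.0 * x[2]**2

res = minimize(lambda y: f(x_p + Z @ y), np.zeros(Z.shape[1]),
               method="BFGS")
x_star = x_p + Z @ res.x                     # expect x_star ~ (1, 2, 0)
print(x_star, A @ x_star - b)
```

Every iterate is feasible by construction; for nonlinear constraints, as in the abstract, the basis varies with the iterate and a restoration or arc-search device, like the one described above, is needed to retain feasibility and global convergence.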
An interior-point method for MPECs based on strictly feasible relaxations
An interior-point method for solving mathematical programs with equilibrium constraints (MPECs) is proposed. At each iteration of the algorithm, a single primal-dual step is computed from each subproblem of a sequence. Each subproblem is defined as a relaxation of the MPEC with a nonempty strictly feasible region. In contrast to previous approaches, the proposed relaxation scheme preserves the nonempty strict feasibility of each subproblem even in the limit. Local and superlinear convergence of the algorithm is proved even with a less restrictive strict complementarity condition than the standard one. Moreover, mechanisms for inducing global convergence in practice are proposed. Numerical results on the MacMPEC test problem set demonstrate the fast local convergence properties of the algorithm.
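The relaxation idea described above, replacing the degenerate complementarity set with subproblems that keep a nonempty strictly feasible region, can be sketched with a generic scheme: relax x1*x2 = 0, x >= 0 to x1*x2 <= t with t > 0 and drive t toward zero while warm-starting. The objective, the t-schedule, and the use of SLSQP below are illustrative assumptions, not the paper's interior-point method.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy MPEC: minimize a smooth objective subject to the
# complementarity condition x1 * x2 = 0 with x >= 0, relaxed to
# x1 * x2 <= t so each subproblem has a strictly feasible interior.

def f(x):
    return (x[0] - 1.0)**2 + (x[1] - 0.5)**2

x = np.array([0.9, 0.2])       # strictly feasible starting point
for t in [1.0, 0.1, 0.01, 1e-3, 1e-4]:
    cons = [{"type": "ineq", "fun": lambda x, t=t: t - x[0] * x[1]}]
    res = minimize(f, x, method="SLSQP",
                   bounds=[(0, None), (0, None)], constraints=cons)
    x = res.x                  # warm-start the next relaxed subproblem

# As t -> 0 the iterates approach a solution of the MPEC: here a point
# with x1 ~ 1 and x2 ~ 0, one branch of the complementarity set.
print(x, x[0] * x[1])
```

A naive scheme like this loses strict feasibility as t reaches zero; the contribution highlighted in the abstract is precisely a relaxation that preserves nonempty strict feasibility even in the limit.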