
    On limited memory SQP methods for large scale constrained nonlinear least squares problems

    This paper describes limited-memory Sequential Quadratic Programming (LSQP) methods for large-scale equality-constrained nonlinear least squares problems. By introducing additional variables, the original problem is transformed into a general equality-constrained nonlinear programming problem with a simple objective. This is then solved by a limited-memory variant of SQP methods, which overcomes one of the major drawbacks of the traditional SQP method, the need to store a large matrix, and combines the best performance of the Gauss-Newton and quasi-Newton methods through a suitable choice of the Lagrangian Hessian approximation. Our numerical tests indicate that the new method is faster than the reduced Hessian (RSQP) method and is better able to use additional storage to accelerate convergence. For some problems it approaches the performance of the full Hessian SQP (FSQP) method of Schittkowski adapted for least squares problems; his method, however, cannot cope with problems with very many observations.
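    The transformation described above can be illustrated on a toy problem: introduce residual variables y = r(x), so the objective becomes the simple quadratic 0.5*||y||^2 and all nonlinearity moves into the equality constraints. The sketch below, with made-up problem data, hands the transformed problem to SciPy's generic SLSQP solver rather than the paper's limited-memory method.

```python
import numpy as np
from scipy.optimize import minimize

m, n = 3, 2  # number of residuals, number of original variables

def r(x):  # nonlinear least-squares residuals (illustrative)
    return np.array([x[0] - 1.0, x[1] - 2.0, x[0] * x[1] - 2.0])

def c(x):  # equality constraint c(x) = 0 (illustrative)
    return np.array([x[0] + x[1] - 3.0])

# Original problem: min 0.5*||r(x)||^2  s.t.  c(x) = 0.
def obj(z):      # z = (x, y); the objective is now the simple 0.5*||y||^2
    y = z[n:]
    return 0.5 * y @ y

def eq_con(z):   # all nonlinearity moves into r(x) - y = 0 and c(x) = 0
    x, y = z[:n], z[n:]
    return np.concatenate([r(x) - y, c(x)])

sol = minimize(obj, np.zeros(n + m), method="SLSQP",
               constraints=[{"type": "eq", "fun": eq_con}])
print("x* =", sol.x[:n], " 0.5*||r||^2 =", sol.fun)
```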

    A Method to Guarantee Local Convergence for Sequential Quadratic Programming with Poor Hessian Approximation

    Sequential Quadratic Programming (SQP) is a powerful class of algorithms for solving nonlinear optimization problems. Local convergence of SQP algorithms is guaranteed when the Hessian approximation used in each Quadratic Programming subproblem is close to the true Hessian. However, a good Hessian approximation can be expensive to compute, and low-cost Hessian approximations guarantee local convergence only under assumptions that are not always satisfied in practice. To address this problem, this paper proposes a simple method that guarantees local convergence for SQP with a poor Hessian approximation. The effectiveness of the proposed algorithm is demonstrated in a numerical example.
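    The failure mode can be made concrete with a textbook equality-constrained SQP loop on a toy problem, not the paper's method: with the deliberately crude approximation B = I, full steps overshoot and cycle, while a standard safeguard, here a backtracking line search on an l1 merit function, restores convergence. All problem data below are illustrative.

```python
import numpy as np

def f(x):  return x[0]**2 + x[1]**2              # toy objective
def g(x):  return 2.0 * x                        # its gradient
def c(x):  return np.array([x[0] + x[1] - 1.0])  # equality constraint
A = np.array([[1.0, 1.0]])                       # constant Jacobian of c

rho = 10.0                                       # l1-merit penalty weight
def merit(x):  return f(x) + rho * np.abs(c(x)).sum()

x, B = np.array([2.0, -0.5]), np.eye(2)          # B = I: poor Hessian approx.
for k in range(50):
    # QP subproblem KKT system:  [B  A^T; A  0] [d; lam] = [-g; -c]
    K = np.block([[B, A.T], [A, np.zeros((1, 1))]])
    d = np.linalg.solve(K, np.concatenate([-g(x), -c(x)]))[:2]
    if np.linalg.norm(d) < 1e-10:
        break
    # Without this line search the B = I iterates cycle between two points.
    D = g(x) @ d - rho * np.abs(c(x)).sum()      # merit directional derivative
    alpha = 1.0
    while merit(x + alpha * d) > merit(x) + 1e-4 * alpha * D:
        alpha *= 0.5
    x = x + alpha * d
print("x* ≈", x)                                 # exact solution: (0.5, 0.5)
```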

    A second derivative SQP method: theoretical issues

    Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second-derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
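    One standard device for obtaining convex QP subproblems from an exact, possibly indefinite Hessian is a spectral shift. The minimal sketch below shows only that generic device; it is not the descent-constraint strategy proposed in the paper.

```python
import numpy as np

def convexify(H, delta=1e-6):
    """Return a positive-definite modification of a possibly indefinite
    Lagrangian Hessian H by shifting its spectrum so that every eigenvalue
    is at least delta; the resulting QP subproblem is convex."""
    H = 0.5 * (H + H.T)                  # symmetrize
    lam_min = np.linalg.eigvalsh(H)[0]   # smallest eigenvalue
    if lam_min < delta:
        H = H + (delta - lam_min) * np.eye(H.shape[0])
    return H

H = np.array([[1.0, 2.0], [2.0, -3.0]])  # indefinite exact Hessian
print(np.linalg.eigvalsh(convexify(H)))  # all eigenvalues >= delta
```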

    Combining Homotopy Methods and Numerical Optimal Control to Solve Motion Planning Problems

    This paper presents a systematic approach for computing local solutions to motion planning problems in non-convex environments using numerical optimal control techniques. It extends the range of use of state-of-the-art numerical optimal control tools to problem classes where these tools have previously not been applicable; today such problems are typically solved using motion planners based on randomized or graph search. The general principle is to define a homotopy that perturbs, or preferably relaxes, the original problem to an easily solved one. By combining a Sequential Quadratic Programming (SQP) method with a homotopy approach that gradually transforms the problem from the relaxed version back to the original one, practically relevant locally optimal solutions to the motion planning problem can be computed. The approach is demonstrated on motion planning problems in challenging 2D and 3D environments, where the presented method significantly outperforms a state-of-the-art open-source optimizing sampling-based planner commonly used as a benchmark.
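    A minimal sketch of the homotopy principle, under assumptions not taken from the paper: a toy 2D path-planning problem in which a disk obstacle is grown from nothing to full size by a homotopy parameter tau, with an off-the-shelf SQP solver (SciPy's SLSQP) warm-started at each stage. The scene, discretization, and schedule are all made up, and the clearance constraint is imposed at waypoints only.

```python
import numpy as np
from scipy.optimize import minimize

N = 20                                     # free waypoints between endpoints
start, goal = np.array([0.0, 0.0]), np.array([4.0, 0.0])
obs_c, obs_R = np.array([2.0, 0.0]), 1.0   # disk obstacle (made-up scene)

def length2(z):                            # smooth path-length surrogate
    p = np.vstack([start, z.reshape(N, 2), goal])
    return np.sum(np.diff(p, axis=0) ** 2)

def clearance(z, tau):                     # >= 0 : outside the scaled obstacle
    p = z.reshape(N, 2)
    return np.sum((p - obs_c) ** 2, axis=1) - (tau * obs_R) ** 2

# Homotopy: start from the relaxed problem (tau = 0, obstacle shrunk away,
# straight line optimal) and warm-start the SQP solve as tau grows to 1.
z = np.linspace(start, goal, N + 2)[1:-1].ravel()  # straight-line guess
z += 1e-3                                          # break the symmetric tie
for tau in np.linspace(0.0, 1.0, 6):
    res = minimize(length2, z, method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": clearance, "args": (tau,)}])
    z = res.x
print("final path length^2:", res.fun)
```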

    A second derivative SQP method: local convergence

    In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, they must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and therefore achieves asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
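    The penalty-parameter strategy can be sketched generically: approximately minimize the exact ℓ1-penalty function over an increasing sequence of penalty parameters, stopping once the minimizer is (nearly) feasible, since the penalty is exact for a large enough parameter. The toy problem and the derivative-free inner solver below are illustrative stand-ins, not the paper's SQP machinery.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2  # toy objective
def c(x):  return np.array([x[0] + x[1] - 1.0])          # equality constraint

def phi(x, rho):               # exact l1-penalty (merit) function
    return f(x) + rho * np.abs(c(x)).sum()

# Nelder-Mead is used only because phi is nonsmooth at feasibility; each
# outer iteration warm-starts from the previous approximate minimizer.
x, rho = np.zeros(2), 1.0
for _ in range(10):
    x = minimize(phi, x, args=(rho,), method="Nelder-Mead",
                 options={"xatol": 1e-10, "fatol": 1e-10}).x
    if np.abs(c(x)).sum() < 1e-6:   # penalty is exact: feasible => done
        break
    rho *= 10.0                     # violation too large: increase penalty
print("x* ≈", x, "with rho =", rho)  # solution: (0.5, 0.5)
```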

    A sequential semidefinite programming method and an application in passive reduced-order modeling

    We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. In particular, a suitable symmetrization procedure needs to be chosen for the linearization of the complementarity condition. The choice of the symmetrization procedure can be shifted in a very natural way to certain linear semidefinite subproblems, and can thus be reduced to a well-studied problem. The resulting sequential semidefinite programming (SSP) method is a generalization of the well-known SQP method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs and, based on this result, give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of numerical experiments with the SSP method applied to problems in that class.
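    A single SSP subproblem can be sketched with an off-the-shelf conic solver: at the current iterate, the nonlinear matrix constraint A(x) ⪰ 0 is linearized, giving a linear SDP in the step d, and the symmetrization is then handled by the SDP machinery. The toy matrix function, iterate, gradient, and Hessian approximation Bk below are invented for illustration.

```python
import numpy as np
import cvxpy as cp

n = 2
xk = np.array([1.0, 0.5])             # current (infeasible) iterate
grad_f = np.array([1.0, 1.0])         # gradient of the objective at xk
Bk = np.eye(n)                        # Lagrangian Hessian approximation

def A_of_x(x):                        # matrix constraint A(x) >= 0 (PSD)
    return np.array([[x[0],       x[1]],
                     [x[1], 1.0 - x[0]]])

# Partials dA/dx_i (constant here because this toy A is affine in x)
A_grads = [np.array([[1.0, 0.0], [0.0, -1.0]]),
           np.array([[0.0, 1.0], [1.0, 0.0]])]

# Tangent subproblem:
#   min grad_f'd + 0.5 d'Bk d   s.t.   A(xk) + sum_i d_i dA/dx_i >= 0
d = cp.Variable(n)
S = cp.Variable((2, 2), symmetric=True)   # symmetrized linearization
prob = cp.Problem(
    cp.Minimize(grad_f @ d + 0.5 * cp.quad_form(d, Bk)),
    [S == A_of_x(xk) + sum(d[i] * A_grads[i] for i in range(n)), S >> 0])
prob.solve()
print("SSP step d =", d.value)
```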

    Optimal analog wavelet bases construction using hybrid optimization algorithm

    An approach for the construction of optimal analog wavelet bases is presented. First, the definition of an analog wavelet is given. Based on this definition and the least-squares error criterion, a general framework for designing optimal analog wavelet bases is established; the resulting design problem is a difficult nonlinear constrained optimization problem. To solve it, a hybrid algorithm combining chaotic-map particle swarm optimization (CPSO) with local sequential quadratic programming (SQP) is proposed. CPSO is an improved PSO in which the sawtooth chaotic map is used to raise its global search ability. CPSO acts as the global optimizer, producing estimates of the global solution, while SQP performs the local search that refines those estimates. Benefiting from the good global search ability of CPSO and the powerful local search ability of SQP, a high-precision global optimum can be obtained. Finally, a series of optimal analog wavelet bases are constructed using the hybrid algorithm. The proposed method is tested on various wavelet bases, and the improved performance is compared with previous work.
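    The hybrid global/local pattern might look like the sketch below: a minimal PSO whose coefficients are driven by the sawtooth chaotic map, followed by an SQP (SLSQP) polish of the best particle. The multimodal test objective, swarm settings, and the floating-point guard in the map are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def cost(P):                      # Rastrigin test function, evaluated row-wise
    return 10 * P.shape[1] + np.sum(P**2 - 10 * np.cos(2 * np.pi * P), axis=1)

def sawtooth(z):                  # chaotic doubling map z -> 2z mod 1
    z = (2.0 * z) % 1.0
    return np.where(z < 1e-12, 0.37, z)  # guard: floats collapse to 0 otherwise

rng = np.random.default_rng(0)
n_dim, n_part = 2, 30
pos = rng.uniform(-5.0, 5.0, (n_part, n_dim))
vel = np.zeros((n_part, n_dim))
z = rng.uniform(0.01, 0.99, (n_part, n_dim))  # per-particle chaotic states

pbest, pval = pos.copy(), cost(pos)
gbest = pbest[pval.argmin()].copy()

for _ in range(200):
    z = sawtooth(z); r1 = z                   # chaos replaces uniform draws
    z = sawtooth(z); r2 = z
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()].copy()

# Local SQP refinement of the global stage's best estimate.
refined = minimize(lambda x: cost(x[None, :])[0], gbest, method="SLSQP")
print("chaotic-PSO best:", pval.min(), "-> after SLSQP polish:", refined.fun)
```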

    Adjoint-based predictor-corrector sequential convex programming for parametric nonlinear optimization

    This paper proposes an algorithmic framework, which we call adjoint-based predictor-corrector sequential convex programming, for solving parametric optimization problems. After presenting the algorithm, we prove a contraction estimate that guarantees its tracking performance. Two variants are investigated: the first solves nonlinear programming problems, while the second treats online parametric nonlinear programming problems. The local convergence of both variants is proved. The performance of the algorithms is examined on a large-scale benchmark problem originating from nonlinear model predictive control of a hydro power plant.
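    The tracking idea can be illustrated on a toy parametric problem: as the parameter p moves along a path, a single warm-started Newton (corrector) step on the KKT system is taken at each new p instead of solving to full accuracy, and the KKT residual stays small. The problem, path, and step rule below are illustrations, not the paper's adjoint-based convex-subproblem algorithm.

```python
import numpy as np

def kkt_res(x, lam, p):
    # Toy parametric problem: min (x0 - p)^2 + x1^2  s.t.  x0^2 + x1 - 1 = 0
    return np.array([2 * (x[0] - p) + 2 * lam * x[0],   # grad_x Lagrangian
                     2 * x[1] + lam,
                     x[0]**2 + x[1] - 1.0])             # constraint

def kkt_jac(x, lam):
    return np.array([[2 + 2 * lam, 0.0, 2 * x[0]],
                     [0.0,         2.0, 1.0],
                     [2 * x[0],    1.0, 0.0]])

x, lam = np.array([1.0, 0.0]), 0.0           # exact solution at p = 1
for i, p in enumerate(np.linspace(1.0, 2.0, 51)):
    step = np.linalg.solve(kkt_jac(x, lam), -kkt_res(x, lam, p))
    x, lam = x + step[:2], lam + step[2]     # single corrector step per p
    if i % 10 == 0:
        print(f"p={p:.2f}  x=({x[0]:+.4f}, {x[1]:+.4f})  "
              f"|KKT|={np.linalg.norm(kkt_res(x, lam, p)):.1e}")
```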