4,585 research outputs found

    A Method to Guarantee Local Convergence for Sequential Quadratic Programming with Poor Hessian Approximation

    Sequential Quadratic Programming (SQP) is a powerful class of algorithms for solving nonlinear optimization problems. Local convergence of SQP algorithms is guaranteed when the Hessian approximation used in each Quadratic Programming subproblem is close to the true Hessian. However, a good Hessian approximation can be expensive to compute. Low-cost Hessian approximations guarantee local convergence only under assumptions that are not always satisfied in practice. To address this problem, this paper proposes a simple method to guarantee local convergence for SQP with a poor Hessian approximation. The effectiveness of the proposed algorithm is demonstrated in a numerical example.
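
    As a rough sketch of the setting (a generic equality-constrained SQP loop, not the paper's safeguard), each QP subproblem can be solved through its KKT system while B stands in for a crude Hessian approximation. The toy problem and all names below are illustrative; Python with NumPy is assumed:

        import numpy as np

        def sqp_equality(f_grad, c, c_jac, x0, B, tol=1e-8, max_iter=50):
            """Basic SQP for min f(x) s.t. c(x) = 0 with a fixed, possibly
            poor, Hessian approximation B (a sketch, not the paper's method)."""
            x = np.asarray(x0, dtype=float)
            n = x.size
            for _ in range(max_iter):
                g, cv, A = f_grad(x), c(x), c_jac(x)
                m = cv.size
                # KKT system of the equality-constrained QP subproblem
                K = np.block([[B, A.T], [A, np.zeros((m, m))]])
                step = np.linalg.solve(K, -np.concatenate([g, cv]))
                d = step[:n]
                x = x + d
                if np.linalg.norm(d) < tol and np.linalg.norm(cv) < tol:
                    break
            return x

        # Toy problem: min x1^2 + 2*x2^2  s.t.  x1 + x2 = 1 (optimum at (2/3, 1/3));
        # B = 3*I is a deliberately crude stand-in for the true Hessian diag(2, 4).
        f_grad = lambda x: np.array([2 * x[0], 4 * x[1]])
        c = lambda x: np.array([x[0] + x[1] - 1.0])
        c_jac = lambda x: np.array([[1.0, 1.0]])
        print(sqp_equality(f_grad, c, c_jac, [5.0, -3.0], B=3.0 * np.eye(2)))

    With an accurate Hessian this loop converges quadratically; with a poor B it may converge slowly or not at all, which is the failure mode the paper's method is designed to repair.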

    A sequential semidefinite programming method and an application in passive reduced-order modeling

    We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. In particular, a suitable symmetrization procedure needs to be chosen for the linearization of the complementarity condition. The choice of the symmetrization procedure can be shifted in a very natural way to certain linear semidefinite subproblems, and can thus be reduced to a well-studied problem. The resulting sequential semidefinite programming (SSP) method is a generalization of the well-known SQP method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and, based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.
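
    A minimal sketch of one such SSP step, assuming CVXPY as the SDP subproblem solver and a toy matrix constraint; the paper's symmetrization machinery is not reproduced here, and all names are illustrative:

        import numpy as np
        import cvxpy as cp

        def ssp_step(x, grad_f, B, G, dG_list):
            """One SSP step: linearize the matrix constraint G(x) >> 0 around
            the iterate and solve the resulting SDP subproblem (a sketch only)."""
            n = x.size
            d = cp.Variable(n)
            lin = G(x) + sum(d[i] * dG_list[i] for i in range(n))
            lin = (lin + lin.T) / 2  # keep the expression explicitly symmetric
            obj = cp.Minimize(grad_f(x) @ d + 0.5 * cp.quad_form(d, B))
            cp.Problem(obj, [lin >> 0]).solve()
            return x + d.value

        # Toy problem: min x0 + x1  s.t.  [[x0, 1], [1, x1]] >> 0 (optimum at (1, 1))
        G = lambda x: np.array([[x[0], 1.0], [1.0, x[1]]])
        dG_list = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
        grad_f = lambda x: np.array([1.0, 1.0])
        x = np.array([2.0, 2.0])
        for _ in range(5):
            x = ssp_step(x, grad_f, np.eye(2), G, dG_list)
        print(x)

    Here the matrix constraint happens to be linear in x, so the first subproblem already lands on the optimizer; for genuinely nonlinear G(x) the linearization is refreshed at every iterate.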

    Adjoint-based predictor-corrector sequential convex programming for parametric nonlinear optimization

    This paper proposes an algorithmic framework for solving parametric optimization problems, which we call adjoint-based predictor-corrector sequential convex programming. After presenting the algorithm, we prove a contraction estimate that guarantees the tracking performance of the algorithm. Two variants of this algorithm are investigated. The first can be used to solve nonlinear programming problems, while the second is designed to treat online parametric nonlinear programming problems. The local convergence of these variants is proved. An application to a large-scale benchmark problem that originates from the nonlinear model predictive control of a hydro power plant is used to examine the performance of the algorithms.
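
    The tracking idea can be sketched as a real-time-iteration-style loop: for each new parameter value, take a single corrector step instead of solving the problem to optimality. A minimal unconstrained Newton-step sketch under an assumed toy tracking problem (not the paper's adjoint-based scheme):

        import numpy as np

        def track(grad, hess, x, params):
            """One corrector step per parameter value: the iterate follows the
            moving optimum x*(p) without solving any problem to convergence."""
            trajectory = []
            for p in params:
                x = x - np.linalg.solve(hess(x, p), grad(x, p))
                trajectory.append(x.copy())
            return np.array(trajectory)

        # Toy parametric problem: min_x 0.5*||x - r(p)||^2 with a moving target r(p)
        r = lambda p: np.array([np.cos(p), np.sin(p)])
        grad = lambda x, p: x - r(p)
        hess = lambda x, p: np.eye(2)
        traj = track(grad, hess, np.zeros(2), np.linspace(0.0, np.pi, 20))
        print(traj[-1])  # close to r(pi) = [-1, 0]

    For this quadratic toy the single Newton step lands exactly on the moving optimum; in general the step only tracks it approximately, which is what the paper's contraction estimate quantifies.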

    A recursively feasible and convergent Sequential Convex Programming procedure to solve non-convex problems with linear equality constraints

    A computationally efficient method to solve non-convex programming problems with linear equality constraints is presented. The proposed method is based on a recursively feasible and descending sequential convex programming procedure proven to converge to a locally optimal solution. Assuming that the first convex problem in the sequence is feasible, these properties are obtained by convexifying the non-convex cost and inequality constraints with inner-convex approximations. Additionally, a computationally efficient method is introduced to obtain inner-convex approximations based on Taylor series expansions. These Taylor-based inner-convex approximations give the overall algorithm a quadratic rate of convergence. The proposed method is capable of solving problems of practical interest in real time. This is illustrated with a numerical simulation of an aerial vehicle trajectory optimization problem on commercial off-the-shelf embedded computers.
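
    The inner-convexification idea can be sketched as follows: if a constraint function is concave, its first-order Taylor model over-estimates it, so the linearized feasible set sits inside the true one and every iterate stays feasible. A minimal sketch, assuming CVXPY and a toy keep-out constraint (the paper's Taylor-based construction is more general):

        import numpy as np
        import cvxpy as cp

        # Toy problem: min ||x - c||^2  s.t.  ||x||^2 >= 1 (stay outside the unit disc)
        c = np.array([0.3, 0.0])
        x_k = np.array([0.0, 1.5])  # feasible starting point
        for _ in range(10):
            x = cp.Variable(2)
            # Inner-convex (here linear) approximation of g(x) = 1 - ||x||^2 <= 0:
            # g is concave, so its Taylor model at x_k over-estimates g and the
            # approximate feasible set is contained in the true one.
            lin = 1 + x_k @ x_k - 2 * x_k @ x
            cp.Problem(cp.Minimize(cp.sum_squares(x - c)), [lin <= 0]).solve()
            x_k = x.value
        print(x_k)  # approaches (1, 0), the closest feasible point to c

    Because every linearized feasible set is contained in the true one, each iterate is feasible for the original problem and the cost is non-increasing, mirroring the recursive feasibility and descent properties claimed above.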

    Interior-point solver for convex separable block-angular problems

    Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the structure in the linear algebra. One such approach solves the normal equations using sparse Cholesky factorizations for the block constraints and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work we present an efficient solver based on this algorithm. Some of its features are: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor-corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of the normal equations; the preconditioner may include any number of terms of the power series; and, for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to SML, a structure-conveying modeling language based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances from the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using l1 and l2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to l1 which improves the preconditioner. On the largest instances, of up to 25 million variables and 300,000 constraints, this approach is two to three orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
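
    The power-series preconditioner can be sketched with a truncated Neumann series: if D is an easily invertible part of the system matrix A and the spectral radius of I - D^{-1}A is below one, then A^{-1} is approximated by sum_{k=0}^{K} (I - D^{-1}A)^k D^{-1}. A minimal sketch with SciPy's CG on a toy SPD system standing in for the linking-constraints system (D taken as the diagonal here; the paper's approach uses the block part instead):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def neumann_preconditioner(A, D_inv, terms):
            """Apply sum_{k=0}^{terms} (I - D_inv A)^k D_inv as a preconditioner;
            a valid approximation of A^{-1} when rho(I - D_inv A) < 1."""
            def apply(v):
                y = D_inv @ v  # k = 0 term
                s = y.copy()
                for _ in range(terms):
                    y = y - D_inv @ (A @ y)  # multiply by (I - D_inv A)
                    s += y
                return s
            return LinearOperator(A.shape, matvec=apply, dtype=A.dtype)

        # Toy diagonally dominant SPD system
        rng = np.random.default_rng(0)
        n = 200
        R = 0.005 * rng.standard_normal((n, n))
        A = 2.0 * np.eye(n) + (R + R.T) / 2
        D_inv = np.diag(1.0 / np.diag(A))
        b = rng.standard_normal(n)
        x, info = cg(A, b, M=neumann_preconditioner(A, D_inv, terms=3))
        print(info, np.linalg.norm(A @ x - b))

    Keeping more terms of the series improves the preconditioner at the cost of extra matrix-vector products per application, which is the trade-off the spectral-radius estimate mentioned above helps to manage.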