5 research outputs found

    A Method to Guarantee Local Convergence for Sequential Quadratic Programming with Poor Hessian Approximation

    Sequential Quadratic Programming (SQP) is a powerful class of algorithms for solving nonlinear optimization problems. Local convergence of SQP algorithms is guaranteed when the Hessian approximation used in each Quadratic Programming subproblem is close to the true Hessian. However, a good Hessian approximation can be expensive to compute. Low-cost Hessian approximations only guarantee local convergence under some assumptions, which are not always satisfied in practice. To address this problem, this paper proposes a simple method to guarantee local convergence for SQP with a poor Hessian approximation. The effectiveness of the proposed algorithm is demonstrated in a numerical example.
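    To make the setting concrete, here is a minimal sketch of a textbook SQP iteration for an equality-constrained problem, using a deliberately crude Hessian approximation B = I in the QP subproblem. This is only an illustration of the baseline the abstract refers to, not the paper's proposed convergence-guaranteeing method; the example problem, function names, and B = I choice are my own assumptions.

    ```python
    import numpy as np

    def sqp_identity_hessian(f_grad, c, c_jac, x0, tol=1e-8, max_iter=100):
        """Basic SQP for min f(x) s.t. c(x) = 0, with B = I as a (poor)
        Hessian approximation. Each iteration solves the KKT system of the
        QP subproblem:  [B  A^T; A  0] [d; lam] = [-g; -c]."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = f_grad(x)                       # gradient of the objective
            cx = np.atleast_1d(c(x))            # constraint residual
            A = np.atleast_2d(c_jac(x))         # constraint Jacobian
            m, n = A.shape
            B = np.eye(n)                       # crude Hessian approximation
            K = np.block([[B, A.T], [A, np.zeros((m, m))]])
            sol = np.linalg.solve(K, -np.concatenate([g, cx]))
            d = sol[:n]                         # primal step from the QP
            x = x + d
            if np.linalg.norm(d) < tol and np.linalg.norm(cx) < tol:
                break
        return x

    # Toy example: min x1 + x2  s.t.  x1^2 + x2^2 = 1
    # (minimizer: x1 = x2 = -1/sqrt(2))
    x_star = sqp_identity_hessian(
        f_grad=lambda x: np.array([1.0, 1.0]),
        c=lambda x: np.array([x[0]**2 + x[1]**2 - 1.0]),
        c_jac=lambda x: np.array([[2*x[0], 2*x[1]]]),
        x0=np.array([-1.0, -0.5]),
    )
    ```

    On this toy problem B = I still converges (only linearly, since the true Lagrangian Hessian is roughly 1.41·I); in general, as the abstract notes, such cheap approximations can fail to converge without additional safeguards.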

    A Projected Gradient and Constraint Linearization Method for Nonlinear Model Predictive Control

    No full text
    ISSN: 0363-0129; ISSN: 1095-713

    Projected gradient descent denotes a class of iterative methods for solving optimization programs. In convex optimization, its computational complexity is relatively low whenever the projection onto the feasible set is relatively easy to compute. On the other hand, when the problem is nonconvex, e.g., because of nonlinear equality constraints, the projection becomes hard and thus impractical. In this paper, we propose a projected gradient method for nonlinear programs that only requires projections onto the linearization of the nonlinear constraints around the current iterate, similar to sequential quadratic programming (SQP). The proposed method falls neither into the class of projected gradient descent approaches, because the projection is not performed onto the original nonlinear manifold, nor into that of SQP, since second-order information is not used. For nonlinear smooth optimization problems, we assess local and global convergence to a Karush–Kuhn–Tucker point of the original problem. Further, we show that nonlinear model predictive control is a promising application of the proposed method, due to the sparsity of the resulting optimization problem.
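    The core idea described above, a gradient step projected onto the linearized constraints rather than the nonlinear manifold, can be sketched as follows. This is my own first-order illustration under the assumption of equality constraints only, not the authors' algorithm or step-size rule; the projection onto the affine set {d : c(x) + A d = 0} has a closed form via the normal equations.

    ```python
    import numpy as np

    def pg_constraint_linearization(f_grad, c, c_jac, x0,
                                    alpha=0.1, tol=1e-8, max_iter=2000):
        """Gradient step projected onto the *linearized* equality constraints:
        at each iterate x, solve  min_d ||d + alpha*g||^2  s.t.  A d = -c(x),
        where g = grad f(x) and A = dc/dx(x). Hessian-free, SQP-like geometry."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = f_grad(x)
            cx = np.atleast_1d(c(x))
            A = np.atleast_2d(c_jac(x))
            # Closed-form projected step: d = -alpha*g - A^T mu, with mu from
            # the normal equations  (A A^T) mu = c(x) - alpha * A g.
            mu = np.linalg.solve(A @ A.T, cx - alpha * (A @ g))
            d = -alpha * g - A.T @ mu
            x = x + d
            if np.linalg.norm(d) < tol:
                break
        return x

    # Same toy problem: min x1 + x2  s.t.  x1^2 + x2^2 = 1
    x_star = pg_constraint_linearization(
        f_grad=lambda x: np.array([1.0, 1.0]),
        c=lambda x: np.array([x[0]**2 + x[1]**2 - 1.0]),
        c_jac=lambda x: np.array([[2*x[0], 2*x[1]]]),
        x0=np.array([-1.0, -0.5]),
    )
    ```

    A fixed point of this iteration (d = 0 with c(x) = 0) is exactly a KKT point of the original problem, which matches the convergence target stated in the abstract; the fixed step size `alpha` here is a simplification of whatever globalization the paper actually uses.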