Exact penalty method for D-stationary point of nonlinear optimization
We consider the nonlinear optimization problem with a least-norm
measure of constraint violations and introduce the concepts of the D-stationary
point, the DL-stationary point and the DZ-stationary point with the help of an
exact penalty function. When feasible, these stationary points correspond to
the Fritz-John stationary point, the KKT stationary point and the singular
stationary point, respectively. In order to show the usefulness of the new
stationary points, we propose a new exact penalty sequential quadratic
programming (SQP) method with inner and outer iterations and analyze its global
and local convergence. The proposed method admits convergence to a D-stationary
point and rapid infeasibility detection without driving the penalty parameter
to zero, which substantiates the commentary given in [SIAM J. Optim., 20 (2010),
2281--2299] and can be viewed as a supplement to the theory of nonlinear
optimization on rapid detection of infeasibility. Some illustrative examples
and preliminary numerical results demonstrate that the proposed method is
robust and efficient in solving infeasible nonlinear problems and a degenerate
problem from the literature in which the LICQ fails.
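As a toy, hedged illustration of the idea (not the paper's algorithm), consider an l1-type exact penalty function phi(x; rho) = f(x) + rho * v(x), where v(x) measures the constraint violation. For an infeasible problem, the penalty minimizer can settle at a minimizer of v for every finite value of the penalty parameter, so infeasibility is detected without pushing the parameter to extreme values. The grid search below is purely illustrative:

```python
import numpy as np

# Toy infeasible problem (not from the paper): minimize f(x) = x^2 subject to
# the mutually exclusive constraints x >= 1 and x <= -1.
def violation(x):
    # l1 measure of how much each constraint is violated
    return np.maximum(0.0, 1.0 - x) + np.maximum(0.0, x + 1.0)

def phi(x, rho):
    # l1 exact penalty function
    return x**2 + rho * violation(x)

# The violation is constant (= 2) on [-1, 1], so for every rho the penalty
# minimizer is x = 0: the violation-minimizing point with the smallest f.
xs = np.linspace(-3.0, 3.0, 6001)
for rho in (1.0, 10.0, 100.0):
    x_star = xs[np.argmin(phi(xs, rho))]
    print(rho, x_star)
```

Note that the minimizer does not escape as rho grows, which is the qualitative behavior an exact penalty method exploits when it certifies infeasibility with a finite penalty parameter.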
A globally convergent SQP-type method with least constraint violation for nonlinear semidefinite programming
We present a globally convergent SQP-type method with the least constraint
violation for nonlinear semidefinite programming. The proposed algorithm
employs a two-phase strategy coupled with a line search technique. In the first
phase, a subproblem based on a local model of infeasibility is formulated to
determine a corrective step. In the second phase, a search direction that moves
toward optimality is computed by minimizing a local model of the objective
function. Importantly, regardless of the feasibility of the original problem,
the iterative sequence generated by our proposed method converges to a
Fritz-John point of a transformed problem, wherein the constraint violation is
minimized. Numerical experiments have been conducted on various complex
scenarios to demonstrate the effectiveness of our approach.
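The two-phase idea can be sketched in heavily simplified form, for a standard equality-constrained nonlinear program rather than semidefinite programming: a corrective (normal) step reduces the linearized infeasibility, and a tangential step then improves the quadratic objective model without undoing that reduction. The helper `two_phase_step` below is a hypothetical sketch, not the paper's algorithm:

```python
import numpy as np

def two_phase_step(g, H, c, J):
    """Toy two-phase step for constraints c(x) = 0, given the objective
    gradient g, Hessian model H, constraint values c, and Jacobian J.
    Phase 1: corrective step reducing the linearized constraint violation.
    Phase 2: tangential step minimizing the quadratic objective model in the
    null space of J, so the phase-1 violation reduction is preserved."""
    # phase 1: minimum-norm least-squares solution of J d = -c
    d_n, *_ = np.linalg.lstsq(J, -c, rcond=None)
    # null-space basis of J: moves along it leave J d, hence the
    # linearized violation, unchanged
    _, s, Vt = np.linalg.svd(J)
    rank = int((s > 1e-12).sum())
    Z = Vt[rank:].T
    d = d_n
    if Z.size:
        # reduced quadratic model: min_t (g + H d_n)' Z t + 0.5 t' Z'HZ t
        t = np.linalg.solve(Z.T @ H @ Z, -Z.T @ (g + H @ d_n))
        d = d_n + Z @ t
    return d_n, d
```

For example, for f(x) = 0.5*||x||^2 + x_1 with the single constraint x_1 + x_2 = 1, a step from the origin removes the linearized violation entirely and lands on the constrained minimizer (0, 1), since the quadratic model is exact there.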
OSQP: An Operator Splitting Solver for Quadratic Programs
We present a general-purpose solver for convex quadratic programs based on
the alternating direction method of multipliers, employing a novel operator
splitting technique that requires the solution of a quasi-definite linear
system with the same coefficient matrix at almost every iteration. Our
algorithm is very robust, placing no requirements on the problem data such as
positive definiteness of the objective function or linear independence of the
constraint functions. It can be configured to be division-free once an initial
matrix factorization is carried out, making it suitable for real-time
applications in embedded systems. In addition, our technique is the first
operator splitting method for quadratic programs able to reliably detect primal
and dual infeasible problems from the algorithm iterates. The method also
supports factorization caching and warm starting, making it particularly
efficient when solving parametrized problems arising in finance, control, and
machine learning. Our open-source C implementation OSQP has a small footprint,
is library-free, and has been extensively tested on many problem instances from
a wide variety of application areas. It is typically ten times faster than
competing interior-point methods, and sometimes much more when factorization
caching or warm start is used. OSQP has already shown a large impact with tens
of thousands of users both in academia and in large corporations.
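A stripped-down, dense sketch of the OSQP-style operator splitting is given below, assuming the published ADMM iteration with the relaxation parameter fixed to 1. The real solver factors a sparse quasi-definite KKT matrix once with an LDL' factorization and reuses it every iteration; the explicit inverse here is only a stand-in for that cached factorization:

```python
import numpy as np

def admm_qp(P, q, A, l, u, rho=1.0, sigma=1e-6, iters=1000):
    """Toy dense ADMM for  min 0.5 x'Px + q'x  s.t.  l <= Ax <= u."""
    n, m = P.shape[0], A.shape[0]
    # quasi-definite KKT matrix with the same coefficient matrix at every step
    K = np.block([[P + sigma * np.eye(n), A.T],
                  [A, -(1.0 / rho) * np.eye(m)]])
    K_inv = np.linalg.inv(K)  # stand-in for a one-time sparse LDL' factorization
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for _ in range(iters):
        rhs = np.concatenate([sigma * x - q, z - y / rho])
        sol = K_inv @ rhs
        x, nu = sol[:n], sol[n:]
        z_tilde = z + (nu - y) / rho
        z_new = np.clip(z_tilde + y / rho, l, u)  # projection onto [l, u]
        y = y + rho * (z_tilde - z_new)           # dual update
        z = z_new
    return x, y
```

On the toy QP min 0.5*||x||^2 - x_1 - x_2 subject to x_1 + x_2 <= 1, the iterates converge to x = (0.5, 0.5) with dual variable 0.5, matching the KKT conditions. The division-free and warm-starting properties of the production solver follow from exactly this structure: the factorization is computed once, and (x, z, y) can be seeded from a previous solve.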
Practical Enhancements in Sequential Quadratic Optimization: Infeasibility Detection, Subproblem Solvers, and Penalty Parameter Updates
The primary focus of this dissertation is the design, analysis, and implementation of numerical methods that enhance Sequential Quadratic Optimization (SQO) methods for solving nonlinear constrained optimization problems. These enhancements address practical limitations of SQO methods.

The first part of this dissertation presents a penalty SQO algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, with the important additional feature that fast local convergence is guaranteed when the algorithm is applied to infeasible instances. A two-phase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first-phase subproblem determines the reduction that can be obtained in a local model of constraint violation. The second-phase subproblem seeks to minimize a local model of a penalty function. The solutions of both subproblems are then combined to form the search direction in such a way that it yields a reduction in the local model of constraint violation proportional to the reduction attained in the first phase. The subproblem formulations and parameter updates ensure that, near an optimal solution, the algorithm reduces to a classical SQO method for constrained optimization and, near an infeasible stationary point, it reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under reasonable assumptions, and numerical results are presented for a large set of test problems.

In the second part of this dissertation, two matrix-free methods are presented for approximately solving large-scale exact penalty subproblems.
The first approach is a novel iterative re-weighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while simultaneously updating a relaxation vector. The second approach recasts the subproblem as a linearly constrained nonsmooth optimization problem and then applies alternating direction augmented Lagrangian (ADAL) technology to solve it. The main computational cost of each algorithm is the repeated minimization of convex quadratic functions, which can be performed matrix-free. Both algorithms are proved to be globally convergent under loose assumptions, and each reaches ε-optimality of the objective function in a finite number of iterations. Numerical experiments exhibit the ability of both algorithms to efficiently find inexact solutions; moreover, in certain cases, IRWA is shown to be more reliable than ADAL.

In the final part of this dissertation, we focus on the design of the penalty parameter updating strategy in penalty SQO methods for solving large-scale nonlinear optimization problems. Since the most computationally demanding aspect of such an approach is the computation of the search direction during each iteration, we consider the use of matrix-free methods for solving the direction-finding subproblems. This allows the acceptance of inexact subproblem solutions, which can significantly reduce overall computational costs. However, such a method can be plagued by poor behavior of the global convergence mechanism, for which we consider the use of an exact penalty function. To confront this issue, we propose a dynamic penalty parameter updating strategy, employed within the subproblem solver, such that the resulting search direction predicts progress toward both feasibility and optimality. We present our penalty parameter updating strategy and prove that it does not decrease the penalty parameter unnecessarily in a neighborhood of points satisfying certain common assumptions. We also discuss two matrix-free subproblem solvers in which our updating strategy can be readily incorporated.
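The iterative re-weighting idea can be sketched as follows, as a hedged toy: the l1 exact penalty subproblem min_d g'd + 0.5 d'Hd + rho*||c + Jd||_1 is smoothed by a relaxation eps that is driven to zero, and each pass solves a reweighted linear system. The name `irwa_l1_qp` is hypothetical, and the dense solve stands in for a matrix-free conjugate-gradient solve:

```python
import numpy as np

def irwa_l1_qp(g, H, c, J, rho=1.0, eps0=1.0, iters=50):
    """Toy IRWA-flavored solver for  min_d g'd + 0.5 d'Hd + rho*||c + Jd||_1.
    Each |r_i| is replaced by the smooth surrogate sqrt(r_i^2 + eps^2), whose
    majorizing quadratic model is minimized by a reweighted linear solve."""
    n = H.shape[0]
    d = np.zeros(n)
    eps = eps0
    for _ in range(iters):
        r = c + J @ d
        w = rho / np.sqrt(r**2 + eps**2)   # per-residual weights
        # stationarity of the weighted model: (H + J'WJ) d = -(g + J'W c)
        WJ = w[:, None] * J
        d = np.linalg.solve(H + J.T @ WJ, -(g + J.T @ (w * c)))
        eps *= 0.5                          # drive the relaxation vector to zero
    return d
```

On the one-dimensional subproblem min_d 0.5*d^2 + |d - 2| the exact minimizer is d = 1, and the iteration above converges to it linearly; the repeated solves involve only matrix-vector products with H and J, which is what makes a matrix-free implementation natural.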