A Primal-Dual Augmented Lagrangian
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method.
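For context, a sketch of the functions involved, assuming the standard equality-constrained setting minimize f(x) subject to c(x) = 0, with multiplier estimate y_e and penalty parameter \mu > 0 (the exact scaling conventions below are an assumption for illustration, not a quotation from the paper). The Hestenes-Powell augmented Lagrangian is

\[ L_A(x; y_e, \mu) = f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|_2^2, \]

and a primal-dual generalization of the kind described above adds a term in the dual variables y,

\[ M_\nu(x, y; y_e, \mu) = f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|_2^2 + \frac{\nu}{2\mu}\,\|c(x) + \mu (y - y_e)\|_2^2, \]

which is minimized jointly in (x, y). The last term penalizes the deviation of y from the first-order multiplier estimate y_e - c(x)/\mu, which is what allows the quality of the dual variables to be monitored while the subproblem is solved.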
OSQP: An Operator Splitting Solver for Quadratic Programs
We present a general-purpose solver for convex quadratic programs based on the alternating direction method of multipliers, employing a novel operator splitting technique that requires the solution of a quasi-definite linear system with the same coefficient matrix at almost every iteration. Our algorithm is very robust, placing no requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. It can be configured to be division-free once an initial matrix factorization is carried out, making it suitable for real-time applications in embedded systems. In addition, our technique is the first operator splitting method for quadratic programs able to reliably detect primal and dual infeasible problems from the algorithm iterates. The method also supports factorization caching and warm starting, making it particularly efficient when solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation OSQP has a small footprint, is library-free, and has been extensively tested on many problem instances from a wide variety of application areas. It is typically ten times faster than competing interior-point methods, and sometimes much more when factorization caching or warm starting is used. OSQP has already shown a large impact, with tens of thousands of users both in academia and in large corporations.
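To make the factorization-caching and warm-starting workflow concrete, here is a minimal sketch using the solver's Python interface; the small QP data below are invented purely for illustration.

import numpy as np
import scipy.sparse as sparse
import osqp

# Small QP: minimize (1/2) x' P x + q' x  subject to  l <= A x <= u.
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)   # the quasi-definite matrix is factorized once here
res = prob.solve()
print(res.info.status, res.x)

# Parametrized re-solve: updating the linear cost keeps the cached
# factorization, and the solver warm-starts from the previous iterates.
prob.update(q=np.array([2.0, 3.0]))
res = prob.solve()
print(res.info.status, res.x)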
Local quadratic convergence of polynomial-time interior-point methods for conic optimization problems
In this paper, we establish local quadratic convergence of polynomial-time interior-point methods for general conic optimization problems. The main structural property used in our analysis is the logarithmic homogeneity of self-concordant barrier functions. We propose new path-following predictor-corrector schemes which work only in the dual space. They are based on an easily computable gradient proximity measure, which ensures an automatic transformation of the global linear rate of convergence to the local quadratic one under some mild assumptions. Our step-size procedure for the predictor step is related to the maximum step size (the one that takes us to the boundary). It appears that, in order to obtain local superlinear convergence, we need to tighten the neighborhood of the central path proportionally to the current duality gap.
Keywords: conic optimization problem, worst-case complexity analysis, self-concordant barriers, polynomial-time methods, predictor-corrector methods, local quadratic convergence
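For reference, the structural property invoked here can be stated in standard form (the notation is generic, not taken from this paper): a \nu-logarithmically homogeneous self-concordant barrier F for a cone K satisfies

\[ F(\tau x) = F(x) - \nu \ln \tau \quad \text{for all } x \in \operatorname{int} K, \ \tau > 0, \]

which, by differentiation, yields the identities \nabla F(\tau x) = \tau^{-1} \nabla F(x) and \langle \nabla F(x), x \rangle = -\nu. Identities of this kind connect the barrier gradient to the duality gap in conic duality, which is what makes a gradient-based proximity measure natural to compute in the dual space.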