A hierarchical time-splitting approach for solving finite-time optimal control problems
We present a hierarchical computation approach for solving finite-time
optimal control problems using operator splitting methods. The first split is
performed over the time index and leads to as many subproblems as the length of
the prediction horizon. Each subproblem is solved in parallel and further split
into three parts by separating the objective from the equality and inequality
constraints, so that each part admits an analytic solution. The proposed
approach leads to a nested decomposition scheme that is highly parallelizable.
We present a numerical comparison with standard state-of-the-art solvers and
provide analytic solutions to several elements of the algorithm, which enhances
its applicability in fast large-scale applications.
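As a minimal illustration of the objective/constraint splitting idea (a toy sketch, not the paper's algorithm), the following ADMM iteration minimizes a quadratic objective subject to an inequality constraint, split so that every subproblem has a closed-form solution; all names are illustrative:

```python
import numpy as np

def admm_split(c, rho=1.0, iters=200):
    """Toy ADMM: minimize ||x - c||^2 subject to x >= 0, split so that
    both the objective step and the constraint step are analytic."""
    x = np.zeros_like(c)
    z = np.zeros_like(c)
    u = np.zeros_like(c)  # scaled dual variable
    for _ in range(iters):
        # objective step: closed-form minimizer of ||x-c||^2 + (rho/2)||x - z + u||^2
        x = (2 * c + rho * (z - u)) / (2 + rho)
        # constraint step: projection onto the nonnegative orthant
        z = np.maximum(x + u, 0.0)
        # dual update
        u = u + x - z
    return z

print(admm_split(np.array([-1.0, 2.0])))  # approaches [0, 2]
```

The same pattern — cheap analytic updates coordinated by dual variables — is what makes the nested decomposition in the paper amenable to parallel hardware.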
A Parallel Riccati Factorization Algorithm with Applications to Model Predictive Control
Model Predictive Control (MPC) is increasing in popularity in industry as
more efficient algorithms for solving the related optimization problem are
developed. The main computational bottleneck in on-line MPC is often the
computation of the search step direction, i.e. the Newton step, which is
typically done using generic sparsity-exploiting algorithms or Riccati
recursions. However, as parallel hardware becomes increasingly common, the
demand for efficient parallel algorithms for computing the Newton step is
growing. In
this paper a tailored, non-iterative parallel algorithm for computing the
Riccati factorization is presented. The algorithm exploits the special
structure in the MPC problem, and when sufficiently many processing units are
available, the complexity of the algorithm scales logarithmically in the
prediction horizon. Since computing the Newton step is the main computational
bottleneck in many MPC algorithms, the proposed algorithm can significantly
reduce the computation cost of popular state-of-the-art MPC methods.
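For context, a minimal serial backward Riccati recursion for the finite-horizon LQ problem — the baseline computation that the paper parallelizes — can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def lqr_backward_riccati(A, B, Q, R, Qf, N):
    """Serial backward Riccati recursion for the finite-horizon LQ problem:
    returns the time-varying feedback gains K_0..K_{N-1} and the initial
    cost-to-go matrix. Complexity is linear in the horizon N; the paper's
    contribution is a parallel variant that scales logarithmically."""
    P = Qf
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B                  # input-weighted Hessian
        K = np.linalg.solve(S, B.T @ P @ A)  # feedback gain at this stage
        P = Q + A.T @ P @ (A - B @ K)        # cost-to-go update
        gains.append(K)
    return gains[::-1], P

# scalar example: A = B = Q = R = 1
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
gains, P = lqr_backward_riccati(A, B, Q, R, Q, 50)
```

In the scalar example above the recursion converges to the stationary Riccati solution P = (1 + sqrt(5))/2, which is a convenient sanity check for the implementation.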
Parameter Selection and Pre-Conditioning for a Graph Form Solver
In a recent paper, Parikh and Boyd describe a method for solving a convex
optimization problem, where each iteration involves evaluating a proximal
operator and projection onto a subspace. In this paper we address the critical
practical issues of how to select the proximal parameter in each iteration and
how to scale the original problem variables so as to achieve reliable practical
performance. The resulting method has been implemented as an open-source
software package called POGS (Proximal Graph Solver), which targets
multi-core and GPU-based systems, and has been tested on a wide variety of
practical problems. Numerical results show that POGS can solve very large
problems (with, say, more than a billion coefficients in the data), to modest
accuracy in a few tens of seconds. As just one example, a radiation treatment
planning problem with around 100 million coefficients in the data can be solved
in a few seconds, compared with around one hour for an interior-point method.
(Comment: 28 pages, 1 figure, 1 open-source implementation)
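The subspace-projection step at the heart of graph-form methods — each iteration projects onto the graph {(x, y) : y = Ax} — can be sketched as follows (a simplified dense-matrix illustration, not the POGS implementation, which caches factorizations and applies the preconditioning discussed in the paper):

```python
import numpy as np

def graph_projection(A, c, d):
    """Project the point (c, d) onto the graph subspace {(x, y) : y = A x},
    i.e. minimize ||x - c||^2 + ||y - d||^2 subject to y = A x.
    The normal equations give (I + A^T A) x = c + A^T d."""
    n = A.shape[1]
    x = np.linalg.solve(np.eye(n) + A.T @ A, c + A.T @ d)
    return x, A @ x

# project the point (0, 2) onto the line y = 2x
x, y = graph_projection(np.array([[2.0]]), np.array([0.0]), np.array([2.0]))
```

Since the matrix I + A^T A is fixed across iterations, a practical solver factorizes it once and reuses the factorization, which is also why good scaling of A (the preconditioning question the paper addresses) strongly affects convergence.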
OSQP: An Operator Splitting Solver for Quadratic Programs
We present a general-purpose solver for convex quadratic programs based on
the alternating direction method of multipliers, employing a novel operator
splitting technique that requires the solution of a quasi-definite linear
system with the same coefficient matrix at almost every iteration. Our
algorithm is very robust, placing no requirements on the problem data such as
positive definiteness of the objective function or linear independence of the
constraint functions. It can be configured to be division-free once an initial
matrix factorization is carried out, making it suitable for real-time
applications in embedded systems. In addition, our technique is the first
operator splitting method for quadratic programs able to reliably detect primal
and dual infeasible problems from the algorithm iterates. The method also
supports factorization caching and warm starting, making it particularly
efficient when solving parametrized problems arising in finance, control, and
machine learning. Our open-source C implementation OSQP has a small footprint,
is library-free, and has been extensively tested on many problem instances from
a wide variety of application areas. It is typically ten times faster than
competing interior-point methods, and sometimes much more when factorization
caching or warm starting is used. OSQP has already shown a large impact, with
tens of thousands of users in both academia and large corporations.
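The splitting described above can be sketched in a few lines (a simplified dense-NumPy illustration of an OSQP-style ADMM iteration, not the OSQP code itself; note that the quasi-definite coefficient matrix is fixed across iterations, which is what makes factorization caching possible):

```python
import numpy as np

def osqp_style_admm(P, q, A, l, u, rho=1.0, sigma=1e-6, iters=2000):
    """ADMM for: minimize (1/2) x'Px + q'x  subject to  l <= Ax <= u.
    Every iteration solves a linear system with the SAME quasi-definite
    matrix K; a real solver factorizes K once and caches the factor."""
    n, m = P.shape[0], A.shape[0]
    K = np.block([[P + sigma * np.eye(n), A.T],
                  [A, -np.eye(m) / rho]])
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for _ in range(iters):
        rhs = np.concatenate([sigma * x - q, z - y / rho])
        sol = np.linalg.solve(K, rhs)          # cached factor in practice
        x_t, nu = sol[:n], sol[n:]
        z_t = z + (nu - y) / rho
        x = x_t
        z_next = np.clip(z_t + y / rho, l, u)  # projection onto [l, u]
        y = y + rho * (z_t - z_next)           # dual update
        z = z_next
    return x

# small demo QP (x1 + x2 = 1, 0 <= x_i <= 0.7); solution is [0.3, 0.7]
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])
print(osqp_style_admm(P, q, A, l, u))
```

Only the right-hand side changes between iterations (and between re-solves of a parametrized problem with updated q, l, or u), so the expensive factorization is paid once — the source of OSQP's speed advantage on warm-started, parametrized problems.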