A Condensed and Sparse QP Formulation for Predictive Control
Abstract: The computational burden that model predictive control (MPC) imposes depends to a large extent on how the optimal control problem is formulated as an optimization problem. In this paper, we present a new formulation that results in a compact and sparse optimization problem to be solved at each sampling interval. The approach is based on a change of variables that leads to a block-banded Hessian when the horizon length is greater than the controllability index of the plant. In this case the problem can be solved with an interior-point method in time linear in the horizon length. The cost of existing dense approaches grows cubically with the horizon length, whereas existing sparse approaches grow at a significantly greater rate than the method presented here.

I. INTRODUCTION

Applications with fast dynamics impose high sampling-frequency requirements on their control loops. In linear MPC, the optimal control input at every sampling instant is determined through the solution of a convex optimization problem with a quadratic cost and linear constraints. The very high computational demands stand as a barrier that has prevented the widespread use of MPC in applications with fast dynamics that could otherwise benefit from MPC's natural ability to deal with physical constraints. One method for reducing the computation time is to compute a piecewise-linear feedback control law offline as an explicit function of the current state of the plant. The method employed when formulating the MPC problem as a quadratic program (QP) has a large impact on the problem size and structure, the resulting computational and memory requirements, and the numerical conditioning. The standard approach makes use of the plant dynamics to eliminate the states from the decision variables by expressing them as an explicit function of the current state and the future control inputs.
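The loss of sparsity under the standard (condensed) approach can be illustrated numerically. The sketch below is a minimal assumption-laden example, using a scalar plant rather than the paper's general setting and illustrative names throughout: it builds the Hessian obtained after eliminating the states and confirms that every entry is nonzero.

```python
# Illustrative sketch (not the paper's formulation): condensed MPC QP for a
# scalar plant x[k+1] = a*x[k] + b*u[k] with stage cost q*x^2 + r*u^2.
# Eliminating the states gives x[k] = a^k*x0 + sum_{j<k} a^(k-1-j)*b*u[j],
# so the Hessian in the inputs alone is fully dense.

def condensed_hessian(a, b, q, r, N):
    H = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            s = r if i == j else 0.0
            # Each state x[k], k > max(i, j), couples inputs u[i] and u[j]:
            # it contributes q * (dx[k]/du[i]) * (dx[k]/du[j]).
            for k in range(max(i, j) + 1, N + 1):
                s += q * (a ** (k - 1 - i) * b) * (a ** (k - 1 - j) * b)
            H[i][j] = s
    return H

H = condensed_hessian(a=0.9, b=1.0, q=1.0, r=0.1, N=4)
nonzeros = sum(1 for row in H for v in row if v != 0.0)
print(nonzeros)  # 16: every entry of the 4x4 condensed Hessian is nonzero
```

Keeping the states as decision variables instead would make the Hessian block diagonal, at the cost of more variables; the paper's change of variables aims to keep a banded structure without that growth.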
Controlling the level of sparsity in MPC
In optimization routines used for on-line Model Predictive Control (MPC),
linear systems of equations are usually solved in each iteration. This is true
both for Active Set (AS) methods as well as for Interior Point (IP) methods,
and for linear MPC as well as for nonlinear MPC and hybrid MPC. The main
computational effort is spent while solving these linear systems of equations,
and hence, it is of greatest interest to solve them efficiently. Classically,
the optimization problem has been formulated in one of two ways: one leads to a sparse linear system of equations with relatively many variables to solve in each iteration, and the other to a dense linear system with relatively few variables. In this work, it is shown that these two formulations are not the only choices. Instead, an entire family of formulations with different levels of sparsity and numbers of variables can be created, and this extra degree of freedom can be exploited to obtain even better performance with the software and hardware at hand. This result also provides a better answer to an often-discussed question in MPC: should the sparse or the dense formulation be used? It is shown that often neither of these classical choices is the best one, and that a better choice with a different level of sparsity can actually be found.
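A rough way to see the family of formulations in between the two classical extremes is to count decision variables under partial condensing, where intermediate states inside blocks of M stages are eliminated. The sketch below is a hypothetical counting exercise; the convention and names are assumptions, not taken from the work itself.

```python
# Hedged sketch: decision-variable count for a family of partially condensed
# MPC QPs. Block size M = 1 recovers the sparse formulation, M = N the dense
# one; intermediate values interpolate between them.

def num_variables(N, n, m, M):
    # Keep one state vector (dimension n) at the start of each block of M
    # stages, plus all N input vectors (dimension m); states inside a block
    # are eliminated using the dynamics.
    assert N % M == 0
    blocks = N // M
    return blocks * n + N * m

N, n, m = 12, 8, 2  # horizon 12, 8 states, 2 inputs (illustrative numbers)
counts = {M: num_variables(N, n, m, M) for M in (1, 2, 3, 4, 6, 12)}
print(counts)  # {1: 120, 2: 72, 3: 56, 4: 48, 6: 40, 12: 32}
```

Fewer variables per iteration come at the price of denser blocks in the linear system, which is exactly the trade-off the extra degree of freedom lets one tune to the hardware at hand.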
Exploiting Chordality in Optimization Algorithms for Model Predictive Control
In this chapter we show that chordal structure can be used to devise
efficient optimization methods for many common model predictive control
problems. The chordal structure is used both for computing search directions efficiently and for distributing all the other computations in an interior-point method for solving the problem. The chordal structure can stem both from the sequential nature of the problem and from distributed formulations of the problem, such as those related to scenario trees. The framework enables efficient parallel computations.
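The sequential structure mentioned above corresponds to a chain graph, the simplest chordal graph: eliminating variables along the chain produces no fill-in. A generic way to make this concrete (this is not the chapter's algorithm) is the Thomas algorithm for a tridiagonal system, which is exactly chain-ordered Gaussian elimination running in O(n).

```python
# Generic sketch: elimination along a chain-structured (hence chordal) system.
# A tridiagonal matrix is the scalar analogue of the block-banded KKT systems
# arising in MPC; elimination in chain order creates no new nonzeros.

def thomas_solve(lower, diag, upper, rhs):
    n = len(diag)
    d = diag[:]
    b = rhs[:]
    # Forward elimination along the chain.
    for i in range(1, n):
        w = lower[i - 1] / d[i - 1]
        d[i] -= w * upper[i - 1]
        b[i] -= w * b[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x

# Solve [[4,1,0],[1,4,1],[0,1,4]] x = [6,12,14].
x = thomas_solve([1.0, 1.0], [4.0, 4.0, 4.0], [1.0, 1.0], [6.0, 12.0, 14.0])
print(x)  # ≈ [1.0, 2.0, 3.0]
```

For the block-structured MPC case, the same idea applies with matrix blocks in place of scalars, and a chordal decomposition additionally exposes which blocks can be processed in parallel.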
Custom optimization algorithms for efficient hardware implementation
The focus is on real-time optimal decision making with application in advanced control
systems. These computationally intensive schemes, which involve the repeated solution of
(convex) optimization problems within a sampling interval, require more efficient computational
methods than currently available for extending their application to highly dynamical
systems and setups with resource-constrained embedded computing platforms.
A range of techniques are proposed to exploit synergies between digital hardware, numerical
analysis and algorithm design. These techniques build on top of parameterisable
hardware code generation tools that generate VHDL code describing custom computing
architectures for interior-point methods and a range of first-order constrained optimization
methods. Since memory limitations are often important in embedded implementations, we
develop a custom storage scheme for KKT matrices arising in interior-point methods for
control, which reduces memory requirements significantly and prevents I/O bandwidth
limitations from affecting the performance in our implementations. To take advantage of
the trend towards parallel computing architectures and to exploit the special characteristics
of our custom architectures we propose several high-level parallel optimal control
schemes that can reduce computation time. A novel optimization formulation was devised
for reducing the computational effort in solving certain problems independent of the computing
platform used. In order to be able to solve optimization problems in fixed-point
arithmetic, which is significantly more resource-efficient than floating-point, tailored linear
algebra algorithms were developed for solving the linear systems that form the computational
bottleneck in many optimization methods. These methods come with guarantees
for reliable operation. We also provide finite-precision error analysis for fixed-point implementations
of first-order methods that can be used to minimize the use of resources while
meeting accuracy specifications. The suggested techniques are demonstrated on several
practical examples, including a hardware-in-the-loop setup for optimization-based control
of a large airliner.
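The fixed-point theme can be illustrated with a minimal sketch, assuming a Q4.12 format with round-to-nearest; the format and function names are illustrative choices, not the thesis's implementation.

```python
# Hedged sketch: Q-format fixed-point multiplication of the kind needed when
# linear-algebra kernels must run without a floating-point unit. Q4.12 here:
# 4 integer bits, 12 fractional bits (an assumed format for illustration).

FRAC_BITS = 12

def to_fixed(x):
    # Quantize a real number to the nearest representable Q4.12 value.
    return int(round(x * (1 << FRAC_BITS)))

def fx_mul(a, b):
    # Full-precision integer product, then round back to FRAC_BITS
    # fractional bits (round-to-nearest via the half-LSB offset).
    prod = a * b
    return (prod + (1 << (FRAC_BITS - 1))) >> FRAC_BITS

def fx_to_float(a):
    return a / (1 << FRAC_BITS)

a = to_fixed(1.5)
b = to_fixed(0.25)
print(fx_to_float(fx_mul(a, b)))  # 0.375
```

Every such rounding step injects an error of at most half an LSB; the finite-precision analyses mentioned above bound how these per-operation errors accumulate through a solver, which is what lets word lengths be chosen to meet an accuracy specification.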