
    A Parallel Riccati Factorization Algorithm with Applications to Model Predictive Control

    Model Predictive Control (MPC) is becoming increasingly popular in industry as more efficient algorithms for solving the underlying optimization problem are developed. The main computational bottleneck in on-line MPC is often the computation of the search step direction, i.e. the Newton step, which is commonly done using generic sparsity-exploiting algorithms or Riccati recursions. As parallel hardware becomes more widespread, the demand for efficient parallel algorithms for computing the Newton step is growing. In this paper a tailored, non-iterative parallel algorithm for computing the Riccati factorization is presented. The algorithm exploits the special structure of the MPC problem, and when sufficiently many processing units are available, its complexity scales logarithmically in the prediction horizon. Since computing the Newton step is the main computational bottleneck in many MPC methods, the algorithm can significantly reduce the computational cost of popular state-of-the-art MPC algorithms.
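    For context, the object being parallelized is the classical backward Riccati recursion that factorizes the equality-constrained LQ subproblem behind the Newton step. The sketch below is a minimal serial version of that recursion, not the paper's parallel algorithm; the time-invariant matrices A, B, Q, R and all function names are illustrative assumptions. The paper's algorithm instead splits the horizon into segments, factorizes them concurrently and combines the results, which is where the logarithmic scaling comes from.

```python
import numpy as np

def riccati_newton_step(A, B, Q, R, q, r, x0, N):
    """Serial Riccati recursion for the unconstrained LQ subproblem that
    arises when computing the Newton step in MPC (illustrative sketch).

    Backward pass: recurse the cost-to-go matrix P_t and vector p_t.
    Forward pass: roll out the resulting state/control trajectory.
    """
    n, m = B.shape
    P = [None] * (N + 1)
    p = [None] * (N + 1)
    K = [None] * N
    k = [None] * N

    P[N], p[N] = Q, q                        # terminal cost-to-go
    for t in range(N - 1, -1, -1):           # backward Riccati sweep
        G = R + B.T @ P[t + 1] @ B
        K[t] = -np.linalg.solve(G, B.T @ P[t + 1] @ A)
        k[t] = -np.linalg.solve(G, r + B.T @ p[t + 1])
        P[t] = Q + A.T @ P[t + 1] @ (A + B @ K[t])
        p[t] = q + A.T @ (p[t + 1] + P[t + 1] @ B @ k[t])

    x = np.zeros((N + 1, n))
    u = np.zeros((N, m))
    x[0] = x0
    for t in range(N):                       # forward rollout of the step
        u[t] = K[t] @ x[t] + k[t]
        x[t + 1] = A @ x[t] + B @ u[t]
    return x, u
```

    The serial sweep above is inherently sequential in the horizon; the parallel variant described in the abstract removes that dependency by factorizing horizon segments independently, at the cost of a small combination step.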

    Intermittent predictive control of an inverted pendulum

    Intermittent predictive pole-placement control is successfully applied to the constrained-state control of a prestabilised experimental inverted pendulum.

    Controlling the level of sparsity in MPC

    In the optimization routines used for on-line Model Predictive Control (MPC), linear systems of equations are usually solved in each iteration. This holds for Active Set (AS) methods as well as Interior Point (IP) methods, and for linear, nonlinear and hybrid MPC alike. The main computational effort is spent solving these linear systems of equations, and hence it is of great interest to solve them efficiently. Classically, the optimization problem has been formulated in one of two ways: one leads to a sparse linear system of equations with relatively many variables in each iteration, the other to a dense linear system with relatively few variables. In this work it is shown that these two formulations are not the only possible choices. Instead, an entire family of formulations with different levels of sparsity and numbers of variables can be created, and this extra degree of freedom can be exploited to obtain even better performance with the software and hardware at hand. This result also gives a better answer to an often-discussed question in MPC: should the sparse or the dense formulation be used? It is shown that often neither of these classical choices is the best one, and that a better choice with a different level of sparsity can actually be found.
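    One common way to realize such a family of intermediate formulations is block-wise (partial) condensing: states are eliminated within blocks of the prediction horizon but kept as variables at the block boundaries. The sketch below is only an illustration of that idea under assumed names and a made-up double-integrator example, not code from the paper; block size 1 recovers the fully sparse formulation and block size N the fully condensed (dense) one.

```python
import numpy as np

def condense_block(A, B, Nb):
    """Condense the dynamics x_{k+1} = A x_k + B u_k over a block of Nb steps.

    Returns (Abar, Bbar) such that
        x_{t+Nb} = Abar @ x_t + Bbar @ [u_t; ...; u_{t+Nb-1}].
    Applying this per block interpolates between the fully sparse
    formulation (Nb = 1) and the fully condensed/dense one (Nb = N).
    """
    n, m = B.shape
    Abar = np.linalg.matrix_power(A, Nb)
    Bbar = np.zeros((n, Nb * m))
    for j in range(Nb):
        # contribution of u_{t+j} to x_{t+Nb}: A^(Nb-1-j) B
        Bbar[:, j * m:(j + 1) * m] = np.linalg.matrix_power(A, Nb - 1 - j) @ B
    return Abar, Bbar

# Illustrative double-integrator example: a horizon of N = 12 split into
# blocks of Nb = 3 keeps only every third state as an optimization variable,
# trading sparsity (bandwidth) against problem size.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Abar, Bbar = condense_block(A, B, Nb=3)
print(Abar.shape, Bbar.shape)   # (2, 2) (2, 3)
```

    The block size then becomes a tuning knob: smaller blocks preserve banded structure that sparse solvers exploit, while larger blocks shrink the number of variables, and the best choice depends on the solver and hardware at hand.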