
    Constrained LQR for Low-Precision Data Representation

    Performing computations with a low-bit number representation results in a faster implementation that uses less silicon, and hence allows an algorithm to be implemented in smaller and cheaper processors without loss of performance. We propose a novel formulation to efficiently exploit the low (or non-standard) precision number representation of some computer architectures when computing the solution to constrained LQR problems, such as those that arise in predictive control. The main idea is to include suitably-defined decision variables in the quadratic program, in addition to the states and the inputs, to allow for smaller roundoff errors in the solver. This enables one to trade off the number of bits used for data representation against speed and/or hardware resources, so that smaller numerical errors can be achieved for the same number of bits (same silicon area). Because of data dependencies, the algorithm complexity, in terms of computation time and hardware resources, does not necessarily increase despite the larger number of decision variables. Examples show that a 10-fold reduction in hardware resources is possible compared to using double-precision floating point, without loss of closed-loop performance.
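
    For orientation, a minimal sketch of a standard sparse constrained-LQR quadratic program is given below, with both the states and the inputs kept as decision variables; the "suitably-defined" extra variables described in the abstract would augment a formulation of this kind. The matrices and constraint sets here are generic placeholders, not the paper's specific choices.

$$
\begin{aligned}
\min_{x_0,\dots,x_N,\;u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1}\left(x_k^\top Q x_k + u_k^\top R u_k\right) + x_N^\top P x_N\\
\text{s.t.} \quad & x_{k+1} = A x_k + B u_k, \qquad k = 0,\dots,N-1,\\
& x_0 = \hat{x}, \qquad x_k \in \mathcal{X}, \qquad u_k \in \mathcal{U}.
\end{aligned}
$$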

    Computer Architectures to Close the Loop in Real-time Optimization

    © 2015 IEEE. Many modern control, automation, signal processing and machine learning applications rely on solving a sequence of optimization problems, which are updated with measurements of a real system that evolves in time. The solutions of each of these optimization problems are then used to make decisions, which may be followed by changing some parameters of the physical system, thereby resulting in a feedback loop between the computing and the physical system. Real-time optimization is not the same as fast optimization, because the computation is affected by an uncertain system that evolves in time. The suitability of a design should therefore not be judged from the optimality of a single optimization problem, but based on the evolution of the entire cyber-physical system. The algorithms and hardware used for solving a single optimization problem in the office might therefore be far from ideal when solving a sequence of real-time optimization problems. Instead of there being a single, optimal design, one has to trade off a number of objectives, including performance, robustness, energy usage, size and cost. We therefore provide here a tutorial introduction to some of the questions and implementation issues that arise in real-time optimization applications. We concentrate on some of the decisions that have to be made when designing the computing architecture and algorithm, and argue that the choice of one informs the other.

    Analytical results for the multi-objective design of model-predictive control

    In model-predictive control (MPC), achieving the best closed-loop performance under a given computational resource is the underlying design consideration. This paper analyzes the MPC design problem with control performance and required computational resource as competing design objectives. The proposed multi-objective design of MPC (MOD-MPC) approach extends current methods that treat control performance and the computational resource separately -- often with the latter as a fixed constraint -- which requires the implementation hardware to be known a priori. The proposed approach focuses on the tuning of structural MPC parameters, namely sampling time and prediction horizon length, to produce a set of optimal choices available to the practitioner. The posed design problem is then analyzed to reveal key properties, including smoothness of the design objectives and parameter bounds, and to establish certain validated guarantees. Building on these properties, necessary and sufficient conditions for an effective and efficient solver are presented, leading to the proposal of a specialized multi-objective optimizer for MOD-MPC. Finally, two real-world control problems are used to illustrate the results of the design approach and the importance of the developed conditions for an effective solver of the MOD-MPC problem.
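
    As a rough illustration of the multi-objective view described above, the sketch below enumerates candidate (sampling time, horizon) pairs and keeps the Pareto-optimal trade-offs between a control-performance objective and a computational-resource objective. Both objective functions are hypothetical toy models introduced here for illustration, not the paper's MOD-MPC objectives or its specialized solver.

```python
def control_cost(Ts, N):
    # toy model: performance degrades with slow sampling and short horizons
    return Ts + 1.0 / N

def compute_cost(Ts, N):
    # toy model: per-second workload grows roughly as N^2 / Ts for a QP-based MPC
    return N ** 2 / Ts

candidates = [(Ts, N) for Ts in (0.01, 0.02, 0.05, 0.1) for N in range(5, 41, 5)]
objs = [(control_cost(Ts, N), compute_cost(Ts, N), Ts, N) for Ts, N in candidates]

# keep the non-dominated (Pareto-optimal) designs
pareto = [p for p in objs
          if not any(q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
                     for q in objs)]
for c, r, Ts, N in sorted(pareto):
    print(f"Ts = {Ts:5.2f} s   N = {N:2d}   control = {c:5.3f}   compute = {r:9.0f}")
```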

    Reduced Order Modeling for Nonlinear PDE-constrained Optimization using Neural Networks

    Nonlinear model predictive control (NMPC) often requires the real-time solution of optimization problems. However, in cases where the mathematical model is of high dimension in the solution space, e.g., for the solution of partial differential equations (PDEs), black-box optimizers are rarely sufficient to get the required online computational speed. In such cases one must resort to customized solvers. This paper presents a new solver for nonlinear, time-dependent PDE-constrained optimization problems. It is composed of a sequential quadratic programming (SQP) scheme to solve the PDE-constrained problem in an offline phase, a proper orthogonal decomposition (POD) approach to identify a lower-dimensional solution space, and a neural network (NN) for fast online evaluations. The proposed method is showcased on a regularized least-squares optimal control problem for the viscous Burgers' equation. It is concluded that a significant online speed-up is achieved, compared to conventional methods using SQP and finite elements, at the cost of a prolonged offline phase and reduced accuracy. Comment: Accepted for publication at the 58th IEEE Conference on Decision and Control, Nice, France, 11-13 December 2019, https://cdc2019.ieeecss.org
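
    The offline/online split described in the abstract can be sketched with a small proof of concept: a truncated SVD of a snapshot matrix yields a POD basis, and a small neural network maps problem parameters to the reduced coefficients for fast online reconstruction. The snapshots and parameters below are synthetic placeholders, not the Burgers' equation setup of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=(200, 2))           # e.g. (viscosity, control weight)
grid = np.linspace(0.0, np.pi, 100)
snapshots = np.array([p[1] * np.sin((1.0 + p[0]) * grid) for p in params])  # toy "solutions"

# offline: POD basis from a truncated SVD of the centered snapshot matrix
mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = Vt[:5].T                                         # reduced dimension r = 5
coeffs = (snapshots - mean) @ basis

# offline: neural-network surrogate from parameters to POD coefficients
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(params, coeffs)

# online: cheap evaluation for a new parameter, reconstructed in the full space
p_new = np.array([[0.3, 0.7]])
u_approx = mean + nn.predict(p_new) @ basis.T
```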

    Energy-aware MPC co-design for DC-DC converters

    In this paper, we propose an integrated controller design methodology for the implementation of an energy-aware explicit model predictive control (MPC) algorithm, illustrating the method on a DC-DC converter model. The power consumption of control algorithms is becoming increasingly important for low-power embedded systems, especially where complex digital control techniques, like MPC, are used. For DC-DC converters, digital control provides better regulation, but also higher energy consumption compared to standard analog methods. To overcome this limitation in energy efficiency, instead of addressing the problem by implementing sub-optimal MPC schemes, the closed-loop performance and the control algorithm power consumption are minimized in a joint cost function, allowing us to keep the controller power efficiency closer to an analog approach while maintaining closed-loop optimality. A case study for an implementation in reconfigurable hardware shows how a designer can optimally trade closed-loop performance against hardware implementation performance.
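
    The joint cost described above can be illustrated by a simple scalarization: pick the design that minimizes closed-loop cost plus a weighted controller-power term, and sweep the weight to trace the trade-off. The two evaluation functions below are hypothetical stand-ins for a closed-loop simulation and a hardware power model, not the paper's actual co-design formulation.

```python
def closed_loop_cost(horizon, clock_mhz):
    # toy model: regulation improves with longer horizons and faster clocks
    return 1.0 / horizon + 50.0 / clock_mhz

def power_cost(horizon, clock_mhz):
    # toy model: dynamic power grows with workload and clock frequency
    return 0.01 * horizon * clock_mhz

designs = [(N, f) for N in (2, 5, 10, 20) for f in (10, 50, 100, 200)]
for lam in (0.0, 0.1, 1.0):
    best = min(designs, key=lambda d: closed_loop_cost(*d) + lam * power_cost(*d))
    print(f"lambda = {lam:3.1f} -> horizon = {best[0]:2d}, clock = {best[1]:3d} MHz")
```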

    Number representation in predictive control

    In predictive control, a nonlinear optimization problem has to be solved at each sampling instant. Solving this optimization problem in a computationally efficient and numerically reliable fashion on an embedded system is a challenging task. This paper presents results that reduce the computational requirements for solving fundamental problems that arise when implementing predictive controllers in finite-precision arithmetic. By employing novel formulations and tailor-made optimization algorithms, this paper shows that computational resources can be reduced using very low-precision arithmetic. We also present new mathematical results that enable computational savings to be made in the most numerically critical part of an optimization solver, namely the linear algebra kernel, using fixed-point arithmetic. Our theoretical results are supported by numerical results from implementations on a Field-Programmable Gate Array (FPGA).
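
    The kind of roundoff behaviour at stake in the linear algebra kernel can be seen with a small fixed-point experiment: quantize the data of a matrix-vector product to a given number of fractional bits and compare the result against double precision. This is an illustrative sketch only, not the paper's tailored algorithms or its fixed-point error analysis.

```python
import numpy as np

def to_fixed(x, frac_bits):
    # round to the nearest representable value with 2**-frac_bits resolution
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 20))
x = rng.standard_normal(20)
exact = A @ x                                  # double-precision reference

for b in (8, 12, 16, 24):
    y = to_fixed(to_fixed(A, b) @ to_fixed(x, b), b)
    err = np.max(np.abs(y - exact))
    print(f"{b:2d} fractional bits -> max abs error {err:.2e}")
```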

    Fixed-point implementation of a proximal Newton method for embedded model predictive control (I)

    Extending the success of model predictive control (MPC) technologies in embedded applications heavily depends on the capability of improving quadratic programming (QP) solvers. Improvements can be made in two directions: better algorithms that reduce the number of arithmetic operations required to compute a solution, and more efficient architectures in terms of speed, power consumption, memory occupancy and cost. This paper proposes a fixed-point implementation of a proximal Newton method to solve optimization problems arising in input-constrained MPC. The main advantages of the algorithm are its fast asymptotic convergence rate and its relatively low computational cost per iteration, since only the solution of a small linear system is required. A detailed analysis of the effects of quantization errors is presented, showing the robustness of the algorithm with respect to finite-precision computations. A hardware implementation with specific optimizations to minimize computation times and memory footprint is also described, demonstrating the viability of low-cost, low-power controllers for high-bandwidth MPC applications. The algorithm is shown to be very effective for embedded MPC applications through a number of simulation experiments.
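
    A generic proximal Newton iteration for a smooth cost with box (input) constraints can be sketched as follows: at each outer step, a quadratic model of the cost is minimized over the box, here approximately by projected gradient. The cost functions below are placeholders, and the sketch runs in double precision rather than the paper's fixed-point arithmetic.

```python
import numpy as np

def project_box(u, lb, ub):
    return np.minimum(np.maximum(u, lb), ub)

def prox_newton(grad_f, hess_f, u, lb, ub, outer=20, inner=50):
    for _ in range(outer):
        g, H = grad_f(u), hess_f(u)
        step = 1.0 / np.linalg.norm(H, 2)              # 1/L for the quadratic model
        d = np.zeros_like(u)
        for _ in range(inner):
            # projected gradient on the model  g'd + 0.5 d'Hd  s.t.  u + d in the box
            d = project_box(u + d - step * (g + H @ d), lb, ub) - u
        u = u + d                                      # full step, no line search
    return u

# toy input-constrained quadratic cost (placeholder, not the paper's MPC problem)
H0 = np.array([[3.0, 0.5], [0.5, 2.0]])
q0 = np.array([-1.0, 1.0])
u_star = prox_newton(lambda u: H0 @ u + q0, lambda u: H0,
                     np.zeros(2), -0.2 * np.ones(2), 0.2 * np.ones(2))
print(u_star)
```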

    Non-Linear Model Predictive Control with Adaptive Time-Mesh Refinement

    In this paper, we present a novel solution for real-time Non-Linear Model Predictive Control (NMPC) exploiting a time-mesh refinement strategy. The proposed controller formulates the Optimal Control Problem (OCP) in terms of flat outputs over an adaptive lattice. In common approximated OCP solutions, the number of discretization points composing the lattice represents a critical upper bound for real-time applications. The proposed NMPC-based technique refines the initially uniform time horizon by adding time steps with a sampling criterion that aims to reduce the discretization error. This enables a higher accuracy in the initial part of the receding horizon, which is more relevant to NMPC, while keeping the number of discretization points bounded. By combining this feature with an efficient Least Squares formulation, our solver is also extremely time-efficient, generating trajectories of multiple seconds within only a few milliseconds. The performance of the proposed approach has been validated in a high-fidelity simulation environment, using a UAV platform. We also released our implementation as open-source C++ code. Comment: In: 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR 2018).
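
    The refinement idea described above can be sketched in a few lines: estimate the local discretization error on each interval (here, one explicit Euler step versus two half steps) and insert a midpoint where the estimate is largest, until a tolerance or a cap on the number of points is reached. The dynamics, input signal and tolerances below are placeholders, not the flat-output OCP formulation of the paper.

```python
import numpy as np

def f(x, u):
    return np.array([x[1], u])                 # placeholder dynamics: double integrator

def local_error(x, u, dt):
    full = x + dt * f(x, u)                    # one Euler step
    half = x + 0.5 * dt * f(x, u)
    two = half + 0.5 * dt * f(half, u)         # two half steps
    return np.linalg.norm(full - two)

def refine_mesh(t, x0, u_of_t, tol=1e-3, max_pts=50):
    t = list(t)
    while len(t) < max_pts:
        x, errs = np.array(x0, dtype=float), []
        for k in range(len(t) - 1):
            dt, u = t[k + 1] - t[k], u_of_t(t[k])
            errs.append(local_error(x, u, dt))
            x = x + dt * f(x, u)               # propagate on the current mesh
        k_worst = int(np.argmax(errs))
        if errs[k_worst] < tol:
            break
        t.insert(k_worst + 1, 0.5 * (t[k_worst] + t[k_worst + 1]))
    return np.array(t)

mesh = refine_mesh(np.linspace(0.0, 2.0, 5), [0.0, 0.0], lambda t: np.sin(t))
print(mesh)
```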