
    Stabilizing Stochastic Predictive Control under Bernoulli Dropouts

    This article presents tractable and recursively feasible optimization-based controllers for stochastic linear systems with bounded controls. The stochastic noise in the plant is assumed to be additive, zero-mean, and fourth-moment bounded, and the control values are transmitted over an erasure channel. Three different transmission protocols are proposed, each with different requirements on the storage and computational facilities available at the actuator. We optimize a suitable stochastic cost function, accounting for the effects of both the stochastic noise and the packet dropouts, over affine saturated disturbance feedback policies. The proposed controllers ensure mean-square boundedness of the closed-loop states for all positive values of the control bounds and any non-zero probability of successful transmission over the noisy control channel.
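
    As a rough illustration of the setting (not the paper's controller), the sketch below simulates a linear plant with additive noise, a saturated state-feedback law, and Bernoulli packet dropouts on the control channel. The matrices A and B, the gain K, the bound u_max, and the success probability p_success are placeholder values, and dropped packets are handled with a simple zero-input rule.

```python
# Illustrative closed-loop simulation: bounded control sent over a Bernoulli
# erasure channel with additive, zero-mean process noise. All numbers are
# placeholders chosen for the example, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discrete-time double integrator
B = np.array([[0.0],
              [0.1]])
K = np.array([[-1.2, -1.6]])        # a stabilizing gain for the nominal plant
u_max = 1.0                         # hard bound on the transmitted control
p_success = 0.7                     # probability a packet reaches the actuator

x = np.array([5.0, 0.0])
for t in range(200):
    u = np.clip(K @ x, -u_max, u_max)                  # saturated control
    delivered = rng.random() < p_success               # Bernoulli dropout
    u_applied = u if delivered else np.zeros_like(u)   # zero-input on dropout
    w = 0.05 * rng.standard_normal(2)                  # additive zero-mean noise
    x = A @ x + B @ u_applied + w
print("final state:", x)
```

    The paper's three protocols differ in what the actuator stores and computes when a packet is lost; the zero-input rule above is only the crudest such choice, used here to show where the dropout enters the closed loop.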

    Min–max MPC using a tractable QP problem

    Min–max model predictive controllers (MMMPC) suffer from a heavy computational burden that is often circumvented by using approximate solutions or upper bounds on the worst possible value of the performance index. This paper proposes a computationally efficient MMMPC strategy in which a close approximation of the solution of the min–max problem is computed by solving a quadratic programming problem. The overall computational burden is much lower than that of the min–max problem, and the resulting control is shown to have guaranteed stability. A simulation example is given in the paper.
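
    For reference, a generic statement of the min–max problem whose solution is approximated is sketched below; the symbols (Q, R, P, disturbance set W, input set U) are illustrative notation, not the paper's formulation or its QP approximation.

```latex
% Generic min-max MPC problem over a horizon N with bounded disturbances
% (illustrative notation only)
\[
  \min_{u_0,\dots,u_{N-1}} \;\; \max_{w_0,\dots,w_{N-1} \in \mathcal{W}}
  \;\sum_{k=0}^{N-1} \bigl( x_k^{\top} Q x_k + u_k^{\top} R u_k \bigr)
  + x_N^{\top} P x_N
\]
\[
  \text{subject to}\quad x_{k+1} = A x_k + B u_k + D w_k, \qquad
  u_k \in \mathcal{U}, \quad k = 0,\dots,N-1 .
\]
```

    The inner maximization over the bounded disturbance set is what makes the exact problem expensive; the contribution described above is to approximate its solution closely with a single QP.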

    Computational burden reduction in Min-Max MPC

    Min–max model predictive control (MMMPC) is one of the strategies used to control plants subject to bounded uncertainties. The implementation of MMMPC suffers from a large computational burden due to the complex numerical optimization problem that has to be solved at every sampling time. This paper shows how to overcome this by transforming the original problem into a reduced min–max problem whose solution is much simpler. In this way, the range of processes to which MMMPC can be applied is considerably broadened. Proofs based on the properties of the cost function, together with simulation examples, are given in the paper.
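
    To make the computational burden concrete, the sketch below evaluates the worst-case cost of a fixed input sequence by brute force: with a box-bounded disturbance the maximum is attained at a vertex, so a naive evaluation enumerates 2^(N·m) disturbance sequences. All matrices, horizons, and bounds are placeholders; this is not the paper's reduced problem.

```python
# Why exact min-max MPC is expensive: enumerating the vertices of the
# disturbance box grows exponentially with the horizon. Placeholder data.
import itertools
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
D = 0.05 * np.eye(2)
Q, R = np.eye(2), np.array([[0.1]])
N, eps = 4, 1.0
x0 = np.array([1.0, 0.0])
u_seq = np.zeros((N, 1))             # fixed candidate input sequence

def cost(u_seq, w_seq):
    x, J = x0.copy(), 0.0
    for u, w in zip(u_seq, w_seq):
        J += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + D @ w
    return J + x @ Q @ x             # terminal cost

# enumerate all 2**(N*2) vertex disturbance sequences of the box [-eps, eps]^2
vertices = [np.array(v).reshape(N, 2) * eps
            for v in itertools.product([-1, 1], repeat=N * 2)]
worst = max(cost(u_seq, w) for w in vertices)
print(f"{len(vertices)} vertex sequences, worst-case cost = {worst:.2f}")
```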

    Sum-of-Squares approach to feedback control of laminar wake flows

    A novel nonlinear feedback control design methodology for incompressible fluid flows, aimed at optimising long-time averages of flow quantities, is presented. It applies to reduced-order finite-dimensional models of fluid flows, expressed as a set of first-order nonlinear ordinary differential equations whose right-hand side is polynomial in the state variables and in the controls. The key idea, first discussed in Chernyshenko et al. 2014, Philos. T. Roy. Soc. 372(2020), is that the difficulties of treating and optimising long-time averages of a cost are relaxed by using upper/lower bounds of such averages as the objective function. In this setting, control design reduces to finding a feedback controller that optimises the bound, subject to a polynomial inequality constraint involving the cost function, the nonlinear system, the controller itself and a tunable polynomial function. A numerically tractable approach to the solution of such optimisation problems, based on Sum-of-Squares techniques and semidefinite programming, is proposed. To showcase the methodology, the mitigation of the fluctuation kinetic energy in the unsteady wake behind a circular cylinder in the laminar regime at Re=100, via controlled angular motions of the surface, is numerically investigated. A compact reduced-order model that resolves the long-term behaviour of the fluid flow and the effects of actuation is derived using Proper Orthogonal Decomposition and Galerkin projection. In a full-information setting, feedback controllers are then designed to reduce the long-time average of the kinetic energy associated with the limit cycle. These controllers are then implemented in direct numerical simulations of the actuated flow. Control performance, energy efficiency, and the physical control mechanisms identified are analysed. Key elements, implications and future work are discussed.
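
    The bounding idea can be stated compactly. The notation below is generic (state x, polynomial dynamics f, feedback u(x), instantaneous cost Φ, tunable polynomial V) and is a sketch of the approach rather than the paper's exact formulation.

```latex
% If a tunable polynomial V(x) and a constant C satisfy, for all x,
\[
  \Phi\bigl(x, u(x)\bigr) + \nabla V(x) \cdot f\bigl(x, u(x)\bigr) \;\le\; C ,
\]
% then the long-time average of \Phi along trajectories of
% \dot{x} = f(x, u(x)) is bounded above by C. Strengthening the inequality
% by requiring C - \Phi - \nabla V \cdot f to be a sum of squares turns the
% joint search over V, C and the controller coefficients into a problem
% amenable to semidefinite programming.
```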

    State-Space Interpretation of Model Predictive Control

    A model predictive control technique based on a step response model is developed using state estimation techniques. The standard step response model is extended so that integrating systems can be treated within the same framework. Based on the modified step response model, it is shown how state estimation techniques from stochastic optimal control can be used to construct the optimal prediction vector without introducing significant additional numerical complexity. In the case of integrated or doubly integrated white-noise disturbances filtered through general first-order dynamics, together with white measurement noise, the optimal filter gain is parametrized explicitly in terms of a single parameter between 0 and 1, thus removing the need to solve a Riccati equation and equipping the control system with useful on-line tuning parameters. Parallels are drawn to existing MPC techniques such as Dynamic Matrix Control (DMC), Internal Model Control (IMC), and Generalized Predictive Control (GPC).
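
    The flavour of the prediction update can be sketched as follows. This is a generic DMC-style step-response predictor with a scalar correction gain f in (0, 1) standing in for the explicitly parametrized filter gain described above; the step-response coefficients and all numbers are placeholders, not the paper's filter.

```python
# Generic step-response prediction with a scalar measurement correction
# (a sketch in the spirit of DMC, not the paper's exact filter).
import numpy as np

s = 1.0 - np.exp(-0.2 * np.arange(1, 31))   # placeholder step-response coefficients
n = len(s)
y_pred = np.zeros(n)                        # predicted outputs over the horizon

def update_prediction(y_pred, du, y_meas, f=0.5):
    # shift the prediction vector one step forward (last value repeated)
    y_shift = np.append(y_pred[1:], y_pred[-1])
    # add the effect of the new input move through the step response
    y_new = y_shift + s * du
    # correct with the measured output error, scaled by the gain f in (0, 1)
    error = y_meas - y_pred[0]
    return y_new + f * error

y_pred = update_prediction(y_pred, du=0.1, y_meas=0.02)
print(y_pred[:5])
```

    Roughly speaking, f = 1 recovers the standard DMC constant output-disturbance correction, while smaller values of f filter the measurement noise more heavily, which is the kind of single on-line tuning knob the abstract refers to.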

    Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems

    Learning-based control algorithms require data collection with abundant supervision for training. Safe exploration algorithms ensure the safety of this data collection process even when only partial knowledge is available. We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained stochastic optimal control with dynamics learning and feedback control. We derive an iterative convex optimization algorithm that solves an Information-cost Stochastic Nonlinear Optimal Control (Info-SNOC) problem. The optimization objective encodes both optimal performance and exploration for learning, and safety is incorporated as distributionally robust chance constraints. The dynamics are predicted from a robust regression model that is learned from data. The Info-SNOC algorithm is used to compute a sub-optimal pool of safe motion plans that aid in exploration for learning unknown residual dynamics under safety constraints. A stable feedback controller is used to execute the motion plan and collect data for model learning. We prove the safety of rollouts from our exploration method and the reduction in uncertainty over epochs, thereby guaranteeing the consistency of our learning method. We validate the effectiveness of Info-SNOC by designing and implementing a pool of safe trajectories for a planar robot. We demonstrate that our approach has a higher success rate in ensuring safety when compared to a deterministic trajectory optimization approach.
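
    As background for the distributionally robust chance constraints, the sketch below shows the standard moment-based tightening of a single half-space constraint into a deterministic test; the function name, the example numbers, and the mean/covariance values are illustrative and are not taken from Info-SNOC.

```python
# Distributionally robust tightening of P(a^T x <= b) >= 1 - delta when only
# the mean mu and covariance Sigma of x are known: the constraint holds for
# every distribution with those moments if
#     a^T mu + sqrt((1 - delta) / delta) * sqrt(a^T Sigma a) <= b.
import numpy as np

def dr_chance_constraint_ok(a, b, mu, Sigma, delta=0.05):
    margin = np.sqrt((1.0 - delta) / delta) * np.sqrt(a @ Sigma @ a)
    return a @ mu + margin <= b

a = np.array([1.0, 0.0])           # keep the x-position below b
b = 2.0
mu = np.array([1.0, 0.5])          # predicted mean state (placeholder)
Sigma = 0.01 * np.eye(2)           # predicted state covariance (placeholder)
print(dr_chance_constraint_ok(a, b, mu, Sigma))
```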