
    Numerical Simulations on Feasibility of Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization

    Random dither quantization achieves much better performance than simple uniform quantization in the design of quantized control systems. Motivated by this fact, a stochastic model predictive control method, in which a performance index is minimized subject to probabilistic constraints on the system states, has been proposed for linear feedback control systems with random dither quantization. In other words, a method for solving optimal control problems subject to probabilistic state constraints for linear discrete-time control systems with random dither quantization has already been established. To the best of our knowledge, however, the feasibility of such optimal control problems has not yet been studied. Our objective in this paper is to investigate the feasibility of stochastic model predictive control problems for linear discrete-time control systems with random dither quantization. To this end, we provide numerical simulation results that verify this feasibility.
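The dithering scheme the abstract builds on can be sketched in a few lines. This is a minimal illustration of subtractive random dither around a uniform quantizer, not the paper's control setup; the step size and signal are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(x, delta):
    # round to the nearest multiple of the step size delta
    return delta * np.round(np.asarray(x) / delta)

def dithered_quantize(x, delta, rng):
    # subtractive random dither: add uniform noise before quantizing and
    # subtract it afterwards; the residual error is then uniform on
    # [-delta/2, delta/2] and statistically independent of the input,
    # which is what makes the stochastic analysis in such papers tractable
    d = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return uniform_quantize(np.asarray(x) + d, delta) - d

delta = 0.5
x = np.linspace(-1.0, 1.0, 10_000)
err = dithered_quantize(x, delta, rng) - x
print(np.abs(err).max())  # never exceeds delta/2 = 0.25
```

The error bound is the same as for plain uniform quantization; the benefit of dither lies in the error's distribution, not its magnitude.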

    A Comparative Study of Stochastic Model Predictive Controllers

    A comparative study of two state-of-the-art stochastic model predictive controllers for linear systems with parametric and additive uncertainties is presented. On the one hand, Stochastic Model Predictive Control (SMPC) is based on analytical methods and solves an optimal control problem (OCP) similar to that of classic constrained Model Predictive Control (MPC); SMPC defines probabilistic constraints on the states, which are transformed into equivalent deterministic ones. On the other hand, Scenario-based Model Predictive Control (SCMPC) solves an OCP for a specified number of random realizations of the uncertainties, called scenarios. In this paper, classic MPC, SMPC and SCMPC are compared through two numerical examples. Using several Monte Carlo simulations, the performance of the three controllers is assessed by several criteria, such as the number of successful runs, the number of constraint violations, the integral absolute error and the computational cost. Moreover, the authors developed a Stochastic Model Predictive Control Toolbox, available on MATLAB Central, which simulates SMPC or SCMPC controllers for multivariable linear systems with additive disturbances. This software was used to carry out part of the simulations of the numerical examples in this article and can be used to reproduce the results. Gonzalez, E.; Sanchís Saez, J.; Garcia-Nieto, S.; Salcedo-Romero-De-Ávila, J. (2020). A Comparative Study of Stochastic Model Predictive Controllers. Electronics 9(12):1-22. https://doi.org/10.3390/electronics9122078
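The scenario-based idea described above can be illustrated with a toy one-step problem: enforce the constraint for every sampled disturbance realization, and the binding scenario determines the admissible input. This is a hand-rolled scalar sketch with made-up numbers, not the paper's toolbox.

```python
import numpy as np

rng = np.random.default_rng(1)

# scalar system x+ = a*x + b*u + w, with state constraint x+ <= x_max
a, b, x_max = 1.0, 1.0, 1.0
x0 = 0.5
u_des = 1.2          # unconstrained (desired) input

# draw N disturbance scenarios; enforcing the constraint for every
# sampled scenario yields a probabilistic guarantee that improves with N
N = 100
w = rng.normal(0.0, 0.1, size=N)

# the binding scenario is the largest sampled disturbance
u_bound = (x_max - a * x0 - w.max()) / b
u = min(u_des, u_bound)
print(u)  # input saturated so that all 100 scenarios stay feasible
```

In a full SCMPC scheme the same idea is applied over a prediction horizon inside an optimization problem, and scenario-counting theory relates N to the achieved violation probability.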

    Model Predictive Control of Stochastic Linear Systems with Probability Constraints

    This paper presents a strategy for computing model predictive control of linear Gaussian noise systems with probability constraints. As usual, constraints are imposed on the system state and control input. The novelty lies in setting bounds on the underlying cumulative probability distribution and showing that the model predictive control law can be computed efficiently through these novel bounds; an application confirms this assertion. Indeed, real-time experiments were carried out to control a direct current (DC) motor, and the corresponding data show the effectiveness and usefulness of the approach.
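For the Gaussian case, the standard way such probabilistic state constraints become deterministic is via the inverse cumulative distribution function. A minimal sketch (generic textbook tightening, not necessarily the exact bounds of this paper):

```python
from statistics import NormalDist

def tightened_bound(x_max, sigma, eps):
    # For x ~ N(mean, sigma^2), the chance constraint
    #   P(x <= x_max) >= 1 - eps
    # is equivalent to the deterministic bound
    #   mean <= x_max - sigma * z_{1-eps},
    # where z_{1-eps} is the standard normal quantile.
    z = NormalDist().inv_cdf(1.0 - eps)
    return x_max - sigma * z

# 5% allowed violation probability, unit constraint, sigma = 0.2
b = tightened_bound(1.0, 0.2, 0.05)
print(round(b, 3))  # roughly 0.671, since z_{0.95} is about 1.645
```

The MPC then optimizes over the predicted mean subject to the tightened bound, which is an ordinary deterministic constraint.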

    Distributionally Robust Chance Constrained Data-enabled Predictive Control

    We study the problem of finite-time constrained optimal control of unknown stochastic linear time-invariant systems, which is the key ingredient of a predictive control algorithm -- albeit one that typically has access to a model. We propose a novel distributionally robust data-enabled predictive control (DeePC) algorithm which uses noise-corrupted input/output data to predict future trajectories and compute optimal control inputs while satisfying output chance constraints. The algorithm is based on (i) a non-parametric representation of the subspace spanning the system behaviour, in which past trajectories are arranged in Page or Hankel matrices, and (ii) a distributionally robust optimization formulation which gives rise to strong probabilistic performance guarantees. We show that for certain objective functions, DeePC exhibits strong out-of-sample performance while respecting constraints with high probability. The algorithm provides an end-to-end approach to control design for unknown stochastic linear time-invariant systems. We illustrate the closed-loop performance of DeePC in an aerial robotics case study.
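The Hankel-matrix representation at the core of DeePC is simple to construct: every column is a length-L window sliding along one measured trajectory. A toy sketch for a scalar signal (the actual algorithm stacks input and output channels and splits the matrix into "past" and "future" block rows):

```python
import numpy as np

def hankel(w, L):
    """Arrange a measured trajectory w (T samples) into a depth-L Hankel
    matrix whose columns are all T - L + 1 length-L windows of w."""
    w = np.asarray(w)
    T = len(w)
    return np.column_stack([w[i:i + L] for i in range(T - L + 1)])

w = np.arange(6)          # toy scalar trajectory 0..5
H = hankel(w, 3)
print(H)
# [[0 1 2 3]
#  [1 2 3 4]
#  [2 3 4 5]]
```

Under persistency-of-excitation conditions, the column span of such a matrix contains every length-L trajectory the system can produce, which is what lets DeePC predict without an explicit model.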

    Control Lyapunov-Barrier Function Based Model Predictive Control for Stochastic Nonlinear Affine Systems

    A stochastic model predictive control (MPC) framework with stability and feasibility guarantees is presented in this paper for nonlinear affine systems. We first introduce the concept of a stochastic control Lyapunov-barrier function (CLBF) and provide a method to construct a CLBF by combining an unconstrained control Lyapunov function (CLF) with control barrier functions. The unconstrained CLF is obtained from the corresponding semi-linear system through dynamic feedback linearization. Based on the constructed CLBF, we use a sampled-data MPC framework to handle state and input constraints and to analyze the stability of the closed-loop system. Moreover, event-triggering mechanisms are integrated into the MPC framework to improve performance during sampling intervals. The proposed CLBF-based stochastic MPC is validated via an obstacle avoidance example.Comment: 21 pages, 6 figures

    Stochastic Model Predictive Control with Dynamic Chance Constraints

    In this work, we introduce a stochastic model predictive control scheme for dynamic chance constraints. We consider linear discrete-time systems affected by unbounded additive stochastic disturbances and subject to chance constraints defined by time-varying probabilities with a common, fixed lower bound. By utilizing probabilistic reachable tubes with dynamic cross-sections, we reformulate the stochastic optimization problem as a deterministic tube-based MPC problem with time-varying tightened constraints. We show that the resulting deterministic MPC formulation with dynamic tightened constraints is recursively feasible and that the closed-loop stochastic system satisfies the corresponding dynamic chance constraints. In addition, we introduce a novel implementation using zonotopes to describe the tightening analytically. Finally, an example illustrates the benefits of the developed approach to stochastic MPC with dynamic chance constraints.Comment: 8 pages, 3 figures
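The time-varying tightening described above can be illustrated for a scalar error system: propagate the variance of the probabilistic reachable tube and shrink the state bound by a step-dependent quantile margin. The dynamics, noise level, and probability schedule below are made-up demo values, and the Gaussian quantile stands in for whatever probabilistic set description the paper uses.

```python
import math
from statistics import NormalDist

# scalar error dynamics e+ = a*e + w, w ~ N(0, q): propagate the tube
# cross-section variance and tighten the state bound x <= x_max by a
# time-varying probabilistic margin matching each step's level p_k
a, q = 0.8, 0.04
p = [0.9, 0.8, 0.8, 0.95]     # time-varying chance-constraint levels
x_max = 1.0

var = 0.0
tightened = []
for pk in p:
    var = a * a * var + q      # tube variance at the next step
    margin = NormalDist().inv_cdf(pk) * math.sqrt(var)
    tightened.append(x_max - margin)

print([round(t, 3) for t in tightened])
```

The nominal MPC then enforces the tightened bounds, so satisfying the deterministic problem implies the original chance constraints hold at their respective levels.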

    Stochastic Model Predictive Control Using Simplified Affine Disturbance Feedback for Chance-Constrained Systems

    This letter covers the model predictive control of linear discrete-time systems subject to stochastic additive disturbances and chance constraints on their state and control input. We propose a simplified control parameterization under the framework of affine disturbance feedback, and we show that our method is equivalent to parameterization over the family of state feedback policies. Using our method, the associated finite-horizon optimization problem can be solved efficiently, with a slight increase in conservativeness compared with the conventional affine disturbance feedback parameterization.
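In affine disturbance feedback, the input sequence over a horizon N is parameterized as u = h + M w with M strictly block lower triangular, so each input only uses past disturbances. One way to "simplify" such a parameterization is to share gains across time lags (a Toeplitz structure); whether this matches the letter's exact scheme is an assumption, and the gains below are hypothetical.

```python
import numpy as np

# Affine disturbance feedback over horizon N: u = h + M w, where M is
# strictly lower triangular (causality: u_k depends only on w_0..w_{k-1}).
# Sharing one gain per time lag gives a Toeplitz M, shrinking the number
# of decision variables from N*(N-1)/2 to N-1 in the scalar case.
N = 4
gains = np.array([0.5, 0.2, 0.1])   # hypothetical gains for lags 1..3

M = np.zeros((N, N))
for k in range(N):
    for j in range(k):
        M[k, j] = gains[k - j - 1]

print(M)
# row k reads: [gains for lags k..1, then zeros] -- strictly lower triangular
```

The optimizer then searches over h and the free entries of M, which keeps the finite-horizon problem convex.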

    Data-driven stochastic model predictive control

    We propose a novel data-driven stochastic model predictive control (MPC) algorithm to control linear time-invariant systems with additive stochastic disturbances in the dynamics. The scheme centers around repeated predictions and computations of optimal control inputs based on a non-parametric representation of the space of all possible trajectories, using the fundamental lemma from behavioral systems theory. This representation is based on a single measured input-state-disturbance trajectory generated by persistently exciting inputs and does not require any further identification step. Based on stochastic MPC ideas, we enforce the satisfaction of state constraints with a pre-specified probability level, allowing for a systematic trade-off between control performance and constraint satisfaction. The proposed data-driven stochastic MPC algorithm enables efficient control where robust methods are too conservative, which we demonstrate in a simulation example.Comment: This work has been submitted to the L4DC 2022 conference
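The persistency-of-excitation requirement mentioned above is a rank condition on a Hankel matrix of the input data (Willems' fundamental lemma). A minimal check, using a toy scalar input sequence:

```python
import numpy as np

rng = np.random.default_rng(2)

def is_persistently_exciting(u, order):
    """Willems-style condition: the depth-`order` Hankel matrix built
    from the input sequence must have full row rank."""
    T = len(u)
    H = np.column_stack([u[i:i + order] for i in range(T - order + 1)])
    return np.linalg.matrix_rank(H) == order

u_random = rng.standard_normal(50)   # rich excitation
u_constant = np.ones(50)             # Hankel matrix has rank 1
print(is_persistently_exciting(u_random, 5))    # True
print(is_persistently_exciting(u_constant, 5))  # False
```

If the recorded input fails this check, the trajectory library does not span the system's behavior and the data-driven predictor is ill-posed, regardless of how much data was collected.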

    A stochastic MPC scheme for distributed systems with multiplicative uncertainty

    This paper presents a novel Distributed Stochastic Model Predictive Control algorithm for networks of linear systems with multiplicative uncertainties and local chance constraints on the states and control inputs. The chance constraints are approximated via the Cantelli-Chebyshev inequality by means of the expected value and covariance. The algorithm is based on the distributed Alternating Direction Method of Multipliers and yields a distributedly implementable, recursively feasible and mean-square stable control scheme. The aforementioned properties are guaranteed through a distributed invariant set and distributed terminal constraints for the mean and covariance. The paper closes with an illustrative numerical example for a system with three interconnected subsystems, in which the distributed design procedure is benchmarked against a centralized approach.Comment: 10 pages, 2 figures
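The Cantelli-Chebyshev approximation used above is distribution-free: it needs only the mean and variance, at the price of conservatism. A minimal sketch of the resulting tightening margin (generic textbook form, with arbitrary demo numbers):

```python
import math

def cantelli_margin(sigma, eps):
    # Cantelli's inequality: P(x - mu >= lam * sigma) <= 1 / (1 + lam^2).
    # Setting 1 / (1 + lam^2) = eps gives lam = sqrt((1 - eps) / eps),
    # a distribution-free (hence conservative) constraint-tightening margin
    # requiring only the first two moments of x.
    lam = math.sqrt((1.0 - eps) / eps)
    return lam * sigma

# 5% violation level, sigma = 0.2
print(round(cantelli_margin(0.2, 0.05), 3))  # 0.872
```

For comparison, the Gaussian quantile margin at the same level would be about 0.329, which shows how much conservatism the moment-based bound trades for its distributional robustness.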

    Data-driven Economic NMPC using Reinforcement Learning

    Reinforcement Learning (RL) is a powerful tool for performing data-driven optimal control without relying on a model of the system. However, RL struggles to provide hard guarantees on the behavior of the resulting control scheme. In contrast, Nonlinear Model Predictive Control (NMPC) and Economic NMPC (ENMPC) are standard tools for the closed-loop optimal control of complex systems with constraints and limitations, and they benefit from a rich theory for assessing their closed-loop behavior. Unfortunately, the performance of (E)NMPC hinges on the quality of the model underlying the control scheme. In this paper, we show that an (E)NMPC scheme can be tuned to deliver the optimal policy of the real system even when using a wrong model. This result also holds for real systems with stochastic dynamics. It entails that ENMPC can be used as a new type of function approximator within RL. Furthermore, we investigate our results in the context of ENMPC and formally connect them to the concept of dissipativity, which is central to ENMPC stability. Finally, we detail how these results can be used to deploy classic RL tools for tuning (E)NMPC schemes. We apply these tools to both a classical linear MPC setting and a standard nonlinear example from the ENMPC literature.