
    On the convergence of stochastic MPC to terminal modes of operation

    Full text link
    The stability of stochastic Model Predictive Control (MPC) subject to additive disturbances is often demonstrated in the literature by constructing Lyapunov-like inequalities that guarantee closed-loop performance bounds and boundedness of the state, but convergence to a terminal control law is typically not shown. In this work we use results on general state space Markov chains to find conditions that guarantee convergence of disturbed nonlinear systems to terminal modes of operation, so that they converge in probability to a priori known terminal linear feedback laws and achieve time-average performance equal to that of the terminal control law. We discuss implications for the convergence of control laws in stochastic MPC formulations; in particular, we prove convergence for two formulations of stochastic MPC.
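
    As a concrete illustration of the time-average-performance claim above, the following sketch computes a terminal LQR law, simulates the disturbed closed loop under it, and compares the empirical time-average stage cost with its stationary expected value. The system matrices, weights and noise covariance are illustrative assumptions, not taken from the paper, and no stochastic MPC optimisation is solved here.

        # Sketch (assumed data): terminal LQR law for a disturbed linear system and
        # comparison of the empirical time-average stage cost with its stationary value.
        import numpy as np
        from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

        A = np.array([[1.0, 1.0], [0.0, 1.0]])
        B = np.array([[0.5], [1.0]])
        Q, R = np.eye(2), np.array([[1.0]])
        W = 0.01 * np.eye(2)                                  # disturbance covariance (assumed)

        P = solve_discrete_are(A, B, Q, R)
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # terminal feedback u = Kx
        Acl = A + B @ K

        Sigma = solve_discrete_lyapunov(Acl, W)               # stationary state covariance
        l_ss = np.trace((Q + K.T @ R @ K) @ Sigma)            # expected stationary stage cost

        rng = np.random.default_rng(0)
        T = 100_000
        noise = rng.multivariate_normal(np.zeros(2), W, size=T)
        x, total = np.zeros(2), 0.0
        for w in noise:
            u = K @ x
            total += x @ Q @ x + u @ R @ u
            x = Acl @ x + w

        print(f"empirical time-average stage cost: {total / T:.4f}")
        print(f"stationary expected stage cost:    {l_ss:.4f}")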

    Stability for Receding-horizon Stochastic Model Predictive Control

    Full text link
    A stochastic model predictive control (SMPC) approach is presented for discrete-time linear systems with arbitrary time-invariant probabilistic uncertainties and additive Gaussian process noise. Closed-loop stability of the SMPC approach is established by appropriate selection of the cost function. Polynomial chaos is used for uncertainty propagation through system dynamics. The performance of the SMPC approach is demonstrated using the Van de Vusse reactions.
    Comment: American Control Conference (ACC) 201
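
    A minimal sketch of the uncertainty-propagation step: evaluating the model at Gauss-Hermite quadrature nodes, which is the numerical workhorse behind non-intrusive polynomial-chaos methods. The scalar plant, the Gaussian parameter and all numbers below are assumptions for illustration, not the Van de Vusse model from the paper.

        # Sketch: propagate a Gaussian-uncertain parameter through linear dynamics
        # with Gauss-Hermite quadrature and cross-check against Monte Carlo.
        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss

        a_mean, a_std = 0.8, 0.05          # uncertain pole a ~ N(0.8, 0.05^2) (assumed)
        u, N, x0 = 0.1, 20, 1.0            # constant input, horizon, initial state (assumed)

        def rollout(a):
            """Simulate x_{k+1} = a*x_k + u for N steps and return x_N."""
            x = x0
            for _ in range(N):
                x = a * x + u
            return x

        nodes, weights = hermegauss(10)    # probabilists' Hermite nodes and weights
        weights = weights / weights.sum()  # normalise so they act as probabilities

        xN = np.array([rollout(a_mean + a_std * z) for z in nodes])
        mean = weights @ xN
        var = weights @ (xN - mean) ** 2
        print(f"quadrature:  E[x_N] = {mean:.4f}, Var[x_N] = {var:.6f}")

        rng = np.random.default_rng(1)
        samples = np.array([rollout(a) for a in rng.normal(a_mean, a_std, 100_000)])
        print(f"Monte Carlo: E[x_N] = {samples.mean():.4f}, Var[x_N] = {samples.var():.6f}")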

    An Improved Constraint-Tightening Approach for Stochastic MPC

    Full text link
    The problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving the average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity, is addressed. We propose a novel, less restrictive scheme which is based on considering stability and recursive feasibility separately. Through an explicit first step constraint we guarantee recursive feasibility. In particular we guarantee the existence of a feasible input trajectory at each time instant, but we only require that the input sequence computed at time k remains feasible at time k+1 for most disturbances but not necessarily for all, which suffices for stability. To overcome the computational complexity of probabilistic constraints, we propose an offline constraint-tightening procedure, which can be efficiently solved via a sampling approach to the desired accuracy. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with terminal region. A numerical example, which provides a comparison with classical, recursively feasible Stochastic MPC and Robust MPC, shows the efficacy of the proposed approach.
    Comment: Paper has been submitted to ACC 201
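
    The offline sampling step can be pictured with a generic tightening sketch like the one below (not the paper's exact procedure): disturbance samples are propagated through assumed closed-loop error dynamics, and a halfspace state constraint is tightened by the empirical (1 - epsilon) quantile of the error contribution at each prediction step.

        # Sketch (assumed dynamics, feedback and disturbance set): offline
        # sampling-based tightening of the constraint c^T x <= b over the horizon.
        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.005], [0.1]])
        K = np.array([[-5.0, -3.0]])               # stabilising error feedback (assumed)
        Acl = A + B @ K

        c, b = np.array([1.0, 0.0]), 2.0           # state constraint c^T x <= b (assumed)
        eps, horizon, M = 0.1, 10, 10_000          # violation level, horizon, sample count

        rng = np.random.default_rng(0)
        e = np.zeros((M, 2))                       # error samples, zero at the first step
        tightened = []
        for k in range(horizon):
            margin = np.quantile(e @ c, 1.0 - eps) # (1-eps) quantile of the error term
            tightened.append(b - margin)
            w = rng.uniform(-0.1, 0.1, size=(M, 2))
            e = e @ Acl.T + w                      # propagate e_{k+1} = Acl e_k + w_k

        print("tightened bounds over the horizon:", np.round(tightened, 3))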

    Stability and Performance Verification of Optimization-based Controllers

    Get PDF
    This paper presents a method to verify closed-loop properties of optimization-based controllers for deterministic and stochastic constrained polynomial discrete-time dynamical systems. The closed-loop properties amenable to the proposed technique include global and local stability, performance with respect to a given cost function (both in a deterministic and stochastic setting) and the $\mathcal{L}_2$ gain. The method applies to a wide range of practical control problems: For instance, a dynamical controller (e.g., a PID) plus input saturation, model predictive control with state estimation, inexact model and soft constraints, or a general optimization-based controller where the underlying problem is solved with a fixed number of iterations of a first-order method are all amenable to the proposed approach. The approach is based on the observation that the control input generated by an optimization-based controller satisfies the associated Karush-Kuhn-Tucker (KKT) conditions which, provided all data is polynomial, are a system of polynomial equalities and inequalities. The closed-loop properties can then be analyzed using sum-of-squares (SOS) programming.
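
    The KKT argument can be made concrete for a generic linear-quadratic MPC problem written as a parametric QP (an assumed standard form, not necessarily the one used in the paper); the conditions are polynomial, in fact bilinear, in the state, input and multipliers, which is what makes SOS analysis applicable:

        \min_{u}\ \tfrac{1}{2}\,u^\top H u + x^\top F^\top u
        \quad\text{s.t.}\quad G u \le w + E x,

        H u^\star + F x + G^\top \lambda = 0,\qquad
        \lambda \ge 0,\qquad
        G u^\star - E x - w \le 0,\qquad
        \lambda_i\,\bigl(G u^\star - E x - w\bigr)_i = 0\ \ \forall i.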

    Adaptive PD Control using Deep Reinforcement Learning for Local-Remote Teleoperation with Stochastic Time Delays

    Full text link
    Local-remote systems allow robots to execute complex tasks in hazardous environments such as space and nuclear power stations. However, establishing accurate positional mapping between local and remote devices can be difficult due to time delays that can compromise system performance and stability. Enhancing the synchronicity and stability of local-remote systems is vital for enabling robots to interact with environments at greater distances and under highly challenging network conditions, including time delays. We introduce an adaptive control method employing reinforcement learning to tackle the time-delayed control problem. By adjusting controller parameters in real-time, this adaptive controller compensates for stochastic delays and improves synchronicity between local and remote robotic manipulators. To improve the adaptive PD controller's performance, we devise a model-based reinforcement learning approach that effectively incorporates multi-step delays into the learning framework. Utilizing this proposed technique, the local-remote system's performance is stabilized for stochastic communication time-delays of up to 290 ms. Our results demonstrate that the suggested model-based reinforcement learning method surpasses the Soft-Actor Critic and augmented state Soft-Actor Critic techniques. Access the code at: https://github.com/CAV-Research-Lab/Predictive-Model-Delay-Correction
    Comment: 7 pages + 1 references, 4 figures
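
    The control idea can be pictured with the short sketch below; it is not the repository's implementation, and the gain-scheduling policy, plant and all numbers are placeholder assumptions. A PD controller has its gains reset at every step by an adaptive policy while tracking a command that arrives over a channel with stochastic delays of up to 290 ms.

        # Sketch (placeholder policy and plant): PD control of a toy 1-DoF remote
        # joint with gains adapted online and a stochastically delayed command channel.
        import numpy as np

        dt, steps = 0.01, 1000
        rng = np.random.default_rng(0)

        def gain_policy(delay_estimate):
            """Placeholder for the learned adaptive policy (assumption)."""
            kp = 40.0 / (1.0 + 5.0 * delay_estimate)   # soften gains as the delay grows
            kd = 2.0 * np.sqrt(kp)
            return kp, kd

        pos, vel = 0.0, 0.0
        channel = []                                   # (arrival_step, command) pairs
        for k in range(steps):
            command = np.sin(0.5 * np.pi * k * dt)     # local operator command
            delay = int(rng.integers(0, 30))           # stochastic delay, up to 29 steps (290 ms)
            channel.append((k + delay, command))
            arrived = [c for due, c in channel if due <= k]
            ref = arrived[-1] if arrived else 0.0      # newest command seen at the remote side

            kp, kd = gain_policy(delay * dt)
            u = kp * (ref - pos) - kd * vel            # PD law with adapted gains
            vel += dt * (u - vel)                      # toy joint dynamics
            pos += dt * vel

        print(f"final remote position {pos:.3f} vs. final local command {command:.3f}")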