
    Stochastic MPC with Dynamic Feedback Gain Selection and Discounted Probabilistic Constraints

    This paper considers linear discrete-time systems with additive disturbances and designs a Model Predictive Control (MPC) law incorporating a dynamic feedback gain to minimise a quadratic cost function subject to a single chance constraint. The feedback gain is selected from a set of candidates generated by solutions of multiobjective optimisation problems solved by Dynamic Programming (DP). We provide two methods for gain selection based on minimising upper bounds on predicted costs. The chance constraint is defined as a discounted sum of violation probabilities over an infinite horizon. By penalising violation probabilities close to the initial time and ignoring violation probabilities in the far future, this form of constraint allows for an MPC law with guarantees of recursive feasibility without an assumption of boundedness of the disturbance. A computationally convenient MPC optimisation problem is formulated using Chebyshev's inequality, and we introduce an online constraint-tightening technique to ensure recursive feasibility. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition. With dynamic feedback gain selection, the conservativeness of Chebyshev's inequality is mitigated and the closed-loop cost is reduced, with a larger set of feasible initial conditions. A numerical example is given to illustrate these properties.
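    As a hedged illustration of one ingredient of this formulation, the Python sketch below propagates the prediction-error covariance of a linear system under a fixed feedback gain and uses Chebyshev's inequality to bound a discounted sum of constraint-violation probabilities. The system matrices, gain, disturbance covariance, discount factor, and constraint are invented placeholders rather than the paper's example, and the gain-selection and online tightening steps of the actual method are not reproduced here.

```python
# Minimal sketch: Chebyshev-based bound on a discounted sum of violation
# probabilities for x+ = (A + B*K) x + w under a half-space constraint
# c^T x <= d.  All numbers below are illustrative, not from the paper.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-0.5, -1.0]])          # candidate feedback gain (assumed given)
W = 0.01 * np.eye(2)                  # disturbance covariance
c = np.array([1.0, 0.0])              # constraint: c^T x <= d
d = 1.0
gamma = 0.9                           # discount factor in the chance constraint
horizon = 50

A_cl = A + B @ K
x_mean = np.array([0.5, 0.0])         # nominal (mean) state prediction
Sigma = np.zeros((2, 2))              # prediction-error covariance (zero now)

discounted_bound = 0.0
for k in range(horizon):
    # Chebyshev: P(c^T e >= t) <= (c^T Sigma c) / t^2 with t = d - c^T x_mean
    slack = d - c @ x_mean
    var = c @ Sigma @ c
    p_viol_bound = 1.0 if slack <= 0 else min(1.0, var / slack**2)
    discounted_bound += gamma**k * p_viol_bound
    # propagate the nominal state and the error covariance
    x_mean = A_cl @ x_mean
    Sigma = A_cl @ Sigma @ A_cl.T + W

print(f"Chebyshev bound on the discounted violation sum: {discounted_bound:.4f}")
```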

    Safe model-based design of experiments using Gaussian processes

    The construction of kinetic models has become an indispensable step in the development and scale-up of industrial processes. Model-based design of experiments (MBDoE) has been widely used to improve parameter precision in nonlinear dynamic systems. Such a framework needs to account for both parametric and structural uncertainty, since the physical or safety constraints imposed on the system may otherwise be violated, leading to unsafe experimental conditions when an optimally designed experiment is performed. In this work, Gaussian processes are utilized in a twofold manner: 1) to quantify the uncertainty realization of the physical system and calculate the plant-model mismatch, and 2) to compute the optimal experimental design while accounting for the parametric uncertainty. The proposed method, Gaussian process-based MBDoE (GP-MBDoE), guarantees the probabilistic satisfaction of the constraints in the context of model-based design of experiments. GP-MBDoE is assisted by adaptive trust regions to facilitate a satisfactory local approximation. The proposed method allows the design of optimal experiments starting from limited preliminary knowledge of the parameter set, leading to a safe exploration of the parameter space. The method's performance is demonstrated through illustrative case studies on the parameter identification of kinetic models in flow reactors.
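    A minimal sketch of the mismatch-quantification step, under assumptions not taken from the paper: a Gaussian process is fitted to the discrepancy between plant measurements and a nominal model, and candidate experimental designs are screened against a safety limit using a mean-plus-kappa-sigma backoff (a Gaussian surrogate for the probabilistic constraint). The nominal model, data, safety limit, and confidence multiplier below are hypothetical.

```python
# Sketch: quantify plant-model mismatch with a GP and back off a safety
# constraint probabilistically.  Model, data, and limits are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def nominal_model(u):
    """Hypothetical nominal model: predicted peak temperature vs. input u."""
    return 300.0 + 40.0 * u

# Past experiments: inputs and measured peak temperatures (made-up data)
u_data = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])
t_meas = np.array([305.0, 314.0, 322.5, 331.0, 340.5])

# Fit a GP to the mismatch between plant measurements and the nominal model
mismatch = t_meas - nominal_model(u_data).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(u_data, mismatch)

# Screen candidate designs: keep those predicted safe with a margin
T_MAX = 345.0      # safety limit (illustrative)
KAPPA = 2.0        # ~97.7% one-sided confidence under a Gaussian assumption
u_candidates = np.linspace(0.0, 1.0, 21).reshape(-1, 1)
mu, std = gp.predict(u_candidates, return_std=True)
t_pred_upper = nominal_model(u_candidates).ravel() + mu + KAPPA * std
safe = u_candidates[t_pred_upper <= T_MAX]
print("Designs predicted safe with margin:", safe.ravel())
```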

    Constrained Model-Free Reinforcement Learning for Process Optimization

    Reinforcement learning (RL) is a control approach that can handle nonlinear stochastic optimal control problems. However, despite its promise, RL has yet to see marked translation to industrial practice, primarily due to its inability to satisfy state constraints. In this work we aim to address this challenge. We propose an 'oracle'-assisted constrained Q-learning algorithm that guarantees the satisfaction of joint chance constraints with high probability, which is crucial for safety-critical tasks. To achieve this, constraint tightenings (backoffs) are introduced and adjusted using Broyden's method, making them self-tuned. This results in a general methodology that can be incorporated into approximate dynamic programming-based algorithms to ensure constraint satisfaction with high probability. Finally, we present case studies that analyze the performance of the proposed approach and compare this algorithm with model predictive control (MPC). The favorable performance of this algorithm signifies a step toward the incorporation of RL into real-world optimization and control of engineering systems, where constraints are essential in ensuring safety.
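    The backoff self-tuning idea can be sketched on its own, separately from the Q-learning component: treat the backoff as the unknown in the root-finding problem "estimated violation probability minus target equals zero" and update it with a secant step, which is what Broyden's method reduces to in one dimension. The toy closed-loop system, policy, and probability target in the sketch below are assumptions for illustration only.

```python
# Sketch: self-tuned constraint backoff via a secant update (Broyden's method
# reduces to the secant rule in one dimension).  The toy closed-loop system,
# policy, and numbers below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
TARGET = 0.05    # allowed probability of violating the constraint in an episode
X_MAX = 1.0      # original state constraint: x <= X_MAX

def violation_probability(backoff, n_rollouts=5000, horizon=20):
    """Monte Carlo estimate of P(x exceeds X_MAX at least once) when the
    policy steers the nominal state onto the tightened limit X_MAX - backoff."""
    count = 0
    for _ in range(n_rollouts):
        x, violated = 0.0, False
        for _ in range(horizon):
            u = ((X_MAX - backoff) - 0.8 * x) / 0.2   # aim at the tightened limit
            x = 0.8 * x + 0.2 * u + 0.05 * rng.standard_normal()
            violated = violated or (x > X_MAX)
        count += violated
    return count / n_rollouts

def residual(b):
    """Root-finding residual: estimated violation probability minus the target."""
    return violation_probability(b) - TARGET

# Secant iteration; the two starting backoffs should straddle the target.
b_prev, b_curr = 0.10, 0.15
r_prev = residual(b_prev)
for _ in range(6):
    r_curr = residual(b_curr)
    if abs(r_curr) < 0.005 or abs(r_curr - r_prev) < 1e-6:
        break           # close enough, or the (noisy) slope estimate is unusable
    b_next = b_curr - r_curr * (b_curr - b_prev) / (r_curr - r_prev)
    b_prev, r_prev = b_curr, r_curr
    b_curr = min(max(b_next, 0.0), 0.5)   # keep the backoff in a sane range

print(f"tuned backoff ~ {b_curr:.3f}, "
      f"violation probability ~ {violation_probability(b_curr):.3f}")
```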