2,828 research outputs found
Learning-based predictive control for linear systems: a unitary approach
A comprehensive approach addressing both identification and control design for
learning-based Model Predictive Control (MPC) of linear systems is presented.
The design technique yields a data-driven MPC law, based on a dataset collected
from the working plant. The method is indirect, i.e., it relies on a model
learning phase and a model-based control design phase, devised in an integrated
manner. In the model learning phase, a twofold outcome is achieved: first,
different optimal p-step-ahead prediction models are obtained, to be used in
the MPC cost function; second, a perturbed state-space model is derived, to
be used for robust constraint satisfaction. Resorting to Set Membership
techniques, a characterization of the bounded model uncertainties is obtained,
which is a key feature for a successful application of the robust control
algorithm. In the control design phase, a robust MPC law is proposed, able to
track piecewise-constant reference signals, with guaranteed recursive
feasibility and convergence properties. The controller embeds multistep
predictors in the cost function, ensures robust constraint satisfaction
thanks to the learnt uncertainty model, and can handle possibly infeasible
reference values. The proposed approach is finally tested in a numerical
example.
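The multistep-predictor idea above can be illustrated with a minimal sketch: independent p-step-ahead predictors fitted by least squares on data from a simple ARX plant. The plant, model orders, and regressor layout here are illustrative assumptions, not the paper's exact identification scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple SISO ARX plant: y[k] = a*y[k-1] + b*u[k-1] + noise
a, b = 0.8, 0.5
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a * y[k - 1] + b * u[k - 1] + 0.01 * rng.standard_normal()

def fit_p_step_predictor(y, u, p, n=2):
    """Least-squares fit of a direct p-step-ahead predictor:
    y[k+p] ~ theta @ [y[k], ..., y[k-n+1], u[k+p-1], ..., u[k-n+1]].
    Future inputs u[k+1..k+p-1] are MPC decision variables at run time."""
    rows, targets = [], []
    for k in range(n - 1, len(y) - p):
        past_y = y[k - n + 1 : k + 1][::-1]   # y[k], y[k-1], ...
        inputs = u[k - n + 1 : k + p][::-1]   # u[k+p-1], ..., u[k-n+1]
        rows.append(np.concatenate([past_y, inputs]))
        targets.append(y[k + p])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

# One optimal predictor per step of the MPC horizon
predictors = {p: fit_p_step_predictor(y, u, p) for p in range(1, 4)}
```

Fitting each horizon step directly (rather than iterating a one-step model) is what lets each predictor be optimal for its own prediction distance.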
Data-driven stochastic model predictive control
We propose a novel data-driven stochastic model predictive control (MPC)
algorithm to control linear time-invariant systems with additive stochastic
disturbances in the dynamics. The scheme centers around repeated predictions
and computations of optimal control inputs based on a non-parametric
representation of the space of all possible trajectories, using the fundamental
lemma from behavioral systems theory. This representation is based on a single
measured input-state-disturbance trajectory generated by persistently exciting
inputs and does not require any further identification step. Based on
stochastic MPC ideas, we enforce the satisfaction of state constraints with a
pre-specified probability level, allowing for a systematic trade-off between
control performance and constraint satisfaction. The proposed data-driven
stochastic MPC algorithm enables efficient control where robust methods are too
conservative, which we demonstrate in a simulation example.
Comment: This work has been submitted to the L4DC 2022 conference.
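The non-parametric trajectory representation via the fundamental lemma can be sketched as follows: a single persistently exciting input-state trajectory is arranged into Hankel matrices whose column span contains every length-L trajectory of the system. This is a disturbance-free scalar example; the system, horizon, and data length are illustrative assumptions.

```python
import numpy as np

def hankel(w, L):
    """Block-Hankel matrix of depth L built from a signal w of length T."""
    w = w.reshape(len(w), -1)
    T, m = w.shape
    H = np.zeros((L * m, T - L + 1))
    for i in range(T - L + 1):
        H[:, i] = w[i : i + L].ravel()
    return H

rng = np.random.default_rng(1)

# Scalar LTI system x+ = 0.9 x + u, one recorded trajectory
T, L = 60, 5
u = rng.uniform(-1, 1, T)
x = np.zeros(T)
for k in range(T - 1):
    x[k + 1] = 0.9 * x[k] + u[k]

# Persistency of excitation of order L + n (here n = 1): full row rank
assert np.linalg.matrix_rank(hankel(u, L + 1)) == L + 1

# Stacked Hankel matrices: columns span all length-L system trajectories
H = np.vstack([hankel(u, L), hankel(x, L)])

# Generate a fresh length-L trajectory of the same system ...
u_new = rng.uniform(-1, 1, L)
x_new = np.zeros(L)
x_new[0] = 0.3
for k in range(L - 1):
    x_new[k + 1] = 0.9 * x_new[k] + u_new[k]
w_new = np.concatenate([u_new, x_new])

# ... and verify it lies in the column span of H (fundamental lemma)
g, *_ = np.linalg.lstsq(H, w_new, rcond=None)
residual = np.linalg.norm(H @ g - w_new)
```

No parametric model is identified at any point: predictions are expressed directly as linear combinations `H @ g` of the recorded data.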
Towards Robust and High-performance Operations of Wave Energy Converters: an Adaptive Tube-based Model Predictive Control Approach
Model predictive control (MPC) is an effective control method for improving the energy conversion efficiency of wave energy converters (WECs). However, currently developed WEC MPC schemes have not reached commercial viability, since the control performance depends significantly on the fidelity of the WEC model. To overcome the plant-model mismatch issue in the WEC MPC control problem, this paper proposes a robust tube-based MPC (TMPC) method that bounds the plant states within disturbance-invariant sets centered around the noise-free model trajectory. The invariant sets are also used to tighten the nominal model's constraints, which robustly guarantees constraint satisfaction. Yet overly conservative invariant sets can narrow the feasible region of the states and control inputs, so a data-driven quantile recurrent neural network (QRNN) is proposed in this work to form a learning-based adaptive tube with reduced conservatism by quantifying WEC model uncertainties. The underlying rationale is that time-dependent historical data can offer valuable insight into the future behaviour of the uncertainties. Numerical simulations have validated that, by synthesizing the QRNN-based tube with MPC, the proposed method improves the energy capture rate compared to the standard TMPC approach.
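The quantile-regression idea behind the QRNN tube can be sketched with the pinball loss it minimizes; here a constant 95% upper bound on synthetic model-mismatch residuals is fitted by subgradient descent. The residual distribution and the constant (non-recurrent) model are illustrative assumptions, not the paper's network.

```python
import numpy as np

def pinball_loss(e, tau):
    """Quantile (pinball) loss for residuals e at quantile level tau."""
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(2)
# Synthetic one-step model-mismatch residuals (stand-in for WEC uncertainty)
residuals = rng.normal(0.0, 0.2, 2000)

# Fit a constant upper bound q at the 95% level by subgradient descent
tau, q, lr = 0.95, 0.0, 0.05
for _ in range(2000):
    # Subgradient of the pinball loss w.r.t. q
    grad = np.mean(np.where(residuals - q > 0, -tau, 1 - tau))
    q -= lr * grad
```

The fitted bound `q` converges to the empirical 95% quantile of the residuals; a QRNN conditions this quantile on past data, yielding a tube cross-section that adapts over time instead of a fixed worst-case set.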
Learning Robustness with Bounded Failure: An Iterative MPC Approach
We propose an approach to design a Model Predictive Controller (MPC) for
constrained Linear Time Invariant systems performing an iterative task. The
system is subject to an additive disturbance, and the goal is to learn to
satisfy state and input constraints robustly. Using disturbance measurements
after each iteration, we construct Confidence Support sets, which contain the
true support of the disturbance distribution with a given probability. As more
data is collected, the Confidence Supports converge to the true support of the
disturbance. This enables the design of an MPC controller that avoids
conservative estimates of the disturbance support, while simultaneously bounding the
probability of constraint violation. The efficacy of the proposed approach is
then demonstrated with a detailed numerical example.
Comment: Added GitHub link to all source code.
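A minimal sketch of the iterative support-estimation idea, assuming a heuristic inflation rule rather than the paper's exact Confidence Support construction: the sample range of the accumulated disturbance measurements is inflated by a margin that shrinks with the number of samples, so the estimate converges toward the true support while retaining a safety margin.

```python
import numpy as np

rng = np.random.default_rng(3)
true_support = (-0.5, 0.5)   # unknown to the controller

def confidence_support(samples, delta=0.05):
    """Illustrative confidence-support estimate: the sample range inflated
    by a margin shrinking in the sample count (heuristic stand-in)."""
    n = len(samples)
    lo, hi = samples.min(), samples.max()
    margin = (hi - lo) * np.log(1.0 / delta) / n
    return lo - margin, hi + margin

supports = []
samples = np.empty(0)
for iteration in range(10):
    # Disturbances measured during one iteration of the task
    samples = np.concatenate([samples, rng.uniform(*true_support, 50)])
    supports.append(confidence_support(samples))
```

As iterations accumulate, the estimated set tightens around the true support, which is what lets the MPC tightening become progressively less conservative while the violation probability stays bounded.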
Approximate solution of stochastic infinite horizon optimal control problems for constrained linear uncertain systems
We propose a Model Predictive Control (MPC) with a single-step prediction
horizon to solve infinite horizon optimal control problems with the expected
sum of convex stage costs for constrained linear uncertain systems. The
proposed method relies on two techniques. First, we estimate the expected
values of the convex costs using a computationally tractable approximation,
achieved by sampling across the space of disturbances. Second, we implement a
data-driven approach to approximate the optimal value function and its
corresponding domain, through systematic exploration of the system's state
space. These estimates are subsequently used as the terminal cost and terminal
set within the proposed MPC. We prove recursive feasibility, robust constraint
satisfaction, and convergence in probability to the target set. Furthermore, we
prove that the estimated value function converges to the optimal value function
in a local region. The effectiveness of the proposed MPC is illustrated with
detailed numerical simulations and comparisons with a value iteration method
and a Learning MPC that minimizes a certainty-equivalent cost.
Comment: Submitted to the IEEE Transactions on Automatic Control.
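The first technique, a sample-average approximation of the expected stage cost, can be sketched for a scalar system. The dynamics, cost weights, and grid search used here are illustrative assumptions; the paper's method handles general convex costs under constraints.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar system x+ = a*x + b*u + w, quadratic stage cost
a, b = 1.0, 1.0
Q, R = 1.0, 0.1
w_samples = rng.normal(0.0, 0.1, 1000)   # disturbance scenarios

def expected_cost(x, u):
    """Sample-average approximation of E[Q*(x+)^2] + R*u^2."""
    x_next = a * x + b * u + w_samples
    return Q * np.mean(x_next ** 2) + R * u ** 2

# Single-step MPC: minimize the approximate expected cost over an input grid
x0 = 2.0
u_grid = np.linspace(-5, 5, 2001)
costs = np.array([expected_cost(x0, u) for u in u_grid])
u_star = u_grid[np.argmin(costs)]
```

For this quadratic case the exact minimizer is u = -Q*x0/(Q+R), so the sample-average solution can be checked against the closed form; in the paper the same averaging idea makes general convex expected costs computationally tractable.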
Safety Filter Design for Neural Network Systems via Convex Optimization
With the increase in data availability, it has been widely demonstrated that
neural networks (NN) can capture complex system dynamics precisely in a
data-driven manner. However, the architectural complexity and nonlinearity of
the NNs make it challenging to synthesize a provably safe controller. In this
work, we propose a novel safety filter that relies on convex optimization to
ensure safety for a NN system, subject to additive disturbances that are
capable of capturing modeling errors. Our approach leverages tools from NN
verification to over-approximate NN dynamics with a set of linear bounds,
followed by an application of robust linear MPC to search for controllers that
can guarantee robust constraint satisfaction. We demonstrate the efficacy of
the proposed framework numerically on a nonlinear pendulum system.
Comment: This paper has been accepted to the 2023 62nd IEEE Conference on Decision and Control (CDC).
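The over-approximation step can be illustrated with interval bound propagation, a simple instance of the linear bounds computed by NN verification tools. The small random ReLU network standing in for the learned dynamics is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

# A small random ReLU network standing in for learned dynamics
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def nn(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def interval_bounds(lo, hi):
    """Propagate an input box through the network with interval arithmetic,
    yielding a guaranteed over-approximation of the reachable outputs."""
    def affine(W, b, lo, hi):
        c, r = (lo + hi) / 2, (hi - lo) / 2
        center = W @ c + b
        radius = np.abs(W) @ r          # worst-case spread of the box
        return center - radius, center + radius
    lo1, hi1 = affine(W1, b1, lo, hi)
    lo1, hi1 = np.maximum(lo1, 0.0), np.maximum(hi1, 0.0)  # ReLU is monotone
    return affine(W2, b2, lo1, hi1)

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
out_lo, out_hi = interval_bounds(lo, hi)

# Sampled network outputs must lie inside the computed bounds
for _ in range(200):
    x = rng.uniform(lo, hi)
    y = nn(x)
    assert np.all(out_lo <= y + 1e-9) and np.all(y <= out_hi + 1e-9)
```

Once the NN dynamics are sandwiched between such bounds, the safety filter can treat the gap as a bounded additive disturbance and hand the problem to robust linear MPC.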