An Improved Constraint-Tightening Approach for Stochastic MPC
We address the problem in Stochastic Model Predictive Control of achieving a
good trade-off between the competing goals of improving average performance and
reducing conservativeness, while still guaranteeing recursive feasibility and
low computational complexity. We propose a novel, less restrictive scheme based
on treating stability and recursive feasibility separately. Recursive
feasibility is guaranteed through an explicit first-step constraint: the
existence of a feasible input trajectory is guaranteed at each time instant,
but the input sequence computed at one time step is only required to remain
feasible at the next for most disturbances, not necessarily for all, which
suffices for stability.
overcome the computational complexity of probabilistic constraints, we propose
an offline constraint-tightening procedure, which can be efficiently solved via
a sampling approach to the desired accuracy. The online computational
complexity of the resulting Model Predictive Control (MPC) algorithm is similar
to that of a nominal MPC with terminal region. A numerical example, which
provides a comparison with classical, recursively feasible Stochastic MPC and
Robust MPC, shows the efficacy of the proposed approach.Comment: Paper has been submitted to ACC 201
Stochastic Model Predictive Control with Discounted Probabilistic Constraints
This paper considers linear discrete-time systems with additive disturbances,
and designs a Model Predictive Control (MPC) law to minimise a quadratic cost
function subject to a chance constraint. The chance constraint is defined as a
discounted sum of violation probabilities on an infinite horizon. By penalising
violation probabilities close to the initial time and ignoring violation
probabilities in the far future, this form of constraint enables the
feasibility of the online optimisation to be guaranteed without an assumption
of boundedness of the disturbance. A computationally convenient MPC
optimisation problem is formulated using Chebyshev's inequality, and we
introduce an online constraint-tightening technique to ensure recursive
feasibility based on knowledge of a suboptimal solution. The closed-loop system
is guaranteed to satisfy the chance constraint and a quadratic stability
condition.

Comment: 6 pages, Conference Proceeding
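The role of Chebyshev's inequality in bounding a discounted sum of violation probabilities can be sketched numerically. The scalar closed-loop dynamics, discount factor, disturbance variance, and constraint bound below are illustrative assumptions, not values from the paper, which embeds such a bound inside an MPC optimisation.

```python
import numpy as np

gamma = 0.9       # assumed discount factor
c_max = 0.5       # assumed bound on the discounted violation sum
a = 1.0           # constraint |x_k| <= a
sigma_w2 = 0.01   # assumed per-step disturbance variance
phi = 0.8         # assumed stable scalar closed-loop dynamics x+ = phi*x + w

# Predicted state variance over the horizon for a zero-mean initial state:
# var_{k+1} = phi^2 * var_k + sigma_w2.
N = 20
var = 0.0
discounted_sum = 0.0
for k in range(N):
    var = phi**2 * var + sigma_w2
    # Chebyshev: Pr(|x_k| >= a) <= var_k / a^2 for zero-mean x_k.
    p_bound = min(1.0, var / a**2)
    discounted_sum += gamma**k * p_bound

# Discounting makes the infinite sum finite even though per-step violation
# probabilities do not vanish, which is what removes the need for a bounded
# disturbance assumption.
print(discounted_sum, discounted_sum <= c_max)
```

Violations near the initial time carry weight close to one, while far-future terms are discounted away, matching the constraint's emphasis described above.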
Game-Theoretic and Set-Based Methods for Safe Autonomous Vehicles on Shared Roads
Autonomous vehicle (AV) technology promises safer, cleaner, and more efficient transportation, as well as improved mobility for the young, elderly, and disabled. One of the biggest challenges of AV technology is the development and high-confidence verification and validation (V&V) of decision and control systems for AVs to safely and effectively operate on roads shared with other road users (including human-driven vehicles). This dissertation investigates game-theoretic and set-based methods to address this challenge.

Firstly, this dissertation presents two game-theoretic approaches to modeling the interactions among drivers/vehicles on shared roads. The first approach is based on the "level-k reasoning" human behavioral model and focuses on the representation of heterogeneous driving styles of real-world drivers. The second approach is based on a novel leader-follower game formulation inspired by the "right-of-way" traffic rules and focuses on the modeling of driver intents and their resulting behaviors under such traffic rules and etiquette. Both approaches lead to interpretable and scalable driver/vehicle interaction models. This dissertation then introduces an application of these models to fast and economical virtual V&V of AV control systems.

Secondly, this dissertation presents a high-level control framework for AVs to safely and effectively interact with other road users. The framework is based on a constrained partially observable Markov decision process (POMDP) formulation of the AV control problem, which is then solved using a tailored model predictive control algorithm called POMDP-MPC. The major advantages of this control framework include its abilities to handle interaction uncertainties and provide an explicit probabilistic safety guarantee under such uncertainties.
Finally, this dissertation introduces the Action Governor (AG), a novel add-on scheme to a nominal control loop for formally enforcing pointwise-in-time state and control constraints. The AG operates based on set-theoretic techniques and online optimization. Theoretical properties and computational approaches of the AG for discrete-time linear systems subject to non-convex exclusion-zone avoidance constraints are established. The use of the AG for enhancing AV safety is illustrated through relevant simulation case studies.

PHD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/167992/1/nanli_1.pd
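The add-on safety-filter idea behind the Action Governor can be sketched in a much-simplified form: keep the nominal control when the successor state avoids an exclusion zone, and otherwise substitute the nearest admissible control. The dynamics, exclusion zone, and candidate grid below are illustrative assumptions; the dissertation's AG uses set-theoretic computations rather than the candidate enumeration shown here.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed linear dynamics
B = np.array([[0.0], [0.1]])

# Non-convex constraint: the state must stay outside this box.
zone_lo = np.array([0.4, -0.5])
zone_hi = np.array([0.6, 0.5])

def in_zone(x):
    return bool(np.all(x >= zone_lo) and np.all(x <= zone_hi))

def action_governor(x, u_nom, candidates):
    """Return the candidate closest to u_nom whose successor avoids the zone."""
    best, best_cost = None, np.inf
    for u in candidates:
        x_next = A @ x + B.flatten() * u
        if not in_zone(x_next):
            cost = (u - u_nom) ** 2
            if cost < best_cost:
                best, best_cost = u, cost
    return best  # None means no admissible candidate on this grid

candidates = np.linspace(-5.0, 5.0, 101)  # assumed control grid
x = np.array([0.40, 0.5])                 # state heading into the zone
u_nom = 0.0                               # nominal controller's choice
u_safe = action_governor(x, u_nom, candidates)
print(u_safe)
```

Here the nominal input would steer the state into the box, so the filter returns the smallest admissible adjustment; in the full scheme, admissibility would also require the successor state to remain recoverable at future steps.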