Formal Synthesis of Control Strategies for Positive Monotone Systems
We design controllers from formal specifications for positive discrete-time
monotone systems that are subject to bounded disturbances. Such systems are
widely used to model the dynamics of transportation and biological networks.
The specifications are described using signal temporal logic (STL), which can
express a broad range of temporal properties. We formulate the problem as a
mixed-integer linear program (MILP) and show that under the assumptions made in
this paper, which are not restrictive for traffic applications, the existence
of open-loop control policies is sufficient and almost necessary to ensure the
satisfaction of STL formulas. We establish a relation between satisfaction of
STL formulas in infinite time and set-invariance theory and provide an
efficient method to compute robust control invariant sets in high dimensions.
We also develop a robust model predictive framework to plan controls optimally
while ensuring the satisfaction of the specification. Illustrative examples and
a traffic management case study are included.

Comment: To appear in IEEE Transactions on Automatic Control (TAC) (2018), 16 pages, double column
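As background for the MILP encoding mentioned above, the quantitative (robustness) semantics of STL that such encodings linearize can be sketched in a few lines. This is a generic illustration, not code from the paper; the signal, thresholds, and function names are placeholders:

```python
# Quantitative STL semantics over a discrete-time signal: the robustness of
# "always" is a min over the time window, "eventually" is a max, and a
# predicate's robustness is its signed margin to the threshold. MILP encodings
# introduce binary variables to represent these min/max operations linearly.

def rho_predicate(signal, t, c):
    """Robustness of the predicate x[t] > c: the signed margin."""
    return signal[t] - c

def rho_always(signal, t0, t1, c):
    """Robustness of G_[t0,t1] (x > c): worst-case margin over the window."""
    return min(rho_predicate(signal, t, c) for t in range(t0, t1 + 1))

def rho_eventually(signal, t0, t1, c):
    """Robustness of F_[t0,t1] (x > c): best-case margin over the window."""
    return max(rho_predicate(signal, t, c) for t in range(t0, t1 + 1))

x = [1.0, 2.0, 0.5, 3.0]            # placeholder trajectory
print(rho_always(x, 0, 3, 0.0))     # 0.5: the formula holds with margin 0.5
print(rho_eventually(x, 0, 3, 2.0)) # 1.0: satisfied at t = 3 with margin 1.0
```

A positive robustness value certifies satisfaction of the formula; maximizing it over control inputs, as in the paper's MILP formulation, yields controls that satisfy the specification robustly.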
Formal Estimation of Collision Risks for Autonomous Vehicles: A Compositional Data-Driven Approach
In this work, we propose a compositional data-driven approach for the formal
estimation of collision risks for autonomous vehicles (AVs) while acting in a
stochastic multi-agent framework. The proposed approach is based on the
construction of sub-barrier certificates for each stochastic agent via a set of
data collected from its trajectories while providing an a-priori guaranteed
confidence on the data-driven estimation. In our proposed setting, we first
cast the original collision risk problem for each agent as a robust
optimization program (ROP). Solving the acquired ROP is not tractable due to an
unknown model that appears in one of its constraints. To tackle this
difficulty, we collect finite numbers of data from trajectories of each agent
and provide a scenario optimization program (SOP) corresponding to the original
ROP. We then establish a probabilistic bridge between the optimal value of SOP
and that of ROP, and accordingly, we formally construct the sub-barrier
certificate for each unknown agent based on the number of data and a required
level of confidence. We then propose a compositional technique based on
small-gain reasoning to quantify the collision risk for multi-agent AVs with
some desirable confidence based on sub-barrier certificates of individual
agents constructed from data. For the case that the proposed compositionality
conditions are not satisfied, we provide a relaxed version of compositional
results without requiring any compositionality conditions but at the cost of
providing a potentially conservative collision risk. Eventually, we also
present our approaches for non-stochastic multi-agent AVs. We demonstrate the
effectiveness of our proposed results by applying them to a vehicle platoon
consisting of 100 vehicles with 1 leader and 99 followers. We formally estimate
the collision risk by collecting data from trajectories of each agent.

Comment: This work has been accepted at IEEE Transactions on Control of Network Systems
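The ROP-to-SOP step described above can be illustrated with a deliberately simple sketch. This is generic scenario optimization, not the paper's sub-barrier construction; the data and names are placeholders:

```python
# Replacing a robust constraint by a scenario program: instead of bounding an
# unknown quantity over ALL disturbance realizations (intractable when the
# model is unknown), we bound it over N sampled trajectories and invoke
# scenario-optimization theory for a probabilistic guarantee.

import random

def scenario_level(samples):
    """Optimal value of the 1-D scenario program  min c  s.t.  d_i <= c
    for all collected samples d_i: simply the largest observed value."""
    return max(samples)

random.seed(0)
# "Risk" values observed along N = 1000 sampled trajectories (placeholder data).
data = [random.uniform(0.0, 1.0) for _ in range(1000)]
c_star = scenario_level(data)

# With a single scalar decision variable, the standard scenario bound gives
#   P[ new sample exceeds c_star ] <= eps   with confidence 1 - (1 - eps)^N,
# so the required confidence fixes how much data must be collected.
```

The paper's construction is substantially richer (sub-barrier certificates entering the constraints, compositional small-gain reasoning across agents), but the same data-versus-confidence trade-off drives the number of trajectories required per agent.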
A Behavioral Approach to Robust Machine Learning
Machine learning is revolutionizing almost all fields of science and technology and has been proposed as a pathway to solving many previously intractable problems, such as autonomous driving and other complex robotics tasks. While the field has demonstrated impressive results on certain problems, many of these results have not translated to applications in physical systems, partly due to the cost of system failure and partly due to the difficulty of ensuring reliable and robust model behavior. Deep neural networks, for instance, have simultaneously demonstrated both incredible performance in game playing and image processing, and remarkable fragility. This combination of high average performance and catastrophically bad worst-case performance presents a serious danger as deep neural networks are currently being used in safety-critical tasks such as assisted driving.
In this thesis, we propose a new approach to training models with built-in robustness guarantees. Our approach to ensuring the stability and robustness of the trained models is distinct from prior methods: where prior methods learn a model and then attempt to verify robustness/stability, we directly optimize over sets of models in which the necessary properties are known to hold.
Specifically, we apply methods from robust and nonlinear control to the analysis and synthesis of recurrent neural networks, equilibrium neural networks, and recurrent equilibrium neural networks. The techniques developed allow us to enforce properties such as incremental stability, incremental passivity, and incremental l2-gain bounds / Lipschitz bounds. A central consideration in the development of our model sets is the difficulty of fitting models. All of our model sets can be placed in the image of a convex set, or even of R^N, allowing useful properties to be easily imposed during the training procedure via simple interior-point methods, penalty methods, or unconstrained optimization.
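The direct-parameterization idea can be illustrated with a minimal sketch. The spectral-normalization scheme below is an assumed stand-in, not the thesis's actual parameterization: it maps an unconstrained weight matrix into the set of 1-Lipschitz linear maps, so the property holds by construction rather than being verified after training.

```python
# Building a Lipschitz bound into the model set itself: rescale each weight
# matrix by its spectral norm so the induced linear map is 1-Lipschitz by
# construction. Training can then proceed over the unconstrained parameter W.

import numpy as np

def spectral_norm(W, iters=50):
    """Estimate ||W||_2 by power iteration on W^T W."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)  # v is unit-norm after the loop

def lipschitz_bounded(W, gamma=1.0):
    """Map an arbitrary W into the set {M : ||M||_2 <= gamma}."""
    return W * (gamma / max(spectral_norm(W), gamma))

W = np.array([[3.0, 0.0], [0.0, 0.5]])   # ||W||_2 = 3, not 1-Lipschitz
W_safe = lipschitz_bounded(W)            # ||W_safe||_2 = 1 by construction
```

The thesis's parameterizations are richer (covering incremental stability and passivity of recurrent and equilibrium networks), but the principle is the same: the constraint set is the image of an unconstrained parameter space, so any optimizer step stays feasible.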
In the final chapter, we study the problem of learning networks of interacting models with guarantees that the resulting networked system is stable and/or monotone, i.e., that order relations between states are preserved. While our approach to learning in this chapter is similar to that of the previous chapters, the model set we propose has a separable structure that allows for scalable, distributed identification of large-scale systems via the alternating direction method of multipliers (ADMM).
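The role ADMM plays in such distributed identification can be sketched with a generic consensus formulation. This is an illustrative stand-in, not the thesis's separable model set; the data and problem sizes are placeholders:

```python
# Consensus ADMM for distributed least-squares identification: each subsystem
# fits shared parameters from its local data in parallel, and the consensus
# variable z couples the local estimates.

import numpy as np

def admm_consensus_ls(problems, rho=1.0, iters=100):
    """Solve  min_x  sum_i ||A_i x - b_i||^2  via consensus ADMM."""
    n = problems[0][0].shape[1]
    x = [np.zeros(n) for _ in problems]   # local parameter estimates
    u = [np.zeros(n) for _ in problems]   # scaled dual variables
    z = np.zeros(n)                       # consensus (global) variable
    for _ in range(iters):
        # Local updates: each solves a small regularized LS problem,
        # and these solves can run in parallel across subsystems.
        for i, (A, b) in enumerate(problems):
            x[i] = np.linalg.solve(2 * A.T @ A + rho * np.eye(n),
                                   2 * A.T @ b + rho * (z - u[i]))
        # Consensus and dual updates.
        z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
        for i in range(len(problems)):
            u[i] += x[i] - z
    return z

# Two agents observing the same underlying parameters (noise-free toy data).
rng = np.random.default_rng(1)
theta = np.array([1.0, -2.0])
probs = [(A := rng.normal(size=(20, 2)), A @ theta) for _ in range(2)]
print(admm_consensus_ls(probs))  # converges to approximately [1.0, -2.0]
```

In the thesis's setting the local objectives additionally carry the stability/monotonicity constraints, but the separable structure serves the same purpose: per-subsystem subproblems stay small even as the network grows.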