
    Safe Zero-Shot Model-Based Learning and Control: A Wasserstein Distributionally Robust Approach

    This paper explores distributionally robust zero-shot model-based learning and control using Wasserstein ambiguity sets. Conventional model-based reinforcement learning algorithms struggle to guarantee feasibility throughout the online learning process. We address this open challenge with the following approach. Using a stochastic model-predictive control (MPC) strategy, we augment safety constraints with affine random variables corresponding to the instantaneous empirical distributions of modeling error. We obtain these distributions by evaluating model residuals in real time throughout the online learning process. By optimizing over the worst-case modeling-error distribution defined within a Wasserstein ambiguity set centered about our empirical distributions, we can approach the nominal constraint boundary in a provably safe way. We validate the performance of our approach using a case study of lithium-ion battery fast charging, a relevant and safety-critical energy systems control application. Our results demonstrate marked improvements in safety compared to a basic learning model-predictive controller, with constraints satisfied at every instance during online learning and control.
    Comment: In review for CDC2
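
    A rough sketch of the constraint described above, in our own notation (not taken from the paper): with \widehat{P}_N the empirical distribution of the N most recent model residuals, g(x_k) <= 0 the nominal safety constraint, and w the additive error term, the distributionally robust requirement reads

        \mathcal{P}_\varepsilon = \bigl\{ Q : W_p\bigl(Q, \widehat{P}_N\bigr) \le \varepsilon \bigr\},
        \qquad
        \sup_{Q \in \mathcal{P}_\varepsilon} \Pr_{w \sim Q}\bigl[ g(x_k) + w > 0 \bigr] \le \delta,

    i.e., the constraint must hold for every distribution within Wasserstein radius \varepsilon of the empirical one, which is what lets the controller approach the nominal boundary while remaining provably safe.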

    A provably correct MPC approach to safety control of urban traffic networks

    Model predictive control (MPC) is a popular strategy for urban traffic management that is able to incorporate physical and user-defined constraints. However, current MPC methods rely on finite-horizon predictions that are unable to guarantee desirable behaviors over long periods of time. In this paper we design an MPC strategy that is guaranteed to keep the evolution of a network in a desirable yet arbitrary "safe" set, while optimizing a finite-horizon cost function. Our approach relies on finding a robust controlled invariant set inside the safe set that provides an appropriate terminal constraint for the MPC optimization problem. An illustrative example is included. This work was partially supported by the NSF under grants CPS-1446151 and CMMI-1400167.
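
    A schematic form of the problem described above, in our own notation and assuming additive disturbances d_k drawn from a set D and a safe set S:

        \min_{u_0,\dots,u_{N-1}} \sum_{k=0}^{N-1} \ell(x_k, u_k)
        \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k, d_k), \quad x_k \in S, \quad x_N \in C,

    where the terminal set C \subseteq S is robust controlled invariant: for every x \in C there exists an input u such that f(x, u, d) \in C for all d \in D. The terminal constraint is what extends the finite-horizon guarantee to all time, since it can always be re-satisfied at the next step.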

    SOTER: A Runtime Assurance Framework for Programming Safe Robotics Systems

    The recent drive towards achieving greater autonomy and intelligence in robotics has led to high levels of complexity. Autonomous robots increasingly depend on third-party off-the-shelf components and complex machine-learning techniques. This trend makes it challenging to provide strong design-time certification of correct operation. To address these challenges, we present SOTER, a robotics programming framework with two key components: (1) a programming language for implementing and testing high-level reactive robotics software and (2) an integrated runtime assurance (RTA) system that helps enable the use of uncertified components while still providing safety guarantees. SOTER provides language primitives to declaratively construct an RTA module consisting of an advanced, high-performance controller (uncertified), a safe, lower-performance controller (certified), and the desired safety specification. The framework provides a formal guarantee that a well-formed RTA module always satisfies the safety specification, without completely sacrificing performance, by using the higher-performance uncertified components whenever it is safe to do so. SOTER allows the complex robotics software stack to be constructed as a composition of RTA modules, where each uncertified component is protected using an RTA module. To demonstrate the efficacy of our framework, we consider a real-world case study of building a safe drone surveillance system. Our experiments, both in simulation and on actual drones, show that the SOTER-enabled RTA ensures the safety of the system, including when untrusted third-party components have bugs or deviate from the desired behavior.
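
    The switching logic behind such an RTA module can be sketched in a few lines. The sketch below is in Python rather than SOTER's own programming language, all names are illustrative, and the safety check is reduced to a one-step prediction (a real decision module typically certifies a longer recovery horizon):

        # Generic runtime-assurance (simplex-style) switching; not SOTER's actual API.
        from dataclasses import dataclass
        from typing import Any, Callable

        State = Any
        Action = Any

        @dataclass
        class RTAModule:
            advanced: Callable[[State], Action]        # high-performance, uncertified controller
            safe: Callable[[State], Action]            # lower-performance, certified controller
            predict: Callable[[State, Action], State]  # one-step plant model
            is_safe: Callable[[State], bool]           # desired safety specification

            def act(self, state: State) -> Action:
                proposed = self.advanced(state)
                # Use the uncertified action only if the predicted successor state
                # still satisfies the safety specification; otherwise fall back.
                if self.is_safe(self.predict(state, proposed)):
                    return proposed
                return self.safe(state)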

    Verifiable Reinforcement Learning via Policy Extraction

    While deep reinforcement learning has successfully solved many challenging control tasks, its real-world applicability has been limited by the inability to ensure the safety of learned policies. We propose an approach to verifiable reinforcement learning by training decision tree policies, which can represent complex policies (since they are nonparametric), yet can be efficiently verified using existing techniques (since they are highly structured). The challenge is that decision tree policies are difficult to train. We propose VIPER, an algorithm that combines ideas from model compression and imitation learning to learn decision tree policies guided by a DNN policy (called the oracle) and its Q-function, and show that it substantially outperforms two baselines. We use VIPER to (i) learn a provably robust decision tree policy for a variant of Atari Pong with a symbolic state space, (ii) learn a decision tree policy for a toy game based on Pong that provably never loses, and (iii) learn a provably stable decision tree policy for cart-pole. In each case, the decision tree policy achieves performance equal to that of the original DNN policy.
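
    The extraction loop can be sketched as a DAgger-style imitation procedure: roll out the current policy, label the visited states with the oracle's action, weight each state by how much the Q-values spread there (so states where a wrong action is costly dominate the training set), and refit a tree. The sketch below is written under our own assumptions, not taken from the paper; env, oracle_policy, and oracle_q are illustrative placeholders, and env.step is assumed to return (next_state, done):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def extract_tree_policy(env, oracle_policy, oracle_q, iters=10, rollouts=20):
            states, actions, weights = [], [], []
            policy = oracle_policy                 # first iteration rolls out the oracle itself
            for _ in range(iters):
                for _ in range(rollouts):
                    s, done = env.reset(), False
                    while not done:
                        q = oracle_q(s)            # oracle Q-values, one per discrete action
                        states.append(s)
                        actions.append(int(oracle_policy(s)))         # oracle's label
                        weights.append(float(np.max(q) - np.min(q)))  # loss-sensitive weight
                        s, done = env.step(policy(s))
                tree = DecisionTreeClassifier(max_depth=8)
                tree.fit(np.array(states), np.array(actions),
                         sample_weight=np.array(weights))
                policy = lambda s, t=tree: int(t.predict(np.array([s]))[0])
            return policy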