
    Guaranteeing Safety of Learned Perception Modules via Measurement-Robust Control Barrier Functions

    Modern nonlinear control theory seeks to develop feedback controllers that endow systems with properties such as safety and stability. The guarantees ensured by these controllers often rely on accurate estimates of the system state for determining control actions. In practice, measurement model uncertainty can lead to error in state estimates that degrades these guarantees. In this paper, we seek to unify techniques from control theory and machine learning to synthesize controllers that achieve safety in the presence of measurement model uncertainty. We define the notion of a Measurement-Robust Control Barrier Function (MR-CBF) as a tool for determining safe control inputs when facing measurement model uncertainty. Furthermore, MR-CBFs are used to inform sampling methodologies for learning-based perception systems and quantify tolerable error in the resulting learned models. We demonstrate the efficacy of MR-CBFs in achieving safety with measurement model uncertainty on a simulated Segway system.
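
    For intuition, a minimal sketch of the shape such a condition takes (the notation below is assumed for illustration, not quoted from the paper): the usual control barrier function inequality is enforced on the state estimate rather than the true state, tightened by a margin that absorbs the worst-case effect of measurement error.

```latex
% Hedged sketch of a measurement-robust CBF condition (illustrative form):
% the inequality is evaluated on the estimate \hat{x}, and the margin
% (a + b ||u||) absorbs Lipschitz constants of the CBF terms scaled by
% the assumed measurement-error bound.
\sup_{u \in \mathcal{U}}
  \Big[ L_f h(\hat{x}) + L_g h(\hat{x})\, u
        - \big( a + b \,\lVert u \rVert \big) \Big]
  \;\ge\; -\alpha\big( h(\hat{x}) \big)
```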

    Safe Planning And Control Of Autonomous Systems: Robust Predictive Algorithms

    Safe autonomous operation of dynamical systems has become one of the most important research problems. Algorithms for planning and control of such systems are now finding a place on production vehicles, and are fast becoming ubiquitous on roads and in airspaces. However, most algorithms for such operations that provide guarantees either do not scale well or rely on over-simplifying abstractions that make them impractical for real-world implementations. On the other hand, the algorithms that are computationally tractable and amenable to implementation generally lack any guarantees on their behavior. In this work, we aim to bridge the gap between provable and scalable planning and control for dynamical systems. The research covered herein can be broadly categorized into: i) multi-agent planning with temporal logic specifications, and ii) robust predictive control that takes into account the performance of the perception algorithms used to process information for control. In the first part, we focus on multi-robot systems with complicated mission requirements, and develop a planning algorithm that can take into account a) spatial, b) temporal, and c) reactive mission requirements across multiple robots. The algorithm not only guarantees continuous-time satisfaction of the mission requirements, but also that the generated trajectories can be followed by the robot. The other part develops a robust, predictive control algorithm to control the dynamical system to follow the trajectories generated by the first part, within some desired bounds. This relies on a contract-based framework wherein the control algorithm controls the dynamical system as well as a resource/quality trade-off in a perception-based state estimation algorithm. We show that this predictive algorithm remains feasible with respect to constraints while following a desired trajectory, and also stabilizes the dynamical system under control. Through simulations, as well as experiments on actual robotic systems, we show that the planning method is computationally efficient and scales better than other state-of-the-art algorithms that use similar formal specifications. We also show that the robust control algorithm provides better control performance, and is computationally more efficient than similar algorithms that do not leverage the resource/quality trade-off of the perception-based state estimator.
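
    As a loose illustration of the contract idea (all mode names, costs, error bounds, and the toy trajectory below are invented for the sketch, not taken from the thesis): the controller tightens its state constraints by the estimation-error bound that a chosen perception operating mode guarantees, and selects the cheapest mode that keeps the tightened problem feasible.

```python
# Hedged sketch of a perception/control contract: each perception mode
# promises a state-estimate error bound (its "quality") at some compute
# cost (its "resource"). The controller tightens its constraints by that
# bound and picks the cheapest mode that stays feasible.

# (cost in arbitrary units, guaranteed estimation-error bound eps)
PERCEPTION_MODES = {
    "coarse": {"cost": 1.0, "eps": 0.30},
    "medium": {"cost": 3.0, "eps": 0.10},
    "fine":   {"cost": 9.0, "eps": 0.02},
}

def tightened_feasible(traj, x_max, eps):
    """Check |x_t| <= x_max - eps at every point of a nominal trajectory.

    Tightening by eps guarantees the *true* state satisfies |x| <= x_max
    whenever the estimate tracks it to within eps (the contract).
    """
    margin = x_max - eps
    return margin > 0 and all(abs(x) <= margin for x in traj)

def cheapest_safe_mode(traj, x_max):
    """Pick the lowest-cost perception mode whose error bound keeps the
    tightened constraints feasible along the planned trajectory."""
    for name, mode in sorted(PERCEPTION_MODES.items(),
                             key=lambda kv: kv[1]["cost"]):
        if tightened_feasible(traj, x_max, mode["eps"]):
            return name
    raise ValueError("no perception mode makes the tightened problem feasible")

if __name__ == "__main__":
    planned = [0.0, 0.4, 0.75, 0.9, 0.75, 0.4, 0.0]  # toy planned positions
    print(cheapest_safe_mode(planned, x_max=1.0))     # -> "medium"
```

    The design choice this sketch highlights is that quality (eps) buys constraint margin: a coarser, cheaper estimator is acceptable exactly when the planned trajectory keeps enough distance from the constraint boundary.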

    Compositional Verification for Autonomous Systems with Deep Learning Components

    As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems. The problem is difficult, as these are large, complex systems which operate in uncertain environments, requiring data-driven machine-learning components. However, learning techniques such as Deep Neural Networks, widely used today, are inherently unpredictable and lack the theoretical foundations to provide strong assurance guarantees. We present a compositional approach for the scalable, formal verification of autonomous systems that contain Deep Neural Network components. The approach uses assume-guarantee reasoning whereby contracts, encoding the input-output behavior of individual components, allow the designer to model and incorporate the behavior of the learning-enabled components working side-by-side with the other components. We illustrate the approach on an example taken from the autonomous vehicles domain.
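
    A minimal sketch of the assume-guarantee pattern (the Contract class and both example contracts are invented here purely to illustrate the reasoning, not the paper's formalism): if the perception component guarantees a bounded estimation error under its input assumptions, and the controller assumes exactly that bound, composition discharges the controller's assumption and lifts its safety guarantee to the whole system.

```python
# Hedged sketch of assume-guarantee reasoning with component contracts.
# A contract pairs an assumption on a component's inputs with a guarantee
# on its outputs; composition checks that one component's guarantee
# discharges the next component's assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    name: str
    assumption: str   # predicate the environment/inputs must satisfy
    guarantee: str    # predicate the component promises on its outputs

def compose(upstream: Contract, downstream: Contract) -> Contract:
    """Chain two contracts: valid only if the upstream guarantee matches
    the downstream assumption (checked by literal equality here, a stand-in
    for a real implication check). The result assumes the first contract's
    assumption and guarantees the last contract's guarantee."""
    if upstream.guarantee != downstream.assumption:
        raise ValueError(
            f"{upstream.name} guarantees '{upstream.guarantee}' but "
            f"{downstream.name} assumes '{downstream.assumption}'")
    return Contract(
        name=f"{upstream.name};{downstream.name}",
        assumption=upstream.assumption,
        guarantee=downstream.guarantee,
    )

# Perception (a DNN): on in-distribution inputs, estimation error <= eps.
perception = Contract("perception",
                      assumption="input image in training distribution",
                      guarantee="state-estimate error <= eps")
# Controller: given that error bound, the vehicle stays in the safe set.
controller = Contract("controller",
                      assumption="state-estimate error <= eps",
                      guarantee="vehicle remains in safe set")

system = compose(perception, controller)
print(system)  # assumes in-distribution inputs, guarantees safety
```

    The point of the pattern is scalability: each component, including the learned one, is verified once against its local contract, and only the contract interfaces, not the components' internals, are checked at composition time.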