
    Neural Network Repair with Reachability Analysis

    Safety is a critical concern for the next generation of autonomy, which is likely to rely heavily on deep neural networks (DNNs) for perception and control. Formally verifying the safety and robustness of well-trained DNNs and learning-enabled cyber-physical systems (LE-CPS) under adversarial attacks, model uncertainties, and sensing errors is essential for safe autonomy. This research proposes a framework for repairing unsafe DNNs in safety-critical systems with reachability analysis. The repair process is inspired by adversarial training, which has proven highly effective at improving the safety and robustness of DNNs. Unlike traditional adversarial training, where adversarial examples are generated by random attacks and may not represent all unsafe behaviors, our repair process uses reachability analysis to compute the exact unsafe regions and to identify sufficiently representative examples, enhancing both the efficacy and the efficiency of the adversarial training. The performance of our repair framework is evaluated on two types of benchmarks for which no safe model is available as a reference. The first is a DNN controller for aircraft collision avoidance with access to its training data. The second is a rocket lander, where our framework integrates seamlessly with the well-known deep deterministic policy gradient (DDPG) reinforcement learning algorithm. The experimental results show that our framework successfully repairs all instances on multiple safety specifications with negligible performance degradation. In addition, to increase the computational and memory efficiency of the reachability analysis algorithm in the framework, we propose a depth-first-search algorithm that combines an existing exact analysis method with an over-approximation approach based on a new set representation. Experimental results show that our method achieves a five-fold improvement in runtime and a two-fold improvement in memory usage compared to exact analysis.
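
    To make the repair loop concrete, the sketch below pairs a coarse interval over-approximation with depth-first input-box splitting on a toy two-layer ReLU network: interval arithmetic stands in for the paper's exact reachability analysis, and the surviving sub-boxes play the role of the representative unsafe regions that feed adversarial retraining. The network, the function names, and the safety limit are all illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # toy 2-8-1 ReLU network
        W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

        def output_interval(lo, hi):
            # Coarse interval over-approximation of the outputs reachable
            # from the input box [lo, hi]; a stand-in for exact reachability.
            c, r = (lo + hi) / 2, (hi - lo) / 2
            c1, r1 = c @ W1.T + b1, r @ np.abs(W1).T
            lo1, hi1 = np.maximum(c1 - r1, 0), np.maximum(c1 + r1, 0)  # ReLU
            c2 = (lo1 + hi1) / 2 @ W2.T + b2
            r2 = (hi1 - lo1) / 2 @ np.abs(W2).T
            return c2 - r2, c2 + r2

        def unsafe_regions(lo, hi, limit, depth=6):
            # Depth-first splitting: keep only the input sub-boxes whose
            # outputs may exceed `limit`; an empty result certifies safety.
            out_lo, out_hi = output_interval(lo, hi)
            if out_hi.max() <= limit:
                return []
            if depth == 0:
                return [(lo, hi)]        # representative unsafe region
            d = int(np.argmax(hi - lo))  # split along the widest dimension
            mid = (lo[d] + hi[d]) / 2
            a_hi, b_lo = hi.copy(), lo.copy()
            a_hi[d], b_lo[d] = mid, mid
            return (unsafe_regions(lo, a_hi, limit, depth - 1)
                    + unsafe_regions(b_lo, hi, limit, depth - 1))

        boxes = unsafe_regions(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), limit=1.0)
        repair_examples = [rng.uniform(lo, hi) for lo, hi in boxes]
        print(f"{len(boxes)} unsafe regions, {len(repair_examples)} repair examples")
        # A repair iteration would now retrain on `repair_examples` with a
        # loss penalizing outputs above `limit`, then re-run the analysis
        # until no unsafe regions remain.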

    Work In Progress: Safety and Robustness Verification of Autoencoder-Based Regression Models using the NNV Tool

    This work-in-progress paper introduces robustness verification for autoencoder-based regression neural network (NN) models, following state-of-the-art approaches for robustness verification of image classification NNs. Despite ongoing progress in developing safety and robustness verification methods for various deep neural networks (DNNs), robustness checking of autoencoder models has not yet been considered. We explore this open research space and bridge the gap by extending existing DNN robustness analysis methods to such autoencoder networks. While classification models using autoencoders behave much like image classification NNs, the functionality of regression models is distinctly different. We introduce two robustness evaluation metrics for autoencoder-based regression models: the percentage robustness and the un-robustness grade. We also modify the existing ImageStar approach, adjusting its variables to handle the specific input types of regression networks. The approach is implemented as an extension of NNV, then applied and evaluated on a dataset, with a case study experiment on the same dataset. To the best of the authors' knowledge, this work-in-progress paper is the first to demonstrate reachability analysis of autoencoder-based NNs.

    Comment: In Proceedings SNR 2021, arXiv:2207.0439
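
    Since the abstract does not reproduce the exact definitions, the sketch below gives one plausible, sampling-based reading of the two metrics: percentage robustness as the share of perturbed inputs whose reconstruction stays within a tolerance of the clean output, and the un-robustness grade as the average excess deviation of the remaining samples. Note the paper computes its metrics from ImageStar reachable sets rather than by Monte Carlo sampling; the definitions and names here are this sketch's assumptions.

        import numpy as np

        def robustness_metrics(model, x, eps, tol, n_samples=1000, seed=0):
            # Percentage robustness: share of perturbed inputs whose output
            # stays within `tol` of the clean one. Un-robustness grade:
            # average excess deviation of the non-robust samples
            # (an assumed definition, not the paper's).
            rng = np.random.default_rng(seed)
            y_ref = model(x)                                  # clean output
            deltas = rng.uniform(-eps, eps, size=(n_samples,) + x.shape)
            errs = np.array([np.abs(model(x + d) - y_ref).max() for d in deltas])
            robust = errs <= tol
            pct_robust = 100.0 * robust.mean()
            grade = float((errs[~robust] - tol).mean()) if (~robust).any() else 0.0
            return pct_robust, grade

        # Toy usage: a linear "autoencoder" that amplifies perturbations 1.5x.
        model = lambda z: 1.5 * z
        pct, grade = robustness_metrics(model, np.zeros(4), eps=0.1, tol=0.12)
        print(f"percentage robustness: {pct:.1f}%, un-robustness grade: {grade:.4f}")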

    Reachability Analysis and Safety Verification of Neural Feedback Systems via Hybrid Zonotopes

    Hybrid zonotopes generalize constrained zonotopes by introducing additional binary variables, and they possess unique properties that make them convenient for representing nonconvex sets. This paper presents novel hybrid zonotope-based methods for the reachability analysis and safety verification of neural feedback systems. Algorithms are proposed to compute the input-output relationship of each layer of a feedforward neural network, as well as the exact reachable sets of neural feedback systems. In addition, a necessary and sufficient condition, formulated as a mixed-integer linear program, certifies whether the trajectories of a neural feedback system can avoid unsafe regions. The proposed approach is shown to yield a formulation that provides the tightest convex relaxation of the reachable sets of the neural feedback system. Complexity reduction techniques for the reachable sets are developed to balance computational efficiency against approximation accuracy. Two numerical examples demonstrate the superior performance of the proposed approach compared to existing methods.

    Comment: 8 pages, 4 figures
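
    The necessary and sufficient MILP condition can be illustrated with a small feasibility check: a hybrid zonotope intersects an unsafe half-space h @ x >= r exactly when the program below is feasible. The matrices (Gc, Gb, c, Ac, Ab, b) follow the standard hybrid zonotope definition from the literature; the encoding and the SciPy solver call are this sketch's assumptions, not the paper's implementation.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        def hits_unsafe_halfspace(Gc, Gb, c, Ac, Ab, b, h, r):
            # Hybrid zonotope: {Gc@xc + Gb@xb + c : xc in [-1,1]^ng,
            # xb in {-1,1}^nb, Ac@xc + Ab@xb = b}. Binary factors are
            # rewritten as xb = 2u - 1 with u in {0,1} so an off-the-shelf
            # MILP solver applies.
            ng, nb = Gc.shape[1], Gb.shape[1]
            ones = np.ones(nb)
            # Half-space membership h@x >= r, expressed in (xc, u):
            row = np.hstack([h @ Gc, 2.0 * (h @ Gb)])[None, :]
            rhs = r - h @ c + h @ Gb @ ones
            cons = [LinearConstraint(row, rhs, np.inf)]
            if b.size:  # generator constraints: Ac@xc + 2*Ab@u = b + Ab@1
                cons.append(LinearConstraint(np.hstack([Ac, 2.0 * Ab]),
                                             b + Ab @ ones, b + Ab @ ones))
            res = milp(c=np.zeros(ng + nb),  # pure feasibility problem
                       constraints=cons,
                       integrality=np.concatenate([np.zeros(ng), np.ones(nb)]),
                       bounds=Bounds(np.concatenate([-np.ones(ng), np.zeros(nb)]),
                                     np.ones(ng + nb)))
            return res.success  # feasible <=> the set reaches the unsafe region

        # Toy set: two unit squares centered at (-2, 0) and (2, 0); the
        # unsafe half-space x1 >= 2.5 clips the right square, so this
        # prints True.
        Gc, Gb, c = np.eye(2), np.array([[2.0], [0.0]]), np.zeros(2)
        Ac, Ab, b = np.zeros((0, 2)), np.zeros((0, 1)), np.zeros(0)
        print(hits_unsafe_halfspace(Gc, Gb, c, Ac, Ab, b, np.array([1.0, 0.0]), 2.5))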