
    DEPENDABLE NEURAL NETWORKS FOR SAFETY CRITICAL TASKS

    Neural Networks (NNs) have demonstrated impressive performance improvements over the last decade in safety-critical tasks such as perception for autonomous vehicles and medical image analysis. However, NNs performing safety-critical tasks pose a risk of harm, because NN performance often degrades when the operating domain changes. Previous work has proposed new training paradigms to improve NN generalization to new operating domains, but it does not predict what NN performance will be in the new operating domain. In addition, performance metrics in Machine Learning (ML) focus on the average probability of success and do not differentiate failures that cause harm from those that do not. In this thesis, we leverage structure in NN behavior, based on the environment context and the NN embedding, to predict NN performance for safety-critical tasks in unconstrained environments. We denote factors relating to the environment context as context features. First, we define performance metrics that capture both the probability of task success and the probability of causing harm. We then formalize the task of predicting NN performance in a novel operating domain as Network Generalization Prediction (NGP), and we derive an NGP algorithm from a finite test set using known context features. Second, we extend our NGP algorithm to identify which context features impact NN performance from a set of observed context features, when it is not known a priori which features are important. Third, we map structure in the NN embedding space that is informative about NN performance and derive an NGP algorithm based on how unlabeled images from the novel operating domain map into the embedding space. Fourth, we investigate safety functions for NNs. Safety functions are standard practice in functional safety: an external function is added to a process, e.g., a chemical reaction, to improve overall safety. We introduce the concept of safety functions for NNs and show that external logic around NNs can improve safety for a robot control task and image classification tasks. We demonstrate these methods on pertinent real-world tasks using state-of-the-art NNs, e.g., DenseNet for melanoma classification and Faster R-CNN for pedestrian detection.
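    The first two contributions lend themselves to a short illustration. Below is a minimal Python sketch of harm-aware performance metrics and an NGP-style estimate from a finite test set with known context features: conditional success and harm rates are estimated per context on the test set, then reweighted by the novel domain's context-feature distribution. The data layout, function names, and weighting scheme are illustrative assumptions, not the thesis's actual algorithm.

```python
# Minimal sketch of Network Generalization Prediction (NGP) from a finite
# test set with known context features. All names and the data layout are
# illustrative assumptions, not the thesis's actual implementation.
from collections import defaultdict

def conditional_rates(test_set):
    """Estimate P(success | context) and P(harm | context) per context value.

    test_set: iterable of (context, success, harm), where context is a
    hashable context-feature value (e.g., ("night", "rain")), and success
    and harm are booleans observed for one NN evaluation.
    """
    counts = defaultdict(lambda: [0, 0, 0])  # context -> [n, n_success, n_harm]
    for context, success, harm in test_set:
        c = counts[context]
        c[0] += 1
        c[1] += int(success)
        c[2] += int(harm)
    return {ctx: (s / n, h / n) for ctx, (n, s, h) in counts.items()}

def predict_performance(rates, novel_context_dist):
    """Predict (P(success), P(harm)) in a novel operating domain.

    novel_context_dist: dict mapping context values to their estimated
    frequency in the novel domain (frequencies should sum to 1).
    Contexts unseen in the test set are skipped here; a real system
    would need an explicit policy for them.
    """
    p_success = p_harm = covered = 0.0
    for ctx, freq in novel_context_dist.items():
        if ctx in rates:
            s, h = rates[ctx]
            p_success += freq * s
            p_harm += freq * h
            covered += freq
    if covered == 0:
        raise ValueError("no overlap between test-set and novel-domain contexts")
    return p_success / covered, p_harm / covered

# Example: pedestrian detection evaluated under (time-of-day, weather) contexts.
test_set = [
    (("day", "clear"), True, False),
    (("day", "clear"), True, False),
    (("night", "rain"), False, True),
    (("night", "rain"), True, False),
]
rates = conditional_rates(test_set)
# Novel domain is mostly night-time rain.
print(predict_performance(rates, {("night", "rain"): 0.8, ("day", "clear"): 0.2}))
```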

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance Overview and Issues; Resilience and Safety Requirements; Open Systems Perspective; and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing; Defence in Depth and Diversity; Security-Informed Safety Analysis; and Standards and Guidelines.

    Validation of a software dependability tool via fault injection experiments

    Presents the validation of the strategies employed in the RECCO tool to analyze C/C++ software; the RECCO compiler scans C/C++ source code to extract information about the significance of the variables that populate the program and about the structure of the code itself. Experimental results gathered on an open-source router are used to compare two sets of critical variables: one obtained through fault injection experiments and the other by applying the RECCO tool. The two sets are then analyzed, compared, and correlated to demonstrate the effectiveness of RECCO's methodology.
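    As a rough illustration of the fault-injection side of such a comparison, the sketch below corrupts single bits of individual variables in a toy routine and scores each variable by how often the output diverges from the fault-free run. The routine, variable names, and scoring are assumptions for illustration; they do not reproduce RECCO's analysis or the router experiments.

```python
# Minimal sketch of a variable-level fault injection campaign, as used to
# rank program variables by criticality. The target routine, variable
# names, and scoring are illustrative assumptions, not RECCO's method.

def checksum(data, seed):
    """Toy target routine with two internal variables we can corrupt."""
    acc = seed          # candidate critical variable "acc"
    step = 1            # candidate critical variable "step"
    for byte in data:
        acc = (acc + byte * step) & 0xFFFF
        step = (step + 1) & 0xFF
    return acc

def checksum_faulty(data, seed, victim, flip_bit, flip_at):
    """Re-run the routine, flipping one bit of one variable at one iteration."""
    acc, step = seed, 1
    for i, byte in enumerate(data):
        if i == flip_at:
            if victim == "acc":
                acc ^= 1 << flip_bit
            else:
                step ^= 1 << flip_bit
        acc = (acc + byte * step) & 0xFFFF
        step = (step + 1) & 0xFF
    return acc

def criticality(data, seed, victim, bits, n_iters):
    """Fraction of single-bit faults in `victim` that corrupt the output."""
    golden = checksum(data, seed)
    failures = total = 0
    for bit in range(bits):
        for at in range(n_iters):
            total += 1
            failures += checksum_faulty(data, seed, victim, bit, at) != golden
    return failures / total

# Rank each variable by the fraction of injected faults that corrupt the output.
data = list(range(32))
for victim, bits in (("acc", 16), ("step", 8)):
    print(victim, criticality(data, 0, victim, bits, len(data)))
```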