
    Towards Structured Evaluation of Deep Neural Network Supervisors

    Deep Neural Networks (DNNs) have improved the quality of several non-safety-related products in the past years. However, before DNNs are deployed to safety-critical applications, their robustness needs to be systematically analyzed. A common challenge for DNNs occurs when input is dissimilar to the training set, which might lead to high-confidence predictions despite poor knowledge of the input. Several previous studies have proposed to complement DNNs with a supervisor that detects when inputs are outside the scope of the network. Most of these supervisors, however, are developed and tested for a selected scenario using a specific performance metric. In this work, we emphasize the need to assess and compare the performance of supervisors in a structured way. We present a framework constituted by four datasets organized in six test cases combined with seven evaluation metrics. The test cases provide varying complexity and include data from publicly available sources as well as a novel dataset consisting of images from simulated driving scenarios, which we plan to make publicly available. Our framework can be used to support DNN supervisor evaluation, which in turn could be used to motivate development, validation, and deployment of DNNs in safety-critical applications. Comment: Preprint of paper accepted for presentation at The First IEEE International Conference on Artificial Intelligence Testing, April 4-9, 2019, San Francisco East Bay, California, US.

    Understanding the Impact of Edge Cases from Occluded Pedestrians for ML Systems

    Machine learning (ML)-enabled approaches are considered a substantial supporting technique for detecting and classifying obstacles and traffic participants in self-driving vehicles. Major breakthroughs have been demonstrated in the past few years, even covering the complete end-to-end data processing chain from sensory inputs through perception and planning to vehicle control of acceleration, braking, and steering. YOLO (you-only-look-once) is a state-of-the-art perception neural network (NN) architecture providing object detection and classification through bounding-box estimations on camera images. As the NN is trained on well-annotated images, in this paper we study the variations in confidence levels from the NN when tested on hand-crafted occlusions added to a test set. We compare regular pedestrian detection to upper- and lower-body detection. Our findings show that the two NNs using only partial information perform similarly well to the full-body NN when the full-body NN's performance is 0.75 or better. Furthermore, and as expected, the network trained only on the lower half of the body is least prone to disturbances from occlusions of the upper half, and vice versa.

    Towards Safety Analysis of Interactions Between Human Users and Automated Driving Systems

    One of the major challenges of designing automated driving systems (ADS) is showing that they are safe. This includes safety analysis of interactions between humans and the ADS, a multidisciplinary task involving functional safety and human factors expertise. In this paper, we lay the foundation for a safety analysis method for these interactions, which builds upon combining human factors knowledge with known techniques from the functional safety domain. The aim of the proposed method is finding safety issues in proposed HMI protocols. It combines constructing interaction sequences between human and ADS as a variant of sequence diagrams, and using these sequences as input to a cause-consequence analysis with the purpose of finding potential interaction faults that may lead to dangerous failures. Based on this analysis, the HMI design can be improved to reduce safety risks, and the analysis results can also be used as part of the ADS safety case.

    Evaluation of Out-of-Distribution Detection Performance on Autonomous Driving Datasets

    Safety measures need to be systematically investigated to determine to what extent they ensure the intended performance of Deep Neural Networks (DNNs) for critical applications. Due to a lack of verification methods for high-dimensional DNNs, a trade-off is needed between accepted performance and handling of out-of-distribution (OOD) samples. This work evaluates rejecting outputs from semantic segmentation DNNs by applying a Mahalanobis distance (MD), based on the most probable class-conditional Gaussian distribution for the predicted class, as an OOD score. The evaluation covers three DNNs trained on the Cityscapes dataset and tested on four automotive datasets, and finds that classification risk can be drastically reduced at the cost of pixel coverage, even when applied to unseen datasets. The applicability of our findings will support legitimizing safety measures and motivate their usage when arguing for safe use of DNNs in automotive perception.
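    The Mahalanobis-distance OOD scoring described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes per-sample feature vectors with class-conditional Gaussians sharing a tied covariance (as is common in MD-based OOD detection), and all function names and the rejection threshold are illustrative.

    ```python
    import numpy as np

    def fit_class_gaussians(features, labels, num_classes):
        # Estimate a per-class mean and a shared (tied) covariance
        # from in-distribution training features.
        means = np.stack([features[labels == c].mean(axis=0)
                          for c in range(num_classes)])
        centered = features - means[labels]
        cov = centered.T @ centered / len(features)
        # Small ridge term for numerical stability before inversion.
        precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return means, precision

    def ood_score(feature, predicted_class, means, precision):
        # Squared Mahalanobis distance to the Gaussian of the
        # predicted class; larger distance suggests an OOD input.
        d = feature - means[predicted_class]
        return float(d @ precision @ d)

    def reject(feature, predicted_class, means, precision, threshold):
        # Reject the prediction (e.g. a pixel's class) when the score
        # exceeds a threshold chosen for an accepted coverage level.
        return ood_score(feature, predicted_class, means, precision) > threshold
    ```

    In the paper's setting the score would be computed per pixel of a semantic segmentation network, so the threshold directly trades pixel coverage against classification risk.
    
    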

    Minimal Risk Condition for Safety Assurance of Automated Driving Systems

    We have yet to see wide deployment of automated driving systems (ADSs) on public roads. One of the reasons is the challenge of ensuring the systems' safety. The operational design domain (ODD) can be used to confine the scope of the ADS and subsequently also its safety case. For this to be valid, the ADS needs to have strategies to remain in the ODD throughout its operations. In this paper we discuss the role of the minimal risk condition (MRC) as a means to ensure this. Further, we elaborate on the need for hierarchies of MRCs to cope with diverse system degradations during operations.

    Managing Continuous Assurance of Complex Dependable Systems: Report from a workshop held at the Scandinavian Conference on System and Software Safety (SCSSS) 2022.

    The SALIENCE4CAV project has done work on enabling continuous assurance, which aims to ensure that safety is maintained throughout the entire lifecycle of a product, system, or service. One key technique is the use of safety contracts and modular assurance cases for systematically managing safety responsibilities and requirements across different stakeholders. This report summarizes outcomes from a workshop where discussions were held around this work. The participants were predominantly working in domains with high dependability requirements, such as automotive. Knowledge, tools, and organizational issues are seen as key obstacles, but interest is high, and the community realizes the need for enabling continuous assurance.