
    Identification of test cases for Automated Driving Systems using Bayesian optimization

    With advancements in technology, the automotive industry is experiencing a paradigm shift from assisted driving to highly automated driving. However, autonomous driving systems are highly safety-critical and need to be thoroughly tested under a diverse set of conditions before being commercially deployed. Due to the complexity of Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS), traditional software testing methods have well-known limitations. They also fail to cover the effectively infinite number of adverse conditions that can arise from slight changes in the interactions between the environment and the system. Hence, it is important to identify test conditions that push the vehicle under test to breach its safe boundaries. Hazard-Based Testing (HBT) methods, inspired by Systems-Theoretic Process Analysis (STPA), identify such parameterized test conditions that can lead to system failure. However, these techniques fall short of discovering the exact parameter values that lead to the failure condition. This paper proposes a test case identification technique using Bayesian Optimization. For a given test scenario, the proposed method learns parameter values by observing the system's output; the identified values form test cases that drive the system to violate its safe boundaries. STPA-inspired outputs (parameters and pass/fail criteria) are used as inputs to the Bayesian Optimization model. The proposed method was applied to an SAE Level-4 Low-Speed Automated Driving (LSAD) system modelled in a driving simulator.
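    The loop described in this abstract can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the `run_simulation` stand-in, the two scenario parameters, and their bounds are invented for the example (the paper derives parameters and pass/fail criteria from STPA), and a standard expected-improvement acquisition is used.

```python
# Minimal sketch (not the paper's code): Bayesian optimization over assumed
# scenario parameters, searching for values that minimize a safety margin
# reported by a (hypothetical) simulator run.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
bounds = np.array([[0.0, 15.0],    # assumed parameter, e.g. pedestrian speed (m/s)
                   [5.0, 50.0]])   # assumed parameter, e.g. distance to conflict point (m)

def run_simulation(x):
    """Hypothetical stand-in for a driving-simulator run.
    Returns a safety margin; <= 0 means the safe boundary was violated."""
    speed, dist = x
    return dist - 2.5 * speed + rng.normal(scale=0.5)

def expected_improvement(X_cand, gp, y_best):
    # Acquisition for minimizing the safety margin.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = y_best - mu
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Initial random design, then iteratively query the most promising candidate.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([run_simulation(x) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 2))
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, run_simulation(x_next))

failing = X[y <= 0.0]   # parameter combinations that breach the safe boundary
print(f"{len(failing)} failing test cases identified")
```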

    Probabilistic Metamodels for an Efficient Characterization of Complex Driving Scenarios

    To validate the safety of automated vehicles (AV), scenario-based testing aims to systematically describe the driving scenarios an AV might encounter. In this process, continuous inputs such as velocities result in an infinite number of possible variations of a scenario. Thus, metamodels are used to perform analyses or to select specific variations for examination. However, despite the safety criticality of AV testing, metamodels are usually treated as just one part of an overall approach, and their predictions are not questioned. This paper analyzes the predictive performance of Gaussian processes (GP), deep Gaussian processes, extra-trees, and Bayesian neural networks (BNN) on four scenarios with 5 to 20 inputs. Building on this, an iterative approach is introduced and evaluated that allows test cases for common analysis tasks to be selected efficiently. The results show that, regarding predictive performance, the appropriate selection of test cases is more important than the choice of metamodel. However, the choice of metamodel remains crucial: their great flexibility allows BNNs to benefit from large amounts of data and to model even the most complex scenarios. In contrast, less flexible models such as GPs offer higher reliability. Hence, relevant test cases are best explored using scalable virtual test setups and flexible models; more realistic test setups and more reliable models can then be used for targeted testing and validation. (Comment: 10 pages, 14 figures, 1 table; associated dataset at https://github.com/wnklmx/DSIO)
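    As a rough illustration of how such a metamodel can drive iterative test case selection, the sketch below fits a Gaussian-process surrogate to an assumed `criticality` function over five scenario inputs and repeatedly simulates where the model is least certain. It is not the paper's code, and the uncertainty-based selection rule is only one plausible criterion.

```python
# Illustrative sketch only: GP metamodel over continuous scenario inputs with
# uncertainty-driven selection of the next scenario variation to simulate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
n_inputs = 5                          # the paper's scenarios use 5 to 20 inputs

def criticality(x):
    """Hypothetical scenario metric (e.g. minimum time-to-collision)."""
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[2:].sum()

# Initial design of simulated scenario variations.
X = rng.uniform(0.0, 1.0, size=(20, n_inputs))
y = np.apply_along_axis(criticality, 1, X)

for _ in range(30):
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
    cand = rng.uniform(0.0, 1.0, size=(1000, n_inputs))
    _, std = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(std)]     # simulate where the metamodel is least certain
    X = np.vstack([X, x_next])
    y = np.append(y, criticality(x_next))

print("mean predictive std after selection:", gp.predict(cand, return_std=True)[1].mean())
```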

    SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization

    Computer vision is experiencing an AI renaissance, in which machine learning models are expediting important breakthroughs in academic research and commercial applications. Effectively training these models, however, is not trivial, due in part to hyperparameters: user-configured values that control a model's ability to learn from data. Existing hyperparameter optimization methods are highly parallel but make no effort to balance the search across heterogeneous hardware or to prioritize searching high-impact spaces. In this paper, we introduce a framework for massively Scalable Hardware-Aware Distributed Hyperparameter Optimization (SHADHO). Our framework calculates the relative complexity of each search space and monitors performance on the learning task over all trials. These metrics are then used as heuristics to assign hyperparameters to distributed workers based on their hardware. We first demonstrate that our framework achieves double the throughput of a standard distributed hyperparameter optimization framework by optimizing an SVM for MNIST using 150 distributed workers. We then conduct model search with SHADHO over the course of one week using 74 GPUs across two compute clusters to optimize U-Net for a cell segmentation task, discovering 515 models that achieve a lower validation loss than standard U-Net. (Comment: 10 pages, 6 figures)
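    The hardware-aware assignment idea can be illustrated with a toy sketch. The complexity heuristic, search spaces, and worker throughputs below are assumptions made for illustration; they do not reproduce SHADHO's actual heuristics or API.

```python
# Toy sketch (not the SHADHO implementation): rank search spaces by a simple
# complexity heuristic and pair the most complex spaces with the most capable workers.
from dataclasses import dataclass

@dataclass
class SearchSpace:
    name: str
    n_hyperparams: int        # number of tunable values
    continuous_dims: int      # continuous dimensions weigh more than discrete ones

@dataclass
class Worker:
    name: str
    throughput: float         # assumed relative trials/hour on this hardware

def complexity(space: SearchSpace) -> float:
    # Assumed heuristic: continuous dimensions dominate the size of the space.
    return space.n_hyperparams + 2.0 * space.continuous_dims

spaces = [SearchSpace("svm_rbf", 2, 2),
          SearchSpace("unet_large", 8, 5),
          SearchSpace("unet_small", 6, 3)]
workers = [Worker("gpu_cluster_a", 10.0),
           Worker("gpu_cluster_b", 6.0),
           Worker("cpu_pool", 1.0)]

# Hardest spaces go to the fastest hardware.
assignment = zip(sorted(spaces, key=complexity, reverse=True),
                 sorted(workers, key=lambda w: w.throughput, reverse=True))
for space, worker in assignment:
    print(f"{space.name:12s} -> {worker.name}")
```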