14 research outputs found

    A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving

    3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms, which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, with the falsifying examples then used to make the network more robust through retraining. In addition, scene images can be captured simultaneously for sensor fusion tasks, and we propose a method for automatic calibration between the point clouds and the captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthetic data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses/blind spots of the neural network can be fixed.
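    Using the captured scene images for sensor fusion requires projecting calibrated LiDAR points into the camera image. The paper's calibration method is not detailed in the abstract; below is a minimal sketch of the standard projection step such a calibration would feed into, assuming a known LiDAR-to-camera extrinsic and a camera intrinsic matrix (all names are illustrative, not the paper's code):

    ```python
    import numpy as np

    def project_lidar_to_image(points_xyz, extrinsic, intrinsic):
        """Project N x 3 LiDAR points into pixel coordinates.

        points_xyz: (N, 3) points in the LiDAR frame.
        extrinsic:  (4, 4) LiDAR-to-camera rigid transform (last row [0, 0, 0, 1]).
        intrinsic:  (3, 3) camera matrix K.
        Returns (M, 2) pixel coordinates of the points in front of the camera.
        """
        n = points_xyz.shape[0]
        homog = np.hstack([points_xyz, np.ones((n, 1))])   # (N, 4) homogeneous points
        cam = (extrinsic @ homog.T).T[:, :3]               # points in the camera frame
        cam = cam[cam[:, 2] > 0]                           # keep points in front (z > 0)
        pix = (intrinsic @ cam.T).T
        return pix[:, :2] / pix[:, 2:3]                    # perspective divide
    ```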

    Counterexample-Guided Data Augmentation

    We present a novel framework for augmenting data sets for machine learning based on counterexamples. Counterexamples are misclassified examples that have important properties for retraining and improving the model. Key components of our framework include a counterexample generator, which produces data items that are misclassified by the model, and error tables, a novel data structure that stores information pertaining to misclassifications. Error tables can be used to explain the model's vulnerabilities and are used to efficiently generate counterexamples for augmentation. We show the efficacy of the proposed framework by comparing it to classical augmentation techniques on a case study of object detection in autonomous driving based on deep neural networks.
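    The loop the abstract describes (generate misclassified items, record them in an error table, augment, retrain) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; `generator`, `oracle`, and `train_fn` are hypothetical callables standing in for the paper's counterexample generator, the model's prediction interface, and the training procedure:

    ```python
    from collections import Counter

    def augment_with_counterexamples(model, train_fn, generator, oracle,
                                     rounds=3, budget=100):
        """Counterexample-guided augmentation sketch.

        generator(n) yields n (input, true_label) candidates.
        oracle(model, x) returns the model's prediction for x.
        The 'error table' here is a plain list of misclassification records.
        """
        error_table = []   # rows: (input, true_label, predicted_label)
        augmented = []
        for _ in range(rounds):
            for x, y_true in generator(budget):
                y_pred = oracle(model, x)
                if y_pred != y_true:                       # counterexample found
                    error_table.append((x, y_true, y_pred))
                    augmented.append((x, y_true))
            if not augmented:
                break
            model = train_fn(model, augmented)             # retrain on counterexamples
        # Error tables can also be summarized to explain failure modes:
        confusions = Counter((y, p) for _, y, p in error_table)
        return model, error_table, confusions
    ```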

    Compositional Falsification of Cyber-Physical Systems with Machine Learning Components

    Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output of the learning components lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.
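    Falsifying an STL specification means searching for an execution whose quantitative robustness is negative. The sketch below checks a simple safety spec of the form G(distance > d_safe) on traces from a hypothetical `simulate` function; note it uses plain random search rather than the paper's compositional falsifier-plus-ML-analyzer loop, and the parameter ranges are illustrative assumptions:

    ```python
    import random

    def robustness_always_greater(trace, threshold):
        """Robustness of G(signal > threshold): min over time of (signal - threshold).
        A negative value means the trace violates the specification."""
        return min(v - threshold for v in trace)

    def falsify(simulate, n_trials=1000, d_safe=2.0):
        """Random-search falsifier: sample initial conditions, keep the worst trace.
        simulate(speed, gap) -> list of inter-vehicle distances over time (assumed)."""
        worst = (float("inf"), None)
        for _ in range(n_trials):
            speed = random.uniform(5.0, 35.0)   # initial ego speed (m/s), assumed range
            gap = random.uniform(5.0, 60.0)     # initial gap to lead vehicle (m)
            rho = robustness_always_greater(simulate(speed, gap), d_safe)
            if rho < worst[0]:
                worst = (rho, (speed, gap))
            if rho < 0:                         # falsifying execution found
                break
        return worst
    ```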

    RMT: Rule-based Metamorphic Testing for Autonomous Driving Models

    Deep neural network models are widely used for perception and control in autonomous driving. Recent work applies metamorphic testing but is limited to equality-based metamorphic relations and lacks the expressiveness to define inequality-based ones. To encode real-world traffic rules, domain experts must be able to express higher-order relations (e.g., a vehicle should decrease its speed by a certain ratio when there is a vehicle x meters ahead) and compositionality (e.g., a vehicle must decelerate more strongly when there is a vehicle ahead and the weather is rainy, with a proportional compounding effect on the test outcome). We design RMT, a declarative rule-based metamorphic testing framework. It provides three components that work in concert: (1) a domain-specific language that enables an expert to express higher-order, compositional metamorphic relations, (2) pluggable transformation engines built on a variety of image and graphics processing techniques, and (3) automated test generation that translates a human-written rule into a corresponding executable metamorphic relation and synthesizes meaningful inputs. Our evaluation using three driving models shows that RMT can generate meaningful test cases on which 89% of erroneous predictions are found by enabling higher-order metamorphic relations. Compositionality provides further aid in generating meaningful, synthesized inputs: 3,012 new images are generated by compositional rules. The detected erroneous predictions were manually examined and confirmed by six human judges as meaningful traffic rule violations. RMT is the first to expand automated testing capability for autonomous vehicles by enabling easy mapping of traffic regulations to executable metamorphic relations and to demonstrate the benefits of expressivity, customization, and pluggability.
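    An inequality-based metamorphic relation of the kind RMT targets, e.g. "after a vehicle is inserted x meters ahead, the predicted speed must drop by at least a given ratio," can be checked roughly as below. The `model.predict` interface and the `add_vehicle_ahead` transformation are hypothetical stand-ins for RMT's pluggable transformation engines:

    ```python
    def check_speed_decrease_relation(model, image, add_vehicle_ahead, ratio=0.1):
        """Inequality-based metamorphic relation sketch: after inserting a
        vehicle ahead, the predicted speed should decrease by at least
        `ratio` relative to the original prediction (speeds assumed positive).
        """
        original_speed = model.predict(image)
        transformed = add_vehicle_ahead(image)   # pluggable transformation engine
        new_speed = model.predict(transformed)
        # Relation violated -> erroneous prediction (a test failure)
        passed = new_speed <= (1.0 - ratio) * original_speed
        return passed, original_speed, new_speed
    ```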

    Comparing Offline and Online Testing of Deep Neural Networks: An Autonomous Car Case Study

    There is a growing body of research on developing testing techniques for Deep Neural Networks (DNNs). We distinguish two general modes of testing for DNNs: offline testing, where DNNs are tested as individual units based on test datasets obtained independently from the DNNs under test, and online testing, where DNNs are embedded into a specific application and tested in closed-loop mode in interaction with the application environment. In addition, we identify two sources for generating test datasets for DNNs: datasets obtained from real life and datasets generated by simulators. While offline testing can be used with datasets obtained from either source, online testing is largely confined to using simulators, since online testing within real-life applications can be time-consuming, expensive, and dangerous. In this paper, we study the following two important questions aiming to compare test datasets and testing modes for DNNs: First, can we use simulator-generated data as a reliable substitute for real-world data for the purpose of DNN testing? Second, how do online and offline testing results differ and complement each other? Though these questions are generally relevant to all autonomous systems, we study them in the context of automated driving systems where, as study subjects, we use DNNs automating end-to-end control of cars' steering actuators. Our results show that simulator-generated datasets are able to yield DNN prediction errors that are similar to those obtained by testing DNNs with real-life datasets. Further, offline testing is more optimistic than online testing, as many safety violations identified by online testing could not be identified by offline testing, while large prediction errors generated by offline testing always led to severe safety violations detectable by online testing.
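    The contrast between the two modes is easy to see in code: offline testing scores per-frame prediction error on a fixed dataset, while online testing feeds the model's actions back into a simulator and checks for safety violations. A minimal sketch, assuming a hypothetical simulator interface `env.reset()`/`env.step()` and an illustrative lane-departure threshold:

    ```python
    def offline_test(model, frames, labels):
        """Offline mode: mean absolute steering-angle error on a fixed dataset."""
        errors = [abs(model.predict(f) - y) for f, y in zip(frames, labels)]
        return sum(errors) / len(errors)

    def online_test(model, env, max_steps=1000, max_offset=1.5):
        """Online mode: drive in a closed loop; a safety violation here is a
        lane departure beyond `max_offset` meters from the lane center."""
        obs = env.reset()
        for step in range(max_steps):
            steering = model.predict(obs)           # the model's action feeds back...
            obs, lane_offset = env.step(steering)   # ...into the next observation
            if abs(lane_offset) > max_offset:
                return {"violation": True, "step": step}
        return {"violation": False, "step": max_steps}
    ```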