133 research outputs found

    Falsification-Based Robust Adversarial Reinforcement Learning

    Reinforcement learning (RL) has achieved tremendous progress in solving various sequential decision-making problems, e.g., control tasks in robotics. However, RL methods often fail to generalize to safety-critical scenarios because policies overfit to their training environments. Robust adversarial reinforcement learning (RARL) was previously proposed to train an adversarial network that applies disturbances to the system, which improves robustness in test scenarios. A drawback of neural-network-based adversaries is that integrating system requirements is difficult without handcrafting sophisticated reward signals. Safety falsification methods allow one to find a set of initial conditions and an input sequence such that the system violates a given property formulated in temporal logic. In this paper, we propose falsification-based RARL (FRARL), the first generic framework that integrates temporal-logic falsification into adversarial learning to improve policy robustness. With the falsification method, no extra reward function needs to be constructed for the adversary. We evaluate our approach on a braking assistance system and an adaptive cruise control system for autonomous vehicles. Experiments show that policies trained with a falsification-based adversary generalize better and violate the safety specification less often in test scenarios than policies trained without an adversary or with an adversarial network. Comment: 11 pages, 3 figures
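
    As a rough illustration of the training loop described above, here is a minimal sketch of falsification playing the adversary role. The toy car-following environment, the assumed property "always keep the gap above 2 m", the random-search falsifier, and the gradient-free policy-update stub are all illustrative assumptions, not the FRARL implementation.

    import numpy as np

    def robustness(gap_trace, d_safe=2.0):
        # Robustness of the assumed property "always gap > d_safe":
        # positive means satisfied, negative means violated.
        return float(np.min(gap_trace - d_safe))

    def rollout(policy_gain, disturbance, horizon=50, dt=0.1):
        # Toy car-following model: the adversary's disturbance perturbs the gap,
        # the ego policy counteracts by reacting harder as the gap shrinks.
        gap, trace = 10.0, []
        for t in range(horizon):
            ego_reaction = policy_gain * max(0.0, 5.0 - gap)
            gap = max(0.0, gap + dt * (disturbance[t] + ego_reaction))
            trace.append(gap)
        return np.array(trace)

    def falsify(policy_gain, n_samples=200, horizon=50, seed=0):
        # Adversary role: a plain random-search falsifier that looks for the
        # disturbance sequence minimizing robustness (a dedicated falsification
        # tool would replace this inner loop).
        rng = np.random.default_rng(seed)
        best_rob, best_dist = np.inf, None
        for _ in range(n_samples):
            cand = rng.uniform(-1.0, 1.0, size=horizon)
            rob = robustness(rollout(policy_gain, cand))
            if rob < best_rob:
                best_rob, best_dist = rob, cand
        return best_dist, best_rob

    policy_gain = 0.5
    for it in range(5):
        adv_input, rob = falsify(policy_gain, seed=it)
        # Protagonist update stub: nudge the policy toward higher robustness on the
        # counterexample; a real implementation would run an RL update here.
        policy_gain += 0.05 if rob < 0 else -0.01
        print(f"iteration {it}: worst-case robustness {rob:.3f}")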

    Test Case Generation for Drivability Requirements of an Automotive Cruise Controller: An Experience with an Industrial Simulator

    Automotive software development requires engineers to test their systems to detect violations of both functional and drivability requirements. Functional requirements define the functionality of the automotive software. Drivability requirements refer to the driver's perception of the interactions with the vehicle; for example, they typically require limiting the acceleration and jerk perceived by the driver to given thresholds. While functional requirements are extensively considered in the research literature, drivability requirements receive less attention. This industrial paper describes our experience assessing the usefulness of an automated search-based software testing (SBST) framework in generating failure-revealing test cases for functional and drivability requirements. Our experience concerns VI-CarRealTime, an industrial virtual modeling and simulation environment widely used in the automotive domain. We designed a Cruise Control system in Simulink for a four-wheel vehicle in an iterative fashion, producing 21 model versions. For each version of the model, we used the SBST framework to search for test cases revealing requirement violations. Our results show that the SBST framework identified a failure-revealing test case for 66.7% of our model versions, requiring, on average, 245.9 s and 3.8 iterations. We present lessons learned, reflect on the generality of our results, and discuss how our results improve the state of practice. Comment: 10 pages plus 2 pages of bibliography, 10 figures, and 6 tables
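
    To illustrate what such a search can look like for a drivability requirement, here is a minimal sketch. The toy cruise-controller plant, the comfort thresholds, and the simple mutate-and-keep search are assumptions made for the example; the paper itself drives the VI-CarRealTime simulator and a Simulink model through an SBST framework.

    import numpy as np

    A_MAX, J_MAX = 2.5, 10.0      # assumed comfort thresholds [m/s^2] and [m/s^3]
    DT = 0.1                      # simulation step [s]

    def simulate_cruise_control(set_points):
        # Toy plant: first-order tracking of piecewise-constant speed set-points,
        # each held for 2 s.
        v, speeds = 0.0, []
        for target in np.repeat(set_points, 20):
            v += 0.1 * (target - v) * DT
            speeds.append(v)
        return np.array(speeds)

    def drivability_fitness(set_points):
        # Smallest margin to the acceleration/jerk thresholds; a negative value
        # means the drivability requirement is violated by this test case.
        v = simulate_cruise_control(set_points)
        accel = np.diff(v) / DT
        jerk = np.diff(accel) / DT
        return min(A_MAX - np.max(np.abs(accel)), J_MAX - np.max(np.abs(jerk)))

    rng = np.random.default_rng(1)
    best = rng.uniform(0.0, 30.0, size=5)          # a test case = 5 speed set-points [m/s]
    best_fit = drivability_fitness(best)
    for _ in range(200):
        # Minimize the fitness: mutate the current test case and keep improvements.
        cand = np.clip(best + rng.normal(0.0, 2.0, size=5), 0.0, 30.0)
        fit = drivability_fitness(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    print("best fitness:", round(best_fit, 3), "(negative means violation)")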

    Machine Learning Based Methods for Virtual Validation of Autonomous Driving

    During the last decade, automotive manufacturers have introduced increasingly capable driving automation functions in consumer vehicles. As the functionality becomes more advanced, the task of driving moves from the human to the car. Hence, making sure that autonomous driving (AD) functions are reliable and safe is of high importance. Increased levels of automation often result in more complex safety validation procedures that may be expensive, time-consuming, and dangerous to perform. One way to address these problems is to move parts of the validation to the virtual domain. In this thesis, we investigate methods for validating AD functionality in virtual simulation environments, using methods from machine learning and statistics. The main focus is on making virtual simulations resemble real-world conditions as closely as possible. We tackle this with an approach based on sensor error modeling. Specifically, we develop a statistical sensor error model that can be used to make ideal object measurements from simulations resemble measurements obtained from the perception system of a real-world vehicle. The model, which is based on autoregressive recurrent mixture density networks, was trained on sensor error data collected on European roads. The second part considers system falsification using reinforcement learning (RL), a flexible framework for validating system safety that naturally allows for the integration of, e.g., sensor error models. We compare the results of system falsification using RL to an exact approach based on reachability analysis. With this thesis, we take steps towards more realistic statistical sensor error models for virtual simulation environments. We also demonstrate that approximate methods based on reinforcement learning may serve as an alternative to reachability analysis for the validation of high-dimensional systems. Finally, we connect the RL falsification application to sensor error modeling as a possible direction for future research.
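
    The following is a minimal sketch of the sensor-error-model interface described above, i.e., turning ideal object measurements from a simulator into more realistic "perceived" measurements. A hand-written two-component Gaussian mixture with an AR(1) mean stands in for the trained autoregressive recurrent mixture density network, and all weights, means, and standard deviations are assumed values, not fitted ones.

    import numpy as np

    rng = np.random.default_rng(2)

    def sample_error(prev_error):
        # Draw the next longitudinal position error given the previous one:
        # a nominal component with AR(1) persistence plus a rare heavy-tailed
        # component for outliers (mixture parameters are assumed, not fitted).
        weights = np.array([0.8, 0.2])
        means = np.array([0.7 * prev_error, 0.0])
        stds = np.array([0.1, 0.8])
        k = rng.choice(2, p=weights)
        return rng.normal(means[k], stds[k])

    def apply_error_model(ideal_positions):
        # Map ideal simulator measurements to "perceived" measurements.
        error, noisy = 0.0, []
        for x in ideal_positions:
            error = sample_error(error)
            noisy.append(x + error)
        return np.array(noisy)

    ideal = np.linspace(50.0, 20.0, 100)           # ideal distance to a lead object [m]
    perceived = apply_error_model(ideal)
    print("max absolute error [m]:", round(float(np.max(np.abs(perceived - ideal))), 2))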

    Formal Verification of Safety Critical Autonomous Systems via Bayesian Optimization

    As control systems become increasingly complex, there is a pressing need for systematic ways of verifying them. To address this concern, there has been significant work on developing test generation schemes for black-box control architectures. These schemes test a black-box control architecture's ability to satisfy its control objectives when these objectives are expressed as operational specifications through temporal logic formulae. Our work extends these prior, model-based results by lower-bounding the probability with which the black-box system satisfies its operational specification when subject to a pre-specified set of environmental phenomena. We do so by systematically generating tests that minimize a Lipschitz-continuous robustness measure for the operational specification. We demonstrate our method with experimental results, wherein we show that our framework can provide a reasonable lower bound on the probability of specification satisfaction.
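
    To make the test-generation idea concrete, here is a minimal sketch of Bayesian optimization minimizing a robustness measure over a single environment parameter. The toy braking scenario, the assumed specification "always keep the gap above 2 m", the RBF kernel settings, and the lower-confidence-bound acquisition are illustrative choices, not the paper's setup.

    import numpy as np

    def robustness(brake_intensity):
        # Assumed robustness of "always gap > 2 m" in a toy braking scenario;
        # negative values mean the specification is violated.
        gap, v_rel = 10.0, 0.0
        for _ in range(100):
            v_rel += 0.1 * (0.5 - brake_intensity)   # lead brakes, ego partly compensates
            gap = max(0.0, gap + 0.1 * v_rel)
        return gap - 2.0

    def rbf(a, b, length_scale=0.3):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

    def gp_posterior(x_train, y_train, x_query, noise=1e-4):
        # Standard Gaussian-process regression posterior with an RBF kernel.
        K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
        Ks = rbf(x_query, x_train)
        mean = Ks @ np.linalg.solve(K, y_train)
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        return mean, np.maximum(var, 1e-12)

    rng = np.random.default_rng(3)
    x = rng.uniform(0.0, 3.0, size=3)                # initial brake-intensity samples
    y = np.array([robustness(v) for v in x])
    grid = np.linspace(0.0, 3.0, 200)
    for _ in range(15):
        mean, var = gp_posterior(x, y, grid)
        acq = mean - 2.0 * np.sqrt(var)              # lower confidence bound: explore where
        x_next = grid[np.argmin(acq)]                # low robustness is still plausible
        x = np.append(x, x_next)
        y = np.append(y, robustness(x_next))
    print("minimum observed robustness:", round(float(y.min()), 3))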

    Search-based Test Generation for Automated Driving Systems: From Perception to Control Logic

    Automated driving systems are in an intensive research and development stage, and the companies developing them aim to deploy them on public roads in the near future. Guaranteeing safe operation of these systems is crucial, as they are planned to carry passengers and share the road with other vehicles and pedestrians. Yet there is no agreed-upon approach for how, and in what detail, these systems should be tested. Different organizations have different testing approaches, and one common approach is to combine simulation-based testing with real-world driving. One expectation of fully automated vehicles is that they never cause an accident. However, an automated vehicle may not be able to avoid all collisions, e.g., collisions caused by other road occupants. Hence, it is important for system designers to understand the boundary-case scenarios in which an autonomous vehicle can no longer avoid a collision. Besides safety, there are other expectations of automated vehicles, such as comfortable driving and minimal fuel consumption. All safety and functional expectations of an automated driving system should be captured by a set of system requirements. It is challenging to create requirements that are unambiguous and usable for the design, testing, and evaluation of automated driving systems. Another challenge is to define useful metrics for assessing testing quality, because it is in general impossible to test every possible scenario. The goal of this dissertation is to formalize the theory for testing automated vehicles. Various methods for automatic test generation for automated driving systems in simulation environments are presented and compared. The contributions presented in this dissertation include (i) new metrics that can be used to discover the boundary cases between safe and unsafe driving conditions, (ii) a new approach that combines combinatorial testing and optimization-guided test generation methods, (iii) approaches that utilize global optimization methods and random exploration to generate critical vehicle and pedestrian trajectories for testing purposes, and (iv) a publicly available simulation-based automated vehicle testing framework that enables application of the existing testing approaches in the literature, including the new approaches presented in this dissertation.
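
    As a rough illustration of contribution (ii), combining a combinatorial layer over discrete scenario parameters with optimization-guided search over a continuous parameter, here is a minimal sketch. The pedestrian-crossing scenario, its parameters, and the minimum-distance criticality metric are assumptions for the example, not the dissertation's framework.

    import itertools
    import numpy as np

    def min_distance(ego_speed, road_friction, ped_start_time):
        # Toy scenario: the ego vehicle approaches a crossing point 40 m ahead and
        # starts braking after traveling 20 m; a pedestrian begins crossing at
        # ped_start_time. Returns the minimum ego-pedestrian separation; small
        # values indicate boundary or unsafe cases.
        ego_pos, speed, ped_lat, best = 0.0, ego_speed, 4.0, np.inf
        for t in np.arange(0.0, 8.0, 0.05):
            if t > ped_start_time:
                ped_lat = max(0.0, ped_lat - 1.4 * 0.05)              # pedestrian walks across
            if ego_pos > 20.0:
                speed = max(0.0, speed - 8.0 * road_friction * 0.05)  # friction-limited braking
            ego_pos += speed * 0.05
            best = min(best, float(np.hypot(40.0 - ego_pos, ped_lat)))
        return best

    rng = np.random.default_rng(4)
    worst = (np.inf, None)
    # Combinatorial layer: enumerate discrete combinations of speed and friction.
    for ego_speed, friction in itertools.product([10.0, 15.0, 20.0], [0.4, 0.7, 1.0]):
        # Optimization layer: local random search over the pedestrian start time.
        start = rng.uniform(0.0, 5.0)
        best_local = min_distance(ego_speed, friction, start)
        for _ in range(100):
            cand = float(np.clip(start + rng.normal(0.0, 0.5), 0.0, 5.0))
            d = min_distance(ego_speed, friction, cand)
            if d < best_local:
                start, best_local = cand, d
        if best_local < worst[0]:
            worst = (best_local, (ego_speed, friction, start))
    print("most critical scenario (min distance, parameters):", worst)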