2 research outputs found

    Falsification of Cyber-Physical Systems with Robustness-Guided Black-Box Checking

    For exhaustive formal verification, industrial-scale cyber-physical systems (CPSs) are often too large and complex, so lightweight alternatives (e.g., monitoring and testing) have attracted attention from both industrial practitioners and academic researchers. Falsification is a popular testing method for CPSs that utilizes stochastic optimization. In state-of-the-art falsification methods, the results of previous falsification trials are discarded, and each attempt starts without any prior knowledge. To concisely memorize such prior information on the CPS model and exploit it, we employ black-box checking (BBC), a combination of automata learning and model checking. Moreover, we enhance BBC using the robust semantics of STL formulas, the essential gadget in falsification. Our experimental results suggest that our robustness-guided BBC outperforms a state-of-the-art falsification tool.
    Comment: Accepted to HSCC 202
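The abstract's core ingredients — robust semantics of an STL formula and falsification via stochastic optimization — can be illustrated with a minimal sketch. This is not the paper's method (it uses BBC with automata learning, not plain random search); the model `simulate`, the spec "always(x < c)", and all parameter ranges below are hypothetical, chosen only to show how robustness guides the search for a counterexample.

```python
import random

def rob_always_lt(signal, c):
    """Robustness of G(x < c) on a discrete-time signal:
    min over time of (c - x[t]). Positive => spec holds on this
    trace; negative => the trace is a counterexample."""
    return min(c - x for x in signal)

def simulate(param):
    # Hypothetical CPS model: a response whose peak grows with `param`.
    return [param * (t / 10.0) * (1 - t / 20.0) for t in range(21)]

def falsify(c, trials=200, seed=0):
    """Random-search falsification: sample inputs, keep the trace with
    the lowest robustness, and stop as soon as it goes negative."""
    rng = random.Random(seed)
    best_rob, best_param = float("inf"), None
    for _ in range(trials):
        p = rng.uniform(0.0, 10.0)
        r = rob_always_lt(simulate(p), c)
        if r < best_rob:
            best_rob, best_param = r, p
        if r < 0:  # counterexample found
            break
    return best_rob, best_param
```

State-of-the-art tools replace the random sampler with a stochastic optimizer (e.g., CMA-ES or simulated annealing) that minimizes the same robustness value; the paper's contribution is to additionally carry knowledge between trials via a learned automaton.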

    Verification of machine learning based cyber-physical systems: a comparative study

    In this paper, we compare existing formal methods for verifying the safety of cyber-physical systems with machine-learning-based controllers. We focus on a particular form of machine-learning-based controller, namely a classifier based on multiple neural networks, an architecture that is particularly interesting for embedded applications. We compare both exact and approximate verification techniques on several real-world benchmarks, such as a collision avoidance system for unmanned aerial vehicles.
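To make the exact-vs-approximate distinction concrete, here is a minimal sketch of interval bound propagation (IBP), a standard approximate technique for neural-network verification: it pushes an input box through affine and ReLU layers to get a sound (but possibly loose) over-approximation of the output range. The tiny 2-2-1 network and the safety threshold below are invented for illustration and are not taken from the paper's benchmarks.

```python
def ibp_layer(l, u, W, b):
    """Propagate interval bounds [l, u] through an affine layer y = W x + b.
    For each output, positive weights take the matching input bound and
    negative weights take the opposite one."""
    lo, hi = [], []
    for row, bias in zip(W, b):
        lo.append(bias + sum(w * (l[i] if w >= 0 else u[i])
                             for i, w in enumerate(row)))
        hi.append(bias + sum(w * (u[i] if w >= 0 else l[i])
                             for i, w in enumerate(row)))
    return lo, hi

def relu(l, u):
    """ReLU is monotone, so it maps [l, u] to [max(0, l), max(0, u)]."""
    return [max(0.0, x) for x in l], [max(0.0, x) for x in u]

# Hypothetical 2-2-1 ReLU network, inputs in the box [0,1] x [0,1].
l1, u1 = ibp_layer([0.0, 0.0], [1.0, 1.0],
                   [[1.0, -1.0], [1.0, 1.0]], [0.0, 0.0])
l2, u2 = relu(l1, u1)
lo, hi = ibp_layer(l2, u2, [[1.0, 1.0]], [0.0])
# If hi[0] stays below a safety threshold, the property is verified;
# if not, the result is inconclusive (IBP may be loose), which is
# where exact methods such as SMT- or MILP-based solvers come in.
```

Exact techniques decide the property precisely at higher cost; approximate ones like this trade completeness for speed, which is the trade-off the comparative study examines.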