
    Collaborative Verification-Driven Engineering of Hybrid Systems

    Hybrid systems with both discrete and continuous dynamics are an important model for real-world cyber-physical systems. The key challenge is to ensure their correct functioning w.r.t. safety requirements. Promising techniques for ensuring safety are model-driven engineering, to develop hybrid systems in a well-defined and traceable manner, and formal verification, to prove their correctness. Their combination forms the vision of verification-driven engineering. Hybrid systems are often rather complex in that they require expertise from many domains (e.g., robotics, control systems, computer science, software engineering, and mechanical engineering). Moreover, despite the remarkable progress in automating formal verification of hybrid systems, the construction of proofs for complex systems often requires nontrivial human guidance, since hybrid systems verification tools solve undecidable problems. It is, thus, not uncommon for development and verification teams to consist of many players with diverse expertise. This paper introduces a verification-driven engineering toolset that extends our previous work on hybrid and arithmetic verification with tools for (i) graphical (UML) and textual modeling of hybrid systems, (ii) exchanging and comparing models and proofs, and (iii) managing verification tasks. This toolset makes it easier to tackle large-scale verification tasks.
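
    As a concrete illustration of the mixed dynamics such toolsets target, the following minimal Python sketch (a hypothetical thermostat, not part of the toolset described in the paper) pairs continuous temperature flow with discrete mode switches and checks a safety requirement on one simulated run:

        # Hypothetical hybrid-system sketch: two discrete modes (HEAT, OFF)
        # with mode-dependent continuous dynamics, integrated by forward Euler.
        def simulate_thermostat(temp0=18.0, dt=0.01, horizon=10.0,
                                low=19.0, high=22.0):
            """Return the (time, temperature, mode) trace of one hybrid run."""
            temp, mode, t = temp0, "HEAT", 0.0
            trace = []
            while t < horizon:
                # Continuous flow: a different ODE right-hand side in each mode.
                dtemp = 5.0 - 0.1 * temp if mode == "HEAT" else -0.1 * temp
                temp += dtemp * dt
                # Discrete jumps: guard conditions trigger mode switches.
                if mode == "HEAT" and temp >= high:
                    mode = "OFF"
                elif mode == "OFF" and temp <= low:
                    mode = "HEAT"
                trace.append((t, temp, mode))
                t += dt
            return trace

        # Simulation can only test individual runs; the paper's point is that
        # formal verification proves such safety requirements for all runs.
        run = simulate_thermostat()
        assert all(15.0 <= x <= 25.0 for _, x, _ in run)  # "temp stays in [15, 25]"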

    CHARDA: Causal Hybrid Automata Recovery via Dynamic Analysis

    We propose and evaluate a new technique for learning hybrid automata automatically by observing the runtime behavior of a dynamical system. Working from a sequence of continuous state values and predicates about the environment, CHARDA recovers the distinct dynamic modes, learns a model for each mode from a given set of templates, and postulates causal guard conditions which trigger transitions between modes. Our main contribution is the use of information-theoretic measures (1) as a cost function for data segmentation and model selection to penalize over-fitting and (2) to determine the likely causes of each transition. CHARDA is easily extended with different classes of model templates, fitting methods, or predicates. In our experiments on a complex videogame character, CHARDA successfully discovers a reasonable over-approximation of the character's true behaviors. Our results also compare favorably against recent work in automatically learning probabilistic timed automata in an aircraft domain: CHARDA exactly learns the modes of these simpler automata. Comment: 7 pages, 2 figures. Accepted for IJCAI 2017.
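
    A rough, hypothetical sketch of the information-criterion idea (not the authors' implementation): fit a mode template to each candidate segment of the trace and score the segmentation with a criterion such as BIC, so that an extra mode is accepted only when it improves the fit enough to offset the complexity penalty:

        import numpy as np

        def bic_of_fit(t, x, degree=1):
            """BIC of one polynomial 'mode template' fitted to a segment."""
            coeffs = np.polyfit(t, x, degree)
            residuals = x - np.polyval(coeffs, t)
            n, k = len(x), degree + 1
            rss = float(np.sum(residuals ** 2)) + 1e-12   # guard against log(0)
            return n * np.log(rss / n) + k * np.log(n)    # fit cost + complexity penalty

        def best_split(t, x, min_len=5):
            """Compare 'one mode' against 'two modes split at i'; keep the cheaper."""
            best_cost, best_idx = bic_of_fit(t, x), None
            for i in range(min_len, len(x) - min_len):
                cost = bic_of_fit(t[:i], x[:i]) + bic_of_fit(t[i:], x[i:])
                if cost < best_cost:
                    best_cost, best_idx = cost, i
            return best_cost, best_idx

        # A trace that switches from one linear regime to another:
        # the split near index 100 wins despite the extra parameters.
        t = np.linspace(0.0, 2.0, 200)
        x = np.where(t < 1.0, 2.0 * t, 2.0 - 1.5 * (t - 1.0))
        print(best_split(t, x))

    CHARDA additionally postulates causal guard predicates for each discovered transition; the sketch above covers only the segmentation and model-selection step.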

    Falsification of Cyber-Physical Systems with Robustness-Guided Black-Box Checking

    For exhaustive formal verification, industrial-scale cyber-physical systems (CPSs) are often too large and complex, and lightweight alternatives (e.g., monitoring and testing) have attracted the attention of both industrial practitioners and academic researchers. Falsification is a popular testing method for CPSs that utilizes stochastic optimization. In state-of-the-art falsification methods, the results of previous falsification trials are discarded, and each new trial starts without any prior knowledge. To concisely memorize and exploit such prior information about the CPS model, we employ black-box checking (BBC), which is a combination of automata learning and model checking. Moreover, we enhance BBC using the robust semantics of STL formulas, which is the essential gadget in falsification. Our experimental results suggest that our robustness-guided BBC outperforms a state-of-the-art falsification tool. Comment: Accepted to HSCC 2020.
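
    To illustrate what the robust semantics of STL contributes, here is a minimal hypothetical Python sketch (not the paper's tool): rather than a Boolean verdict, each formula is assigned a real-valued robustness, the falsifier minimizes that value, and a negative result witnesses a violation:

        def robustness_always_below(signal, threshold):
            """Robustness of G(x < threshold) on a sampled trace:
            min over time of (threshold - x). Negative <=> the spec is falsified."""
            return min(threshold - x for x in signal)

        def robustness_eventually_above(signal, threshold):
            """Robustness of F(x > threshold): max over time of (x - threshold)."""
            return max(x - threshold for x in signal)

        # A speed trace that briefly exceeds a 120 km/h requirement:
        speed = [80, 95, 110, 123, 118, 100]
        print(robustness_always_below(speed, 120))  # -3: falsified, with margin 3

    This quantitative signal is what lets stochastic optimization, guided here by the automaton learned during black-box checking, steer the search toward inputs whose robustness drops below zero.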