
    Falsification of Cyber-Physical Systems with Robustness-Guided Black-Box Checking

    For exhaustive formal verification, industrial-scale cyber-physical systems (CPSs) are often too large and complex, and lightweight alternatives (e.g., monitoring and testing) have attracted the attention of both industrial practitioners and academic researchers. Falsification is a popular testing method for CPSs that utilizes stochastic optimization. In state-of-the-art falsification methods, the results of previous falsification trials are discarded, and each falsification attempt starts without any prior knowledge. To concisely memorize and exploit such prior information about the CPS model, we employ black-box checking (BBC), a combination of automata learning and model checking. Moreover, we enhance BBC using the robust semantics of STL formulas, the essential ingredient of falsification. Our experimental results suggest that our robustness-guided BBC outperforms a state-of-the-art falsification tool. Comment: Accepted to HSCC 2020.
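
    The core mechanism above, robust semantics steering a stochastic optimizer toward a violation, can be illustrated compactly. The sketch below is a minimal stand-in rather than the paper's BBC pipeline: the toy model, the specification G (x < c), and plain random search as the optimizer are all illustrative assumptions.

```python
# Minimal robustness-guided falsification sketch (illustrative assumptions:
# toy model, spec G (x < c), random search as the stochastic optimizer).
import numpy as np

def simulate(u, horizon=100):
    """Toy black-box CPS: a leaky integrator driven by input parameter u."""
    x, trace = 0.0, []
    for _ in range(horizon):
        x = 0.95 * x + 0.1 * u
        trace.append(x)
    return np.array(trace)

def robustness(trace, c=1.0):
    """Robust semantics of G (x < c): min over time of (c - x(t)).
    Positive means satisfied; negative means falsified."""
    return np.min(c - trace)

def falsify(trials=200, seed=0):
    """Minimize robustness over inputs; a negative value is a counterexample."""
    rng = np.random.default_rng(seed)
    best_u, best_rob = None, np.inf
    for _ in range(trials):
        u = rng.uniform(-2.0, 2.0)
        rob = robustness(simulate(u))
        if rob < best_rob:
            best_u, best_rob = u, rob
        if best_rob < 0:  # specification falsified
            break
    return best_u, best_rob

if __name__ == "__main__":
    u, rob = falsify()
    print(f"input={u:.3f}, robustness={rob:.3f}")
```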

    ViSpec: A graphical tool for elicitation of MTL requirements

    One of the main barriers preventing widespread use of formal methods is the elicitation of formal specifications. Formal specifications facilitate the testing and verification process for safety-critical robotic systems. However, handling the intricacies of formal languages is difficult and requires a high level of expertise in formal logics that many system developers do not have. In this work, we present a graphical tool designed for the development and visualization of formal specifications by people who do not have training in formal logic. The tool enables users to develop specifications using a graphical formalism, which is then automatically translated to Metric Temporal Logic (MTL). To evaluate the effectiveness of our tool, we designed and conducted a usability study with cohorts from the academic student community and industry. Our results indicate that both groups were able to define formal requirements with high levels of accuracy. Finally, we present applications of our tool for defining specifications for robotic surgery and safe autonomous quadcopter operation. Comment: Technical report for the paper to be published in the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems held in Hamburg, Germany. Includes 10 pages and 19 figures.
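
    To make the translation step concrete, the sketch below builds MTL formula strings from simple templates; the template names and the textual MTL syntax are assumptions made for illustration, not ViSpec's actual internals.

```python
# Hypothetical template-to-MTL translation in the spirit of a graphical tool;
# the template names and textual MTL syntax here are illustrative assumptions.
def always(a, b, phi):
    return f"G_[{a},{b}] ({phi})"

def eventually(a, b, phi):
    return f"F_[{a},{b}] ({phi})"

def implies(p, q):
    return f"({p}) -> ({q})"

# "Whenever the quadcopter drops below 1 m, it recovers within 3 s":
spec = always(0, 60, implies("altitude < 1.0",
                             eventually(0, 3, "altitude >= 1.0")))
print(spec)  # G_[0,60] ((altitude < 1.0) -> (F_[0,3] (altitude >= 1.0)))
```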

    An Efficient Algorithm for Monitoring Practical TPTL Specifications

    We provide a dynamic programming algorithm for the monitoring of a fragment of Timed Propositional Temporal Logic (TPTL) specifications. This fragment of TPTL, which is more expressive than Metric Temporal Logic, is characterized by independent time variables, which enable the elicitation of complex real-time requirements. For this fragment, we provide an efficient polynomial-time algorithm for off-line monitoring of finite traces. Finally, we provide experimental results on a prototype implementation of our tool in order to demonstrate its feasibility in practical applications.
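
    The flavor of such offline monitoring can be sketched on a simpler MTL-style bounded-response property; the TPTL fragment in the paper, with independent time variables, is strictly more expressive than this illustrative example.

```python
# Offline monitoring sketch for G (p -> F_[0,c] q) over a finite Boolean
# trace with unit-spaced samples; a simplified stand-in, not the paper's
# TPTL algorithm.
def monitor(p, q, c):
    """True iff every step where p holds is answered by q within c steps."""
    n = len(p)
    prefix = [0] * (n + 1)          # prefix[i] = number of q-True in q[:i]
    for i, qi in enumerate(q):
        prefix[i + 1] = prefix[i] + int(qi)
    for i in range(n):
        j = min(i + c, n - 1)       # response window [i, i+c], clipped
        if p[i] and prefix[j + 1] - prefix[i] == 0:
            return False            # request at step i never answered in time
    return True

p = [False, True, False, False, True, False]
q = [False, False, False, True, False, True]
print(monitor(p, q, 2))  # True: requests at steps 1 and 4 answered at 3 and 5
```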

    Conformance Testing as Falsification for Cyber-Physical Systems

    In Model-Based Design of Cyber-Physical Systems (CPS), it is often desirable to develop several models of varying fidelity. Models of different fidelity levels can enable mathematical analysis of the model, control synthesis, faster simulation, etc. Furthermore, when transitioning (automatically or manually) from a model to its implementation on an actual computational platform, two different versions of the same system are again being developed. In all of these cases, it is necessary to define a rigorous notion of conformance between different models and between models and their implementations. This paper argues that conformance should be a measure of distance between systems. Although a range of theoretical distance notions exists, no way to compute such distances for industrial-size systems and models has been proposed yet. This paper addresses exactly this problem. A universal notion of conformance as closeness between systems is rigorously defined, and evidence is presented that it implies a number of other application-dependent conformance notions. An algorithm for detecting that two systems are not conformant is then proposed, which uses existing proven tools. A method is also proposed to measure the degree of conformance between two systems. The results are demonstrated on a range of models.
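
    One concrete instance of "conformance as distance" is a (tau, epsilon)-closeness measure between sampled trajectories, sketched below; the precise notion and algorithm in the paper may differ in details.

```python
# Sketch of a (tau, epsilon)-closeness degree between two sampled
# trajectories; an illustrative stand-in for the paper's conformance notion.
import numpy as np

def conformance_degree(t1, y1, t2, y2, tau):
    """Smallest epsilon such that every sample of one trajectory has a sample
    of the other within time tau and value distance epsilon, both ways."""
    def directed(ta, ya, tb, yb):
        eps = 0.0
        for ti, yi in zip(ta, ya):
            mask = np.abs(tb - ti) <= tau     # candidate time-matched samples
            if not mask.any():
                return np.inf                 # no temporal match at all
            eps = max(eps, np.min(np.abs(yb[mask] - yi)))
        return eps
    return max(directed(t1, y1, t2, y2), directed(t2, y2, t1, y1))

# Two simulations of "the same" system at different fidelity levels:
t = np.linspace(0.0, 10.0, 101)
y_hi = np.sin(t)                              # high-fidelity model output
y_lo = np.sin(t - 0.05)                       # slightly time-shifted variant
print(conformance_degree(t, y_hi, t, y_lo, tau=0.1))
```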

    Safety Under Uncertainty: Tight Bounds with Risk-Aware Control Barrier Functions

    We propose a novel class of risk-aware control barrier functions (RA-CBFs) for the control of stochastic safety-critical systems. Leveraging a result from the stochastic level-crossing literature, we deviate from the martingale theory currently used in stochastic CBF techniques and prove that RA-CBF-based control synthesis confers a tighter upper bound on the probability of the system becoming unsafe within a finite time interval than existing approaches. We highlight the advantages of our proposed approach over the state of the art via a comparative study on a mobile-robot example, and further demonstrate its viability on an autonomous-vehicle highway merging problem in dense traffic. Comment: 7 pages, 4 figures, 5 tables; accepted at ICRA 2023.
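
    The deterministic machinery that RA-CBFs extend can be sketched as a min-norm safety filter; the paper's risk-aware condition, derived from level-crossing bounds, replaces the deterministic inequality below and is not reproduced here.

```python
# Min-norm CBF safety filter sketch for a 1-D integrator x' = u with safe
# set h(x) = 1 - x^2 >= 0; the deterministic condition h' + alpha*h >= 0
# stands in for the paper's risk-aware constraint.
def safety_filter(x, u_nom, alpha=1.0):
    h = 1.0 - x**2
    a = -2.0 * x              # coefficient of u in h'(x, u) = -2*x*u
    b = alpha * h             # constant term of the constraint a*u + b >= 0
    if a * u_nom + b >= 0.0:  # nominal input already satisfies the CBF
        return u_nom
    if abs(a) < 1e-9:         # constraint independent of u; nothing to filter
        return u_nom
    return -b / a             # closed-form min-norm projection onto a*u+b >= 0

# Nominal controller pushes toward the safety boundary; the filter clips it:
print(safety_filter(x=0.9, u_nom=1.0))  # ~0.106 instead of 1.0
```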

    Robust Conformal Prediction for STL Runtime Verification under Distribution Shift

    Cyber-physical systems (CPS) designed in simulators behave differently in the real world. Once they are deployed, we would hence like to predict system failures during runtime. We propose robust predictive runtime verification (RPRV) algorithms for signal temporal logic (STL) tasks on general stochastic CPS. The RPRV problem faces several challenges: (1) there may not be sufficient data on the behavior of the deployed CPS, and (2) predictive models are based on a distribution over system trajectories encountered during the design phase, i.e., there may be a distribution shift during deployment. To address these challenges, we assume knowledge of an upper bound on the statistical distance (in terms of an f-divergence) between the trajectory distributions at deployment and design time, and we utilize techniques based on robust conformal prediction. Motivated by our results in [1], we construct an accurate and an interpretable RPRV algorithm. We use a trajectory prediction model to estimate the system behavior at runtime and robust conformal prediction to obtain probabilistic guarantees that account for distribution shifts. We precisely quantify the relationship between calibration data, desired confidence, and permissible distribution shift. To the best of our knowledge, these are the first statistically valid algorithms under distribution shift in this setting. We empirically validate our algorithms on a Franka manipulator within the NVIDIA Isaac Sim environment.
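
    The robust conformal prediction step can be sketched as follows. As a simplification, this stand-in uses a total-variation bound eps in place of the paper's general f-divergence machinery; for total variation, inflating the miscoverage budget by eps suffices.

```python
# Robust split conformal prediction sketch: calibrate a threshold on
# nonconformity scores so coverage holds even under a total-variation
# shift of at most eps (a simplified stand-in for f-divergence bounds).
import numpy as np

def robust_threshold(scores, delta, eps):
    """Threshold exceeded by a fresh score with prob. <= delta, even if the
    deployment distribution is within total variation eps of calibration."""
    n = len(scores)
    level = 1.0 - delta + eps            # inflated coverage level
    k = int(np.ceil((n + 1) * level))    # conformal quantile index
    if k > n:
        return np.inf                    # not enough calibration data
    return np.sort(scores)[k - 1]

rng = np.random.default_rng(0)
cal_scores = rng.normal(0.0, 1.0, size=500)  # e.g., trajectory prediction errors
print(robust_threshold(cal_scores, delta=0.1, eps=0.05))
```

    The same structure makes the trade-off in the abstract explicit: a larger permissible shift eps or a smaller delta pushes the quantile index higher, and hence demands more calibration data before the threshold is finite.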

    Neural Network Repair with Reachability Analysis

    Safety is a critical concern for the next generation of autonomy, which is likely to rely heavily on deep neural networks (DNNs) for perception and control. Formally verifying the safety and robustness of well-trained DNNs and learning-enabled cyber-physical systems (LE-CPS) under adversarial attacks, model uncertainties, and sensing errors is essential for safe autonomy. This research proposes a framework to repair unsafe DNNs in safety-critical systems with reachability analysis. The repair process is inspired by adversarial training, which has demonstrated high effectiveness in improving the safety and robustness of DNNs. Unlike traditional adversarial training approaches, where adversarial examples are obtained from random attacks and may not be representative of all unsafe behaviors, our repair process uses reachability analysis to compute the exact unsafe regions and identify sufficiently representative examples to enhance the efficacy and efficiency of the adversarial training. The performance of our repair framework is evaluated on two types of benchmarks without safe models as references. One is a DNN controller for aircraft collision avoidance with access to training data. The other is a rocket lander, where our framework can be seamlessly integrated with the well-known deep deterministic policy gradient (DDPG) reinforcement learning algorithm. The experimental results show that our framework can successfully repair all instances on multiple safety specifications with negligible performance degradation. In addition, to increase the computational and memory efficiency of the reachability analysis algorithm in the framework, we propose a depth-first-search algorithm that combines an existing exact analysis method with an over-approximation approach based on a new set representation. Experimental results show that our method achieves a five-fold improvement in runtime and a two-fold improvement in memory usage compared to exact analysis.
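
    The over-approximation side of such reachability analysis can be sketched with interval bound propagation through a ReLU network; the paper combines exact analysis with a more precise set representation, so this is only an illustrative stand-in.

```python
# Interval bound propagation sketch: an over-approximate reachability pass
# through a ReLU network (a stand-in for the paper's set representation).
import numpy as np

def interval_forward(lo, hi, layers):
    """Propagate the input box [lo, hi] through affine+ReLU layers; the
    returned box is guaranteed to contain all reachable outputs (loosely)."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:           # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 2)), rng.normal(size=4)),   # hidden layer
          (rng.normal(size=(1, 4)), rng.normal(size=1))]   # output layer
lo, hi = interval_forward(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), layers)
print(f"output box: [{lo[0]:.3f}, {hi[0]:.3f}]")  # compare with unsafe region
```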