
    Falsification of Cyber-Physical Systems with Robustness-Guided Black-Box Checking

    For exhaustive formal verification, industrial-scale cyber-physical systems (CPSs) are often too large and complex, so lightweight alternatives (e.g., monitoring and testing) have attracted the attention of both industrial practitioners and academic researchers. Falsification is a popular testing method for CPSs that utilizes stochastic optimization. In state-of-the-art falsification methods, the results of previous falsification trials are discarded, and each trial starts without any prior knowledge. To concisely memorize such prior information about the CPS model and exploit it, we employ black-box checking (BBC), a combination of automata learning and model checking. Moreover, we enhance BBC using the robust semantics of STL formulas, which is the essential gadget in falsification. Our experimental results suggest that our robustness-guided BBC outperforms a state-of-the-art falsification tool.
    Comment: Accepted to HSCC 2020
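    As a concrete illustration of the robustness idea that drives both falsification and robustness-guided BBC, here is a minimal sketch: the robust semantics of a simple STL formula G(x < c) is the worst-case margin c - x_t over the trace, and a stochastic optimizer searches for an input signal that drives this margin negative. The toy dynamics in simulate and the hill-climbing search are illustrative placeholders, not the paper's BBC algorithm.

```python
import random

def robustness_always_lt(trace, c):
    """Robustness of G (x < c): min over time of (c - x_t).
    Positive => satisfied with margin; negative => violated."""
    return min(c - x for x in trace)

def simulate(u):
    """Toy CPS stand-in: a damped system driven by input samples u."""
    x, v, trace = 0.0, 0.0, []
    for ui in u:
        v += 0.1 * (ui - 0.5 * v)
        x += 0.1 * v
        trace.append(x)
    return trace

def falsify(spec_c=2.0, horizon=50, trials=2000, seed=0):
    """Stochastic-optimization falsification: search for an input
    signal whose trace makes the robustness negative."""
    rng = random.Random(seed)
    best_u = [rng.uniform(-1, 1) for _ in range(horizon)]
    best_r = robustness_always_lt(simulate(best_u), spec_c)
    for _ in range(trials):
        u = [min(1, max(-1, ui + rng.gauss(0, 0.3))) for ui in best_u]
        r = robustness_always_lt(simulate(u), spec_c)
        if r < best_r:          # lower robustness = closer to violation
            best_u, best_r = u, r
        if best_r < 0:          # falsifying input found
            break
    return best_u, best_r

u, r = falsify()
print("final robustness:", r, "(negative means falsified)")
```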

    Conformal Prediction for STL Runtime Verification

    We are interested in predicting failures of cyber-physical systems during their operation. In particular, we consider stochastic systems and signal temporal logic specifications, and we want to calculate the probability that the current system trajectory violates the specification. The paper presents two predictive runtime verification algorithms that predict future system states from the currently observed system trajectory. As these predictions may not be accurate, we construct prediction regions that quantify prediction uncertainty using conformal prediction, a statistical tool for uncertainty quantification. Our first algorithm directly constructs a prediction region for the satisfaction measure of the specification, so that we can predict specification violations with a desired confidence. The second algorithm first constructs prediction regions for future system states and then uses these to obtain a prediction region for the satisfaction measure. To the best of our knowledge, these are the first formal guarantees for a predictive runtime verification algorithm that applies to widely used trajectory predictors such as RNNs and LSTMs, while being computationally simple and making no assumptions about the underlying distribution. We present numerical experiments on an F-16 aircraft and a self-driving car.
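    The core statistical step in such algorithms is split conformal prediction: compute nonconformity scores on held-out calibration data and take a slightly inflated empirical quantile to obtain a distribution-free (1 - delta) prediction region. A minimal sketch, with synthetic calibration scores standing in for prediction errors of the satisfaction measure:

```python
import math, random

def conformal_quantile(scores, delta):
    """Split conformal prediction: return the ceil((n+1)(1-delta))-th
    order statistic of the nonconformity scores. Any value whose score
    is at most this threshold lies in the (1-delta) prediction region."""
    n = len(scores)
    k = math.ceil((n + 1) * (1 - delta))   # rank of the adjusted quantile
    if k > n:
        return float("inf")                # too few samples for this delta
    return sorted(scores)[k - 1]

rng = random.Random(1)
# Placeholder calibration data: |predicted - true| satisfaction measure.
calib = [abs(rng.gauss(0, 0.1)) for _ in range(200)]
q = conformal_quantile(calib, delta=0.05)

rho_pred = 0.12   # predicted robustness of the current run (made up)
# With probability >= 0.95 the true robustness lies in the region below;
# if its lower end is positive, satisfaction is predicted at that confidence.
print(f"quantile: {q:.3f}, region: [{rho_pred - q:.3f}, {rho_pred + q:.3f}]")
```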

    Conformance Testing for Stochastic Cyber-Physical Systems

    Conformance is defined as a measure of distance between the behaviors of two dynamical systems. The notion of conformance can accelerate system design when models of varying fidelity are available, since analysis and control design can be done more efficiently on the simpler models. Ultimately, conformance can capture the distance between design models and their real implementations and thus aid in robust system design. In this paper, we are interested in the conformance of stochastic dynamical systems. We argue that probabilistic reasoning over the distribution of distances between model trajectories is a good measure of stochastic conformance. Additionally, we propose the non-conformance risk to reason about the risk of stochastic systems not being conformant. We show that both notions have the desirable transference property, meaning that conformant systems satisfy similar system specifications: if the first model satisfies a desirable specification, the second model will satisfy (nearly) the same specification. Lastly, we propose how stochastic conformance and the non-conformance risk can be estimated from data using statistical tools such as conformal prediction. We present empirical evaluations of our method on an F-16 aircraft, an autonomous vehicle, a spacecraft, and a Dubins vehicle.
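    The estimation recipe can be sketched in a few lines: sample pairs of trajectories from the two stochastic models, compute a distance between each pair (here the sup-norm over time, one possible choice), and take a conformal-style empirical quantile of the distances as a high-confidence conformance bound. The toy dynamics below are placeholders for the actual models.

```python
import math, random

def traj(model_noise, rng, steps=50):
    """Toy stochastic trajectory; stands in for one model's rollout."""
    x, out = 0.0, []
    for t in range(steps):
        x += 0.1 * math.sin(0.2 * t) + rng.gauss(0, model_noise)
        out.append(x)
    return out

def sup_distance(t1, t2):
    """Distance between two trajectories: sup over time of |x1 - x2|."""
    return max(abs(a - b) for a, b in zip(t1, t2))

rng = random.Random(2)
# Sample the distribution of distances between the two models' trajectories.
dists = [sup_distance(traj(0.01, rng), traj(0.03, rng)) for _ in range(500)]

# Conformal-style bound: with probability >= 1 - delta, a fresh trajectory
# pair is within eps of each other, where eps is the adjusted quantile.
delta = 0.05
k = math.ceil((len(dists) + 1) * (1 - delta))
eps = sorted(dists)[k - 1]
print(f"stochastic conformance bound eps = {eps:.3f} at confidence {1 - delta}")
```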

    Combining Machine Learning and Formal Methods for Complex Systems Design

    During the last 20 years, model-based design has become standard practice in many fields such as automotive and aerospace engineering and systems and synthetic biology. This approach allows a considerable improvement of final product quality and reduces overall prototyping costs. In these contexts, formal methods, such as temporal logics, and model checking approaches have been successfully applied. They allow a precise description and automatic verification of a prototype's requirements. In the recent past, market demand for better-performing and safer devices has grown unstoppably, which inevitably leads to the creation of more and more complicated devices. The rise of cyber-physical systems, which are on their way to becoming massively pervasive, brings the complexity to the next level and opens many new challenges. First, the descriptive power of standard temporal logics is no longer sufficient to handle all the kinds of requirements designers need (consider, for example, non-functional requirements). Second, standard model checking techniques are unable to manage such a level of complexity (consider the well-known curse of state-space explosion). In this thesis, we leverage machine learning techniques, active learning, and optimization approaches to face the challenges mentioned above. In particular, we define signal measure logic, a novel temporal logic suited to describing non-functional requirements. We also use evolutionary algorithms and signal temporal logic to tackle a supervised classification problem and a system design problem that involves multiple conflicting requirements (i.e., a multi-objective optimization problem). Finally, we use an active learning approach, based on Gaussian processes, to deal with falsification problems in the automotive field and to solve a so-called threshold synthesis problem, discussing an epidemics case study.
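    As an illustration of the Gaussian-process active-learning loop mentioned above, the sketch below fits a GP to observed robustness values over a one-dimensional input parameter and repeatedly samples where a lower confidence bound is smallest, stopping once the robustness goes negative. The kernel, the LCB acquisition, and the black-box robustness function are illustrative choices, not the thesis's exact setup.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def robustness(theta):
    """Black-box robustness of the spec for input parameter theta;
    negative values falsify. Placeholder for a simulator call."""
    return 0.4 + np.sin(5 * theta) * np.cos(3 * theta) - 0.8 * theta

X = np.array([0.1, 0.5, 0.9])              # initial design points
y = robustness(X)
grid = np.linspace(0, 1, 200)

for _ in range(15):
    K = rbf(X, X) + 1e-4 * np.eye(len(X))  # jitter for numerical stability
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)        # GP posterior mean on the grid
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0))
    theta_next = grid[np.argmin(lcb)]      # most promising falsifier
    X = np.append(X, theta_next)
    y = np.append(y, robustness(theta_next))
    if y[-1] < 0:                          # falsifying parameter found
        break

print("best robustness:", y.min(), "at theta =", X[np.argmin(y)])
```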

    An STL-based Approach to Resilient Control for Cyber-Physical Systems

    We present ResilienC, a framework for resilient control of cyber-physical systems subject to STL-based requirements. ResilienC utilizes a recently developed formalism for specifying CPS resiliency in terms of sets of (rec, dur) real-valued pairs, where rec represents the system's capability to rapidly recover from a property violation (recoverability), and dur is reflective of its ability to avoid violations post-recovery (durability). We define the resilient STL control problem as one of multi-objective optimization, where the recoverability and durability of the desired STL specification are maximized. When neither objective is prioritized over the other, the solution to the problem is a set of Pareto-optimal system trajectories. We present a precise solution method to the resilient STL control problem using a mixed-integer linear programming encoding and an a posteriori ε-constraint approach for efficiently retrieving the complete set of optimally resilient solutions. In ResilienC, at each time-step, the optimal control action selected from the set of Pareto-optimal solutions by a Decision Maker strategy realizes a form of Model Predictive Control. We demonstrate the practical utility of the ResilienC framework on two significant case studies: autonomous vehicle lane keeping and deadline-driven, multi-region package delivery.
    Comment: 11 pages, 6 figures
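    The a posteriori ε-constraint idea can be illustrated without the MILP machinery: sweep a bound ε over one objective, maximize the other subject to it, and keep the non-dominated (rec, dur) pairs. The candidate scores below are made up, and treating both objectives as maximized is an illustrative convention, not the paper's encoding.

```python
# Candidate trajectories scored by (rec, dur): rec = how quickly the system
# recovers from a violation, dur = how long it stays violation-free
# afterwards (both larger-is-better here; values are illustrative).
candidates = [(0.9, 3.0), (0.7, 5.0), (0.6, 8.0), (0.4, 8.0), (0.95, 1.0)]

def epsilon_constraint(points, eps_grid):
    """A posteriori epsilon-constraint: for each bound eps on the second
    objective, maximize the first subject to dur >= eps, then keep only
    the non-dominated results."""
    pareto = set()
    for eps in eps_grid:
        feasible = [p for p in points if p[1] >= eps]
        if feasible:
            pareto.add(max(feasible, key=lambda p: p[0]))
    # drop any dominated leftovers
    return sorted(p for p in pareto
                  if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                             for q in pareto))

durs = sorted({d for _, d in candidates})   # one eps per observed dur value
print(epsilon_constraint(candidates, durs))  # the Pareto-optimal pairs
```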

    On Optimization-Based Falsification of Cyber-Physical Systems

    In what is commonly referred to as cyber-physical systems (CPSs), computational and physical resources are closely interconnected. An example is the closed-loop behavior of perception, planning, and control algorithms executing on a computer and interacting with a physical environment. Many CPSs are safety-critical, and it is thus important to guarantee that they behave according to given specifications that define the correct behavior. CPS models typically include differential equations, state machines, and code written in general-purpose programming languages. This heterogeneity makes it generally infeasible to use analytical methods to evaluate the system's correctness. Instead, model-based testing of a simulation of the system is more viable. Optimization-based falsification is an approach that, using a simulation model, automatically checks for the existence of input signals that make the CPS violate given specifications. Quantitative semantics estimate how far the specification is from being violated in a given scenario. The decision variables in the optimization problems are parameters that determine the type and shape of the generated input signals. This thesis contributes to the increased efficiency of optimization-based falsification in four ways: (i) a method for using multiple quantitative semantics during optimization-based falsification; (ii) a direct search approach, called line-search falsification, that prioritizes extreme values, which are known to often falsify specifications, and has a good balance between exploration and exploitation of the parameter space; (iii) an adaptation of Bayesian optimization that allows for injecting prior knowledge and uses a special acquisition function for finding falsifying points rather than the global minimum; and (iv) an investigation of different input signal parameterizations and their coverability of the space and the time and frequency domains. The proposed methods have been implemented and evaluated on standard falsification benchmark problems. Based on these empirical studies, we show the efficiency of the proposed methods. Taken together, the proposed methods are important contributions to the falsification of CPSs and to enabling a more efficient falsification process.
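    The extreme-value heuristic behind line-search falsification, and the input parameterization it searches over, can be illustrated with a piecewise-constant input built from a few control points whose values are first set to the corners of the input box. The specification and signal expansion below are toy placeholders, and the corner sweep is a simplification of the thesis's method.

```python
import itertools

def piecewise_constant(points, horizon):
    """Expand a few control points into a piecewise-constant input signal."""
    seg = horizon // len(points)
    return [p for p in points for _ in range(seg)]

def spec_robustness(signal):
    """Placeholder quantitative semantics for the spec
    'the running mean of the input stays below 0.8'."""
    mean, worst = 0.0, float("inf")
    for i, s in enumerate(signal, 1):
        mean += (s - mean) / i               # incremental running mean
        worst = min(worst, 0.8 - mean)       # min margin over time
    return worst

u_min, u_max, n_points = -1.0, 1.0, 3
# Prioritize extreme values: corners of the input box are tried first,
# since they are known to falsify many specifications.
for corner in itertools.product([u_min, u_max], repeat=n_points):
    r = spec_robustness(piecewise_constant(list(corner), horizon=30))
    print(corner, "robustness:", round(r, 3))
    if r < 0:
        print("falsified by extreme input", corner)
        break
```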

    KnowSafe: Combined Knowledge and Data Driven Hazard Mitigation in Artificial Pancreas Systems

    Significant progress has been made in anomaly detection and run-time monitoring to improve the safety and security of cyber-physical systems (CPS). However, less attention has been paid to hazard mitigation. This paper proposes a combined knowledge- and data-driven approach, KnowSafe, for the design of safety engines that can predict and mitigate safety hazards resulting from safety-critical malicious attacks or accidental faults targeting a CPS controller. We integrate domain-specific knowledge of safety constraints and context-specific mitigation actions with machine learning (ML) techniques to estimate system trajectories in the far and near future, infer potential hazards, and generate optimal corrective actions to keep the system safe. Experimental evaluation on two realistic closed-loop testbeds for artificial pancreas systems (APS) and a real-world clinical trial dataset for diabetes treatment demonstrates that KnowSafe outperforms the state of the art by achieving higher accuracy in predicting system state trajectories and potential hazards, a low false positive rate, and no false negatives. It also maintains safe operation of the simulated APS despite faults or attacks, without introducing any new hazards, with a hazard mitigation success rate of 92.8%, which is at least 76% higher than solely rule-based (50.9%) and data-driven (52.7%) methods.
    Comment: 16 pages, 10 figures, 9 tables, submitted to the IEEE for possible publication
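    The predict-infer-mitigate pattern can be sketched for an APS-like setting as follows; the linear glucose extrapolation, thresholds, and actions below are illustrative stand-ins for KnowSafe's ML predictors and knowledge base, not its actual pipeline.

```python
HYPO, HYPER = 70.0, 180.0            # illustrative glucose bounds (mg/dL)

def predict_trajectory(history, basal_rate, steps=6):
    """Toy linear extrapolation standing in for the ML state predictor."""
    trend = (history[-1] - history[0]) / (len(history) - 1)
    g, out = history[-1], []
    for _ in range(steps):
        g += trend - 2.0 * basal_rate  # insulin pushes glucose down
        out.append(g)
    return out

def infer_hazard(traj):
    """Knowledge part: map predicted states to a hazard label."""
    if min(traj) < HYPO:
        return "hypoglycemia"
    if max(traj) > HYPER:
        return "hyperglycemia"
    return None

def mitigate(hazard, basal_rate):
    """Context-specific corrective action for the inferred hazard."""
    if hazard == "hypoglycemia":
        return 0.0                     # suspend insulin delivery
    if hazard == "hyperglycemia":
        return basal_rate + 0.5        # increase basal rate
    return basal_rate

history, basal = [120.0, 110.0, 95.0, 85.0], 1.0
traj = predict_trajectory(history, basal)
hazard = infer_hazard(traj)
print("predicted:", [round(g) for g in traj], "hazard:", hazard,
      "-> new basal:", mitigate(hazard, basal))
```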