
    Model-Checking Counting Temporal Logics on Flat Structures

We study several extensions of linear-time and computation-tree temporal logics with quantifiers that allow for counting how often certain properties hold. For most of these extensions, the model-checking problem is undecidable, but we show that decidability can be recovered by considering flat Kripke structures, where each state belongs to at most one simple loop. Most of the decision procedures are based on results on (flat) counter systems, where counters are used to implement the evaluation of the counting operators.
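The flatness restriction is easy to check algorithmically: a structure in which each state belongs to at most one simple loop is, equivalently, one in which every strongly connected component of the transition graph is a single state or one simple cycle. The sketch below, in plain Python with hypothetical names, tests that characterization; it illustrates the structural restriction only, not one of the paper's decision procedures.

```python
def sccs(graph):
    """Tarjan's algorithm: strongly connected components of {node: [successors]}."""
    index, low, stack, on_stack, out, counter = {}, {}, [], set(), [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            out.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return out

def is_flat(graph):
    """Flat iff each SCC is a single state (self-loop allowed) or one simple
    cycle; then no state lies on two distinct simple loops."""
    for comp in sccs(graph):
        if len(comp) == 1:
            continue
        members = set(comp)
        for v in comp:
            # in a simple cycle, every state has exactly one successor inside the SCC
            if len([w for w in graph.get(v, []) if w in members]) != 1:
                return False
    return True

# Flat: two loops, but no state lies on both.
assert is_flat({1: [2], 2: [1, 3], 3: [4], 4: [3]})
# Not flat: state 1 lies on both 1->2->1 and 1->3->1.
assert not is_flat({1: [2, 3], 2: [1], 3: [1]})
```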

    Activity Report: Automatic Control 2011


    A Framework for Certified Self-Stabilization

We propose a general framework to build certified proofs of distributed self-stabilizing algorithms with the proof assistant Coq. We first define in Coq the locally shared memory model with composite atomicity, the most commonly used model in the self-stabilizing area. We then validate our framework by certifying a non-trivial part of an existing silent self-stabilizing algorithm which builds a $k$-hop dominating set of the network. We also certified a quantitative property related to the output of this algorithm: precisely, we show that the computed $k$-hop dominating set contains at most $\lfloor \frac{n-1}{k+1} \rfloor + 1$ nodes, where $n$ is the number of nodes in the network. To obtain these results, we also developed a library containing general tools related to potential functions and cardinality of sets.
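To make the certified bound concrete: a $k$-hop dominating set is a set of nodes such that every node lies within distance $k$ of some member, and the algorithm's output never exceeds $\lfloor \frac{n-1}{k+1} \rfloor + 1$ nodes. The brute-force check below is a plain-Python sketch with hypothetical names that confirms the bound on a small connected graph; the Coq development proves it in general.

```python
from itertools import combinations
from collections import deque

def distances(adj, src):
    """BFS distances from src in an undirected graph {node: set(neighbors)}."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def min_khop_dominating_set(adj, k):
    """Smallest set S such that every node is within distance k of some s in S."""
    nodes = list(adj)
    cover = {u: {v for v, d in distances(adj, u).items() if d <= k} for u in nodes}
    for size in range(1, len(nodes) + 1):
        for cand in combinations(nodes, size):
            if set().union(*(cover[u] for u in cand)) == set(nodes):
                return set(cand)

# Path graph on n nodes: 0 - 1 - ... - n-1
n, k = 9, 2
adj = {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
best = min_khop_dominating_set(adj, k)
bound = (n - 1) // (k + 1) + 1
assert len(best) <= bound      # certified bound: |S| <= floor((n-1)/(k+1)) + 1
print(len(best), "<=", bound)  # 2 <= 3 for a 9-node path with k = 2
```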

    RML: Runtime Monitoring Language

Runtime verification is a relatively new software verification technique that aims to prove the correctness of a specific run of a program, rather than statically verifying the code. The program is instrumented in order to collect all the relevant information, and the resulting trace of events is inspected by a monitor that verifies its compliance with a specification of the expected properties of the system under scrutiny. Many languages exist that can be used to formally express the expected behavior of a system, with different design choices and degrees of expressivity. This thesis presents RML, a specification language designed for runtime verification, with the goal of being completely modular and independent of the instrumentation and the kind of system being monitored. RML is highly expressive, and allows one to express complex, parametric, non-context-free properties concisely. RML is compiled down to TC, a lower-level calculus, which is fully formalized with a deterministic, rewriting-based semantics. In order to evaluate the approach, an open-source implementation has been developed, and several examples with Node.js programs have been tested. Benchmarks show the ability of the monitors automatically generated from RML specifications to verify complex properties effectively and efficiently.
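To illustrate the kind of checking a generated monitor performs (without reproducing RML's actual syntax or the TC calculus the thesis formalizes), here is a minimal hand-written monitor sketch in Python. It checks a parametric property over an event trace: every resource that is opened must eventually be closed and is never used outside an open/close pair. All names are hypothetical.

```python
def monitor(trace):
    """Inspect a trace of (event, resource_id) pairs.

    Property: a resource must be opened before use, not used after close,
    and every opened resource must eventually be closed.
    Returns (verdict, offending_event_index_or_None).
    """
    open_resources = set()
    for i, (event, rid) in enumerate(trace):
        if event == "open":
            if rid in open_resources:
                return False, i        # double open
            open_resources.add(rid)
        elif event in ("read", "write"):
            if rid not in open_resources:
                return False, i        # use before open / after close
        elif event == "close":
            if rid not in open_resources:
                return False, i        # close without open
            open_resources.discard(rid)
    # at the end of the trace, nothing may remain open
    return (not open_resources), None

ok, _ = monitor([("open", 1), ("write", 1), ("close", 1)])
bad, idx = monitor([("open", 1), ("close", 1), ("read", 1)])
assert ok and not bad and idx == 2     # the 'read' after 'close' is flagged
```

Note that the property is parametric in the resource identifier, the flavor of specification the abstract highlights; a real RML monitor is generated from a declarative specification rather than written by hand like this.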

    Introduction to Runtime Verification

The aim of this chapter is to act as a primer for those wanting to learn about Runtime Verification (RV). We start by providing an overview of the main specification languages used for RV. We then introduce the standard terminology necessary to describe the monitoring problem, covering the pragmatic issues of monitoring and instrumentation, and discussing extensively the monitorability problem.

    On Supervisor Synthesis via Active Automata Learning

Our society's reliance on computer-controlled systems is rapidly growing. Such systems are found in various devices, ranging from simple light switches to safety-critical systems like autonomous vehicles. In the context of safety-critical systems, safety and correctness are of utmost importance, since faults and errors could have catastrophic consequences. Thus, there is a need for rigorous methodologies that help provide guarantees of safety and correctness. Supervisor synthesis, the ability to mathematically synthesize a supervisor that ensures that the closed-loop system behaves in accordance with known requirements, can indeed help.

This thesis introduces supervisor learning, an approach to help automate the learning of supervisors in the absence of plant models. Traditionally, supervisor synthesis makes use of plant models and specification models to obtain a supervisor. Industrial adoption of this method is limited due to, among other things, the difficulty of obtaining usable plant models: manually creating these plant models is an error-prone and time-consuming process. Supervisor learning thus intends to improve the industrial adoption of supervisory control by automating the generation of supervisors in the absence of plant models.

The idea is to learn a supervisor for the system under learning (SUL) through active interaction and experimentation. To this end, we present two algorithms, SupL* and MSL, that directly learn supervisors when provided with a simulator of the SUL and its corresponding specifications. SupL* is a language-based learner that learns one supervisor for the entire system. MSL, on the other hand, learns a modular supervisor, that is, several smaller supervisors, one for each specification. Additionally, a third algorithm, MPL, is introduced for learning a modular plant model.

The approach is realized in the tool MIDES and has been used to learn supervisors in a virtual manufacturing setting for the Machine Buffer Machine example, as well as to learn a model of the Lateral State Manager, a sub-component of a self-driving car. These case studies show the feasibility and applicability of the proposed approach, in addition to helping identify future directions for research.
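For readers unfamiliar with the synthesis step that SupL* and MSL target, the following toy Python sketch shows the classical fixpoint at the heart of supervisor synthesis: starting from the plant-specification product, repeatedly remove states from which an uncontrollable event leads to a bad or removed state, then keep only transitions between the surviving states. This is a generic illustration under simplifying assumptions (explicit finite automaton, full observability), not the learning algorithms of the thesis, and all names are hypothetical.

```python
def synthesize_supervisor(states, transitions, bad, uncontrollable):
    """Toy supervisor synthesis on an explicit automaton.

    states: product states (plant x specification)
    transitions: dict mapping (state, event) -> successor state
    bad: states violating the specification
    uncontrollable: events the supervisor cannot disable
    """
    safe = set(states) - set(bad)
    changed = True
    while changed:                      # greatest fixpoint of "controllably safe"
        changed = False
        for (s, e), t in transitions.items():
            if s in safe and e in uncontrollable and t not in safe:
                safe.discard(s)         # cannot prevent reaching an unsafe state
                changed = True
    # the supervisor retains only transitions between safe states
    return {(s, e): t for (s, e), t in transitions.items()
            if s in safe and t in safe}

# Toy plant: a machine that may 'break' (uncontrollably) once started.
states = {"idle", "working", "broken"}
transitions = {("idle", "start"): "working",
               ("working", "finish"): "idle",
               ("working", "break"): "broken"}
supervisor = synthesize_supervisor(states, transitions,
                                   bad={"broken"}, uncontrollable={"break"})
# 'break' cannot be disabled, so 'working' is unsafe and 'start' is pruned.
assert supervisor == {}
```

Supervisor learning replaces the explicit plant model assumed above with membership and equivalence queries against a simulator of the SUL.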

    A Taxonomy of Quality Metrics for Cloud Services

A large number of metrics with which to assess the quality of cloud services have been proposed over the last years. However, this knowledge is still dispersed, and stakeholders have little or no guidance when choosing metrics that will be suitable to evaluate their cloud services. The objective of this paper is, therefore, to systematically identify, taxonomically classify, and compare existing quality of service (QoS) metrics in the cloud computing domain. We conducted a systematic literature review of 84 studies selected from a set of 4333 studies that were published from 2006 to November 2018. We specifically identified 470 metric operationalizations, which were then classified using a taxonomy that is also introduced in this paper. The data extracted from the metrics were subsequently analyzed using thematic analysis. The findings indicated that most metrics evaluate quality attributes related to performance efficiency (64%) and that there is a need for metrics that evaluate other characteristics, such as security and compatibility. The majority of the metrics are used during the Operation phase of the cloud services and are applied to the running service. Our results also revealed that metrics for cloud services are still in the early stages of maturity: only 10% of the metrics had been empirically validated. The proposed taxonomy can be used by practitioners as a guideline when specifying service-level objectives or deciding which metric is best suited to the evaluation of their cloud services, and by researchers as a comprehensive quality framework in which to evaluate their approaches.

This work was supported by the Spanish Ministry of Science, Innovation and Universities through the Adapt@Cloud project under Grant TIN2017-84550-R. The work of Ximena Guerron was supported in part by the Universidad Central del Ecuador (UCE) and in part by the Banco Central del Ecuador.

Guerron, X.; Abrahao Gonzales, S. M.; Insfran, E.; Fernández-Diego, M.; González-Ladrón-De-Guevara, F. (2020). A Taxonomy of Quality Metrics for Cloud Services. IEEE Access, 8, 131461-131498. https://doi.org/10.1109/ACCESS.2020.3009079

    Explanation of the Model Checker Verification Results

Whenever new requirements are introduced for a system, the correctness and consistency of the system specification must be verified, which is often done manually in industrial settings. One viable option to overcome the disadvantages of this manual analysis is to employ contract-based design, which can automate the verification process to determine whether the refinements of top-level requirements are consistent. Verification can thus be performed iteratively to ensure the system's correctness and consistency in the face of any change in specifications. That said, it is still challenging to deploy formal approaches in industry due to their lack of usability and the difficulty of interpreting verification results. For instance, if the model checker identifies an inconsistency during verification, it generates a counterexample while also indicating that the given input specifications are inconsistent. The formidable challenge here is to comprehend the generated counterexample, which is often lengthy, cryptic, and complex. Furthermore, it is the engineer's responsibility to identify the inconsistent specification among a potentially huge set of specifications. This PhD thesis proposes a counterexample explanation approach for formal methods that simplifies and encourages their use by presenting user-friendly explanations of the verification results. The approach identifies the relevant information in the verification result and presents it as natural-language-like statements: it extracts the inconsistent specifications from among the set of specifications, as well as the erroneous states and variables from the counterexample. The counterexample explanation approach is evaluated using two methods: (1) evaluation with different application examples, and (2) a user study in the form of a one-group pretest-posttest experiment.
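As a rough illustration of the extraction step (a sketch under assumptions, not the thesis's implementation): if a counterexample is given as a sequence of variable assignments and the specifications as predicates over a state, a minimal explainer can report which specification first fails, at which step, and which variables changed entering that step. All names below are hypothetical.

```python
def explain_counterexample(trace, specs):
    """Report the first specification violated along a counterexample trace.

    trace: list of states, each a dict of variable -> value
    specs: dict of specification name -> predicate over a state
    """
    for step, state in enumerate(trace):
        for name, holds in specs.items():
            if not holds(state):
                changed = {} if step == 0 else {
                    var: (trace[step - 1][var], val)
                    for var, val in state.items()
                    if trace[step - 1][var] != val
                }
                print(f"Specification '{name}' is violated at step {step}.")
                for var, (old, new) in changed.items():
                    print(f"  variable '{var}' changed from {old!r} to {new!r}")
                return name, step
    return None

# Hypothetical counterexample for a door controller and two simple invariants.
trace = [{"door": "closed", "moving": False},
         {"door": "closed", "moving": True},
         {"door": "open", "moving": True}]   # moving with the door open
specs = {"door_closed_while_moving":
             lambda s: not (s["moving"] and s["door"] == "open"),
         "known_door_state":
             lambda s: s["door"] in ("open", "closed")}
assert explain_counterexample(trace, specs) == ("door_closed_while_moving", 2)
```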