
    Applications of Description Logic and Causality in Model Checking

    Model checking is an automated technique for the verification of finite-state systems that is widely used in practice. In model checking, a model M is verified against a specification φ by exhaustively checking that the tree of all computations of M satisfies φ. When φ fails to hold in M, the negative result is accompanied by a counterexample: a computation in M that demonstrates the failure. State-of-the-art model checkers apply Binary Decision Diagrams (BDDs) as well as satisfiability solvers for this task. However, both methods suffer from the state explosion problem, which restricts the application of model checking to only modestly sized systems. The importance of model checking makes it worthwhile to explore alternative technologies, in the hope of broadening the applicability of the technique to a wider class of systems. Description Logic (DL) is a family of knowledge representation formalisms based on decidable fragments of first-order logic. DL is used mainly for designing ontologies in information systems. In recent years several DL reasoners have been developed, demonstrating an impressive capability to cope with very large ontologies. This work consists of two parts. In the first, we harness the growing ability of DL reasoners to solve model checking problems. We show how DL can serve as a natural setting for representing and solving a model checking problem, and present a variety of encodings that translate such problems into consistency queries in DL. Experimental results, using the Description Logic reasoner FaCT++, demonstrate that for some systems and properties, our method can outperform existing ones. In the second part we approach a different aspect of model checking. When a specification fails to hold in a model and a counterexample is presented to the user, the counterexample may itself be complex and difficult to understand. We propose an automatic technique to find the computation steps, and their associated variable values, that are of particular importance in generating the counterexample. We use the notion of causality to formally define a set of causes for the failure of the specification on the given counterexample. We give a linear-time algorithm to detect the causes, and we demonstrate how these causes can be presented to the user as a visual explanation of the failure.
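
    The causality part lends itself to a small illustration. The Python sketch below is not the paper's linear-time algorithm: it is a naive quadratic counterfactual test, and the trace encoding and the mutual-exclusion invariant are assumptions made for the example.

```python
# Hedged sketch of the causality idea: a (step, variable) pair is reported
# as a cause of the failure if flipping that single value makes the violated
# invariant hold on the whole counterexample trace. This naive counterfactual
# test is quadratic; the paper gives a linear-time algorithm.

def holds_everywhere(trace, invariant):
    """True iff the invariant holds in every state of the finite trace."""
    return all(invariant(state) for state in trace)

def find_causes(trace, invariant):
    causes = []
    for i, state in enumerate(trace):
        for var in state:
            flipped = [dict(s) for s in trace]     # copy the trace
            flipped[i][var] = not flipped[i][var]  # counterfactual flip
            if holds_everywhere(flipped, invariant):
                causes.append((i, var))
    return causes

# Example: mutual exclusion, "never grant1 and grant2 together".
trace = [
    {"grant1": False, "grant2": False},
    {"grant1": True,  "grant2": False},
    {"grant1": True,  "grant2": True},   # the violation occurs here
]
mutex = lambda s: not (s["grant1"] and s["grant2"])
print(find_causes(trace, mutex))  # [(2, 'grant1'), (2, 'grant2')]
```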

    Optimizing Computation of Recovery Plans for BPEL Applications

    Web service applications are distributed processes that are composed of dynamically bound services. In our previous work [15], we have described a framework for performing runtime monitoring of web services against behavioural correctness properties (described using property patterns and converted into finite-state automata). These specify forbidden behaviour (safety properties) and desired behaviour (bounded liveness properties). Finite execution traces of web services described in BPEL are checked for conformance at runtime. When violations are discovered, our framework automatically proposes and ranks recovery plans which users can then select for execution. Such plans for safety violations essentially involve "going back": compensating the executed actions until an alternative behaviour of the application is possible. For bounded liveness violations, recovery plans include both "going back" and "re-planning": guiding the application towards a desired behaviour. Our experience, reported in [16], identified a drawback in this approach: we compute too many plans due to (a) overapproximating the number of program points where an alternative behaviour is possible and (b) generating recovery plans for bounded liveness properties which can potentially violate safety properties. In this paper, we describe improvements to our framework that remedy these problems and describe their effectiveness on a case study.
    Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
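
    As a rough illustration of the monitoring and "going back" idea, the sketch below runs a finite trace through a safety automaton and, on violation, proposes compensating the executed actions in reverse order. The automaton, action names, and compensation table are all assumptions for the example, not the framework's actual API.

```python
# Hedged sketch of runtime monitoring with a "going back" recovery plan.
# On a violation, executed actions are compensated in reverse order.
VIOLATION = "violated"

# Safety property: "never ship after the order was cancelled".
transitions = {
    ("active", "cancel"):  "cancelled",
    ("cancelled", "ship"): VIOLATION,
}
compensation = {"pay": "refund", "cancel": "reinstate_order",
                "ship": "recall_shipment"}

def monitor(trace):
    state, executed = "active", []
    for action in trace:
        state = transitions.get((state, action), state)  # default: self-loop
        if state == VIOLATION:
            # "Going back": compensate executed actions in reverse order.
            plan = [compensation[a] for a in reversed(executed)]
            return {"violation_at": action, "recovery_plan": plan}
        executed.append(action)
    return {"violation_at": None, "recovery_plan": []}

print(monitor(["pay", "ship"]))            # conforming: no plan needed
print(monitor(["pay", "cancel", "ship"]))  # violation: reinstate, then refund
```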

    Symbolic Model Checking of Product-Line Requirements Using SAT-Based Methods

    Product line (PL) engineering promotes the development of families of related products, where individual products are differentiated by which optional features they include. Modelling and analyzing requirements models of PLs allows for early detection and correction of requirements errors – including unintended feature interactions, which are a serious problem in feature-rich systems. A key challenge in analyzing PL requirements is the efficient verification of the product family, given that the number of products is too large to verify one at a time. Recently, it has been shown how the high-level design of an entire PL, including all possible products, can be compactly represented as a single model in the SMV language and model checked using the NuSMV tool. The implementation in NuSMV uses BDDs, a method that has been outperformed by SAT-based algorithms. In this paper we develop PL model checking using two leading SAT-based symbolic model checking algorithms: IMC and IC3. We describe the algorithms, prove their correctness, and report on our implementation. Evaluating our methods on three PL models from the literature, we demonstrate an improvement of up to 3 orders of magnitude over the existing BDD-based method.
    NSERC Discovery Grant, 155243-12 || NSERC / Automotive Partnership Canada, APCPJ 386797 - 09 || Ontario Research Fund, RE05-04
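
    For readers unfamiliar with SAT-based model checking, the sketch below shows the simplest such method, plain bounded model checking (BMC), on a toy one-bit product-line model. It is not the paper's IMC or IC3 implementation; the model, depth, and use of the python-sat package are all assumptions.

```python
# Hedged sketch: SAT-based BMC of a toy product-line model, using the
# python-sat package (pip install python-sat). Assumed model: one state
# bit x, one feature f, transition x' = x XOR f, initial state x = 0,
# and "bad" states are those where x = 1.
from pysat.solvers import Glucose3

K = 2                       # unrolling depth
x = list(range(1, K + 2))   # SAT variables x_0 .. x_K
f = K + 2                   # the feature variable, shared by all steps

solver = Glucose3()
solver.add_clause([-x[0]])              # init: x_0 = 0
for i in range(K):                      # transition: x_{i+1} <-> (x_i XOR f)
    solver.add_clause([-x[i + 1],  x[i],  f])
    solver.add_clause([-x[i + 1], -x[i], -f])
    solver.add_clause([ x[i + 1], -x[i],  f])
    solver.add_clause([ x[i + 1],  x[i], -f])
solver.add_clause(x[1:])                # bad: x = 1 at some step 1..K

if solver.solve():
    model = set(solver.get_model())
    # In the lifted setting, the feature assignment in the satisfying
    # model identifies WHICH products exhibit the counterexample.
    print("counterexample exists in products with f =", f in model)
else:
    print("no counterexample up to depth", K)
```

    A real implementation would unroll a symbolic transition relation extracted from the SMV model rather than hand-written clauses, and IMC/IC3 would avoid fixing the depth K in advance.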

    Generalized abstraction-refinement for game-based CTL lifted model checking

    Software systems today are used in application areas ranging from embedded system domains to system-level software and communication protocols. Software Product Line methods and architectures allow effectively building many custom variants of a software system in these domains. In many of the applications, rigorous verification and quality assurance are of paramount importance. Lifted model checking for system families is capable of verifying all their variants simultaneously in a single run by exploiting the similarities between the variants. The computational cost of lifted model checking still greatly depends on the number of variants (the size of the configuration space), which is often huge. Variability abstractions have successfully addressed this configuration space explosion problem, giving rise to smaller abstract variability models with fewer abstract configurations. Abstract variability models are given as modal transition systems, which contain may (over-approximating) and must (under-approximating) transitions. Thus, they preserve both universal and existential CTL properties. In this work, we make two main contributions. First, we define a novel game-based approach for variability-specific abstraction and refinement for lifted model checking of full CTL, interpreted over a 3-valued semantics. We propose a direct algorithm for solving a 3-valued (abstract) lifted model checking game. In case the result of model checking an abstract variability model is indefinite, we suggest a new notion of refinement, which eliminates indefinite results. This provides an iterative, incremental variability-specific abstraction and refinement framework, where refinement is applied only where indefinite results exist and definite results from previous iterations are reused. Second, we propose a new generalized definition of abstract variability models, given as so-called generalized modal transition systems, by introducing the notion of (must) hyper-transitions. This results in more precise abstract models in which more CTL formulae can be proved or disproved. We integrate the newly defined generalized abstract variability models into the existing abstraction-refinement framework for game-based lifted model checking of CTL. Finally, we evaluate the practicality of this approach on several system families.
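
    The 3-valued flavour of the abstraction can be illustrated compactly. The sketch below evaluates the CTL operator EX p in three-valued logic over an assumed toy modal transition system; the paper's game-based algorithm for full CTL and its refinement loop are not reproduced here.

```python
# Hedged sketch: 3-valued EX p over a modal transition system with may-
# (over-approximating) and must- (under-approximating) transitions.
TRUE, FALSE, UNKNOWN = "tt", "ff", "??"

# Every must-transition is also a may-transition (must is a subset of may).
must   = {"s0": {"s1"}, "s1": set(), "s2": set(), "s3": set()}
may    = {"s0": {"s1", "s2"}, "s1": {"s2"}, "s2": set(), "s3": {"s1"}}
labels = {"s0": set(), "s1": {"p"}, "s2": set(), "s3": set()}

def ex(state, prop):
    """3-valued EX prop: definite answers are preserved under refinement."""
    if any(prop in labels[t] for t in must[state]):
        return TRUE     # guaranteed: some must-successor satisfies prop
    if all(prop not in labels[t] for t in may[state]):
        return FALSE    # impossible: no may-successor satisfies prop
    return UNKNOWN      # indefinite: where the refinement loop kicks in

print(ex("s0", "p"))  # tt  (must-successor s1 is labelled p)
print(ex("s1", "p"))  # ff  (its only may-successor s2 lacks p)
print(ex("s3", "p"))  # ??  (a may-successor has p, but no must-edge)
```

    The paper's (must) hyper-transitions generalize the single must-edges above, making more of the indefinite cases decidable.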

    Global public health security and justice for vaccines and therapeutics in the COVID-19 pandemic.

    A Lancet Commission for COVID-19 task force is shaping recommendations to achieve vaccine and therapeutics access, justice, and equity. This includes ensuring safety and effectiveness harmonized through robust systems of global pharmacovigilance and surveillance. Global production requires expanding support for the development, manufacture, testing, and distribution of vaccines and therapeutics to low- and middle-income countries (LMICs). Global intellectual property rules must not stand in the way of research, production, technology transfer, or equitable access to essential health tools, and in the context of pandemics they must allow increased manufacturing without discouraging innovation. Global governance around product quality requires channelling widely distributed vaccines through WHO prequalification (PQ)/emergency use listing (EUL) mechanisms and greater use of national regulatory authorities. A World Health Assembly (WHA) resolution would facilitate improvements and consistency in quality control and assurances. Global health systems require implementing steps to strengthen national systems for controlling COVID-19 and for influenza vaccinations for adults, including pregnant and lactating women. A collaborative research network should strive to establish open-access databases for bioinformatic analyses, together with programs directed at human capacity utilization and strengthening. Combating anti-science requires recognizing the urgency of countermeasures to address a worldwide disinformation movement dominating the internet and infiltrating parliaments and local governments.

    A predictive algorithm using clinical and laboratory parameters may assist in ruling out and in diagnosing MDS

    We present a noninvasive Web-based app to help exclude or diagnose myelodysplastic syndrome (MDS), a bone marrow (BM) disorder with cytopenias and leukemic risk, diagnosed by BM examination. A sample of 502 MDS patients from the European MDS (EUMDS) registry (n > 2600) was combined with 502 controls (all BM proven). Gradient-boosted models (GBMs) were used to predict/exclude MDS using demographic, clinical, and laboratory variables. Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were used to evaluate the models, and performance was validated using 100 times fivefold cross-validation. Model stability was assessed by repeating its fit using different randomly chosen groups of 502 EUMDS cases. AUC was 0.96 (95% confidence interval, 0.95-0.97). MDS is predicted/excluded accurately in 86% of the patients. A GBM score (range, 0-1) of less than 0.68 (GBM < 0.68) resulted in a negative predictive value of 0.94, that is, MDS was excluded. GBM ≥ 0.82 provided a positive predictive value of 0.88, that is, MDS was predicted. The diagnosis of the remaining patients (0.68 ≤ GBM < 0.82) is indeterminate. The discriminating variables were age, sex, hemoglobin, white blood cells, platelets, mean corpuscular volume, neutrophils, monocytes, glucose, and creatinine. A Web-based app was developed; physicians could use it to exclude or predict MDS noninvasively in most patients without a BM examination. Future work will add peripheral blood cytogenetics/genetics, EUMDS-based prospective validation, and prognostication.
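
    To make the three-way decision rule concrete, here is a hedged sketch using scikit-learn's GradientBoostingClassifier on synthetic data. Only the thresholds 0.68 and 0.82 come from the abstract; the features, data, and model settings are stand-ins, not the published EUMDS model.

```python
# Hedged sketch of the three-way decision rule, fitted on SYNTHETIC data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1004, 10))                  # stand-in lab/clinical vars
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1004) > 0).astype(int)

gbm = GradientBoostingClassifier().fit(X, y)
scores = gbm.predict_proba(X)[:, 1]              # GBM score in [0, 1]

def decide(score):
    if score < 0.68:
        return "MDS excluded"                    # NPV 0.94 in the study
    if score >= 0.82:
        return "MDS predicted"                   # PPV 0.88 in the study
    return "indeterminate: refer for BM examination"

for s in scores[:3]:
    print(round(float(s), 2), decide(s))
```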

    A Framework for Ranking Vacuity Results

    Vacuity detection is a method for finding errors in the model checking process when the specification is found to hold in the model. Most vacuity algorithms are based on checking the effect of applying mutations to the specification. It has been recognized that vacuity results differ in their significance. While in many cases vacuity results are valued as highly informative, there are also cases in which the results are viewed as meaningless by users. To date, there has been no study of ranking vacuity results according to their level of importance, and no formal framework or algorithms for defining and finding such ranks. The lack of a framework often causes designers to ignore vacuity information altogether, potentially causing real problems to be overlooked. We suggest and study such a framework, based on the probability of the mutated specification holding in a random computation. For example, two natural mutations of the specification G(req → F ready) are G(¬req) and GF ready. It is agreed that vacuity information about satisfying the first mutation is more alarming than information about satisfying the second. Our methodology formally explains this: the probability of G(¬req) holding in a random computation is 0, whereas the probability of GF ready is 1. From a theoretical point of view, our contribution includes a study of the problem of finding the probability of LTL formulas being satisfied in a random computation, and the existence and use of 0/1-laws for fragments of LTL. From a practical point of view, we propose an efficient algorithm for estimating the probability of LTL formulas, and argue that ranking vacuity results according to our probability-based criteria corresponds to our intuition about their level of importance.
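
    The probability-based ranking can be illustrated with a naive Monte Carlo sampler (an assumption for illustration; the paper proposes an efficient estimation algorithm): model a random computation as a random ultimately periodic word and measure how often each mutation holds.

```python
# Hedged Monte Carlo sketch: a random computation is modelled as a random
# ultimately periodic word (prefix + loop), with each atomic proposition
# true at each position with probability 1/2.
import random

def random_lasso(prefix_len=20, loop_len=20):
    word = [{"req": random.random() < 0.5, "ready": random.random() < 0.5}
            for _ in range(prefix_len + loop_len)]
    return word, prefix_len          # the loop starts at index prefix_len

def g_not_req(word, _loop):          # G(!req): req never holds
    return all(not pos["req"] for pos in word)

def gf_ready(word, loop):            # GF ready: ready recurs in the loop
    return any(pos["ready"] for pos in word[loop:])

N = 10_000
samples = [random_lasso() for _ in range(N)]
for name, phi in [("G !req", g_not_req), ("GF ready", gf_ready)]:
    p = sum(phi(w, loop) for w, loop in samples) / N
    print(f"Pr[{name}] ~= {p:.4f}")  # ~0.0000 and ~1.0000, matching the ranking
```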