94 research outputs found

    Variability Abstraction and Refinement for Game-Based Lifted Model Checking of Full CTL

    One of the most promising approaches to fighting the configuration-space explosion problem in lifted model checking is variability abstraction. In this work, we define a novel game-based approach to variability-specific abstraction and refinement for lifted model checking of full CTL, interpreted over a 3-valued semantics. We propose a direct algorithm for solving a 3-valued (abstract) lifted model checking game. When the result of model checking an abstract variability model is indefinite, we suggest a new notion of refinement that eliminates indefinite results. This yields an iterative, incremental variability-specific abstraction and refinement framework, in which refinement is applied only where indefinite results exist and definite results from previous iterations are reused. The practicality of this approach is demonstrated on several variability models.
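    As a rough illustration of the refinement loop described above (a minimal Python sketch, not the paper's game-based 3-valued algorithm; FAMILY, abstract_check, and refine are hypothetical stand-ins), the idea is to join per-configuration answers into a definite or indefinite verdict, and to split abstract configuration sets only where the verdict is indefinite, keeping definite verdicts from earlier iterations:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = 1
    FALSE = 0
    INDEFINITE = 2

# Toy family: each configuration mapped to whether the property holds there.
# A real lifted checker would obtain this from the 3-valued abstract game.
FAMILY = {"cfg_a": True, "cfg_b": True, "cfg_c": False, "cfg_d": True}

def abstract_check(configs):
    """Join per-configuration answers: definite only when all agree."""
    answers = {FAMILY[c] for c in configs}
    if answers == {True}:
        return Verdict.TRUE
    if answers == {False}:
        return Verdict.FALSE
    return Verdict.INDEFINITE

def refine(configs):
    """Crude refinement: split the abstract configuration set in half."""
    cs = sorted(configs)
    mid = len(cs) // 2
    return [set(cs[:mid]), set(cs[mid:])]

def check_family(all_configs):
    """Refine only where results are indefinite; definite verdicts are kept."""
    verdicts, worklist = {}, [set(all_configs)]
    while worklist:
        part = worklist.pop()
        v = abstract_check(part)
        if v is Verdict.INDEFINITE and len(part) > 1:
            worklist.extend(refine(part))
        else:
            verdicts[frozenset(part)] = v
    return verdicts

print(check_family(FAMILY))
```

    On this toy family the sketch returns a single definite TRUE for {cfg_a, cfg_b} as one abstract state and refines down to singletons only around the disagreeing cfg_c.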

    Verification of Program Transformations with Inductive Refinement Types

    High-level transformation languages like Rascal include expressive features for manipulating large abstract syntax trees: first-class traversals, expressive pattern matching, backtracking, and generalized iterators. We present the design and implementation of an abstract interpretation tool, Rabit, for verifying inductive type and shape properties of transformations written in such languages. We describe how to perform abstract interpretation based on operational semantics, focusing on the challenges that arise when analyzing the expressive traversals and pattern matching. Finally, we evaluate Rabit on a series of transformations (normalization, desugaring, refactoring, code generators, type inference, etc.), showing that we can effectively verify the stated properties.
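    To make the shape-property idea concrete, here is a minimal, self-contained Python sketch (the Lit/Add/Sugar language, desugar, and the constructor-set abstract domain are all invented for illustration; Rabit itself analyzes Rascal programs with a far richer domain). It verifies by abstract interpretation that a toy desugaring can never leave Sugar nodes in its output:

```python
from dataclasses import dataclass
from typing import Union

# Tiny expression language; Sugar stands for syntax the transformation
# is supposed to eliminate (e.g., `inc e` as sugar for Add(e, Lit(1))).
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class Sugar:
    arg: "Expr"

Expr = Union[Lit, Add, Sugar]

def desugar(e: Expr) -> Expr:
    """Concrete transformation: rewrite every Sugar node away."""
    if isinstance(e, Lit):
        return e
    if isinstance(e, Add):
        return Add(desugar(e.left), desugar(e.right))
    return Add(desugar(e.arg), Lit(1))

def abstract_desugar(in_shape: frozenset) -> frozenset:
    """Abstract transfer: from the set of constructors the input may
    contain, over-approximate the constructors the output may contain."""
    out = set()
    if "Lit" in in_shape:
        out |= {"Lit"}
    if "Add" in in_shape:
        out |= {"Add"}
    if "Sugar" in in_shape:
        out |= {"Add", "Lit"}  # Sugar is always rewritten into Add/Lit
    return frozenset(out)

# Shape property: desugared output never contains a Sugar node.
assert desugar(Sugar(Lit(2))) == Add(Lit(2), Lit(1))
assert "Sugar" not in abstract_desugar(frozenset({"Lit", "Add", "Sugar"}))
```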

    More Scalable LTL Model Checking via Discovering Design-Space Dependencies (D3)

    Modern system design often requires comparing several models over a large design space. Different models arise out of a need to weigh different design choices, to check core capabilities of versions with varying features, or to analyze a future version against previous ones. Model checking can compare different models; however, applying model checking off-the-shelf may not scale due to the large size of the design space for today’s complex systems. We exploit relationships between different models of the same (or related) systems to optimize the model-checking search. Our algorithm, D3, preprocesses the design space and checks fewer model-checking instances, e.g., using nuXmv. It automatically prunes the search space by reducing both the number of models to check and the number of LTL properties that need to be checked for each model, while still providing a complete model-checking verdict for every individual model-property pair. We formalize heuristics that improve the performance of D3. We demonstrate the scalability of D3 through extensive experimental evaluation, e.g., by checking 1,620 real-life models for NASA’s NextGen air traffic control system. Compared to checking each model-property pair individually, D3 is up to 9.4× faster.
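    A small Python sketch of the kind of verdict reuse such preprocessing enables (the CONTAINED_IN relation, the model names, and the model_check oracle are hypothetical, and D3's actual dependency analysis and heuristics are richer): if one model's behaviours are contained in another's, a universal LTL property proven on the containing model transfers to the contained one, so fewer checker runs are needed:

```python
# Hypothetical design space: model -> models whose behaviours contain it.
# If behaviours(m) is a subset of behaviours(m'), a universal LTL property
# that holds on m' also holds on m, so its verdict can be reused.
CONTAINED_IN = {
    "m1": {"m2"},   # m1's behaviours are a subset of m2's
    "m2": set(),
    "m3": {"m2"},
}
PROPERTIES = ["G (request -> F grant)", "G !error"]

def model_check(model, prop):
    """Stand-in for one expensive checker run (e.g., a nuXmv invocation)."""
    print(f"  checking {prop!r} on {model}")
    return True  # toy oracle: every property holds

def d3_style_check(models):
    verdicts = {}
    # Check models with the fewest containers first ("largest" behaviours),
    # so later models can inherit their verdicts.
    for model in sorted(models, key=lambda m: len(CONTAINED_IN[m])):
        for prop in PROPERTIES:
            inherited = any(verdicts.get((big, prop)) is True
                            for big in CONTAINED_IN[model])
            # Short-circuit: a real run happens only if nothing was inherited.
            verdicts[(model, prop)] = inherited or model_check(model, prop)
    return verdicts

d3_style_check(["m1", "m2", "m3"])
```

    In this toy run only m2 is actually handed to the checker; m1 and m3 inherit its TRUE verdicts without any further checker invocations.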