
    Model checking large design spaces: Theory, tools, and experiments

In the early stages of design, there are frequently many different models of the system under development, constituting a design space. The different models arise from a need to weigh different design choices, to check core capabilities of system versions with varying features, or to analyze a future version against previous ones in the product line. Each unique combination of choices yields a competing system model that differs in assumptions, implementation, and configuration. Formal verification techniques, like model checking, can aid system development by systematically comparing the different models in terms of functional correctness. However, applying model checking off the shelf may not scale due to the large size of the design spaces for today's complex systems. We present scalable algorithms for design-space exploration using model checking that enable exhaustive comparison of all competing models in large design spaces.

Model checking a design space entails checking multiple models and properties. Given a formal representation of the design space and properties expressing system specifications, we present algorithms that automatically prune the design space by finding inter-model relationships and property dependencies. Our design-space reduction technique is compatible with off-the-shelf model checkers and requires checking only a small subset of models and properties to provide verification results for every model-property pair in the original design space. We evaluate our methodology on case studies from NASA and Boeing; our techniques offer up to 9.4× speedup compared to traditional approaches.

We observe that sequential enumeration of the design space generates models with small incremental differences. Typical model-checking algorithms do not take advantage of this information; they end up re-verifying “already-explored” state spaces across models. We present algorithms that learn and reuse information from solving related models against a property in sequential model-checking runs. We formalize heuristics to maximize reuse between runs by efficient “hashing” of models.  Extensive experiments show that information reuse boosts the runtime performance of sequential model checking by up to 5.48×.

Model checking design spaces often mandates checking several properties on individual models. State-of-the-art tools do not optimally exploit subproblem sharing between properties, leaving an opportunity to save verification resources via concurrent verification of “nearly-identical” properties. We present a near-linear-runtime algorithm for partitioning properties into provably high-affinity groups for individual model-checking tasks. The verification effort expended for one property in a group can be directly reused to accelerate the verification of the others. The high-affinity groups may be refined based on semantic feedback to provide an optimal multi-property localization solution. Our techniques significantly improve multi-property model-checking performance, and often yield >4.0× speedup.
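To make the information-reuse idea concrete, the following Python sketch hashes a model's components and greedily orders the design space so that consecutive model-checking runs share as much structure as possible. The representation (models as sets of canonical gate-definition strings) and the greedy ordering are illustrative assumptions, not the dissertation's exact hashing heuristics.

```python
import hashlib

def model_signature(model_gates):
    """Order-insensitive signature over a model's gate definitions.

    Models that share many identical gates share many identical
    per-gate digests, so signature overlap approximates how much
    information learned in one run (e.g., reachability lemmas)
    carries over to the next. `model_gates` is assumed to be an
    iterable of canonical gate-definition strings.
    """
    return {hashlib.sha256(g.encode()).hexdigest() for g in model_gates}

def reuse_ratio(sig_prev, sig_next):
    """Fraction of the next model already 'seen' in the previous run."""
    if not sig_next:
        return 0.0
    return len(sig_prev & sig_next) / len(sig_next)

def order_for_reuse(models):
    """Greedy sequential ordering: always pick the model that shares
    the most structure with the one just checked, so learned
    information transfers best between consecutive runs."""
    if not models:
        return []
    remaining = {name: model_signature(gates) for name, gates in models.items()}
    order = []
    current = next(iter(remaining))  # arbitrary starting model
    while remaining:
        sig = remaining.pop(current)
        order.append(current)
        if remaining:
            current = max(remaining, key=lambda m: reuse_ratio(sig, remaining[m]))
    return order
```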
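Likewise, a minimal sketch of near-linear affinity grouping, assuming each property is summarized by the state variables in its cone of influence; bucketing by the k smallest support variables is a hypothetical stand-in for the dissertation's actual affinity metric.

```python
from collections import defaultdict

def affinity_groups(property_supports, k=3):
    """Partition properties into high-affinity groups in near-linear time.

    `property_supports` maps a property name to the set of state
    variables in its cone of influence. Properties whose k smallest
    support variables coincide land in the same bucket, a cheap proxy
    for 'their localized verification subproblems largely overlap'.
    """
    buckets = defaultdict(list)
    for prop, support in property_supports.items():
        key = tuple(sorted(support)[:k])  # cheap bucketing key
        buckets[key].append(prop)
    return list(buckets.values())

# Example: p1 and p2 share a cone of influence, p3 is unrelated,
# so verification effort for p1 can be reused for p2 but not p3.
groups = affinity_groups({
    "p1": {"v1", "v2", "v3"},
    "p2": {"v1", "v2", "v3", "v4"},
    "p3": {"v9"},
})  # -> [['p1', 'p2'], ['p3']]
```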
Building upon these ideas, we optimize parallel verification to maximize the benefits of our proposed techniques. Model-checking tools utilize parallelism either in portfolio mode, where different algorithmic strategies run concurrently, or in partitioning mode, where disjoint property subsets are verified independently. However, both approaches often degrade into highly redundant work across processes, or under-utilize available processes. We propose methods to minimize redundant computation and to dynamically optimize work distribution when checking multiple properties on individual models. Our techniques offer a median 2.4× speedup for complex parallel verification tasks with thousands of properties.
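A minimal sketch of the dynamic work-distribution idea, under the assumption that property groups are pulled from a shared pool as processes become free, rather than assigned as one fixed static partition per process; `check_group` is a hypothetical placeholder for invoking an off-the-shelf model checker.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def check_group(model, group):
    """Placeholder for one model-checking task over a property group;
    a real implementation would invoke an off-the-shelf model checker."""
    ...

def verify_dynamic(model, groups, max_workers=8):
    """Dynamically distribute property groups over a process pool.

    Submitting more tasks than workers lets the pool hand each process
    a new group as soon as it finishes its current one, avoiding the
    idle time a static partition incurs when groups finish early.
    """
    results = {}
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(check_group, model, g): tuple(g) for g in groups}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

if __name__ == "__main__":
    # hypothetical property groups produced by the affinity partitioning
    demo_groups = [["p1", "p2"], ["p3"]]
    verify_dynamic("design_model_A", demo_groups, max_workers=2)
```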