    Chaining Test Cases for Reactive System Testing (extended version)

    Testing of synchronous reactive systems is challenging because long input sequences are often needed to drive them into a state in which a desired feature can be tested. This is particularly problematic in on-target testing, where a system is tested in its real-life application environment and the time required for resetting is high. This paper presents an approach to discovering a test case chain: a single software execution that covers a group of test goals and minimises overall test execution time. Our technique targets the scenario in which the test goals for the requirements are given as safety properties. We give conditions for the existence and minimality of a single test case chain, and we minimise the number of chains when a single chain is infeasible. We report experimental results with a prototype tool for C code generated from Simulink models and compare it to state-of-the-art test suite generators. Comment: extended version of the paper published at ICTSS'13.
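
    To make the chaining idea concrete, here is a minimal, hypothetical Python sketch (the state machine, all names, and the greedy fixed-order strategy are invented for illustration, and it ignores the goal-ordering optimisation the paper performs): the system is modelled as a transition table, a breadth-first search finds the shortest input sequence from the current state to the next test goal, and the goal-to-goal sequences are concatenated into one execution instead of resetting before every goal.

        from collections import deque

        # Hypothetical reactive system: states 0..3, inputs 'a' and 'b'.
        # TRANS[(state, input)] gives the successor state.
        TRANS = {
            (0, 'a'): 1, (0, 'b'): 0,
            (1, 'a'): 2, (1, 'b'): 0,
            (2, 'a'): 2, (2, 'b'): 3,
            (3, 'a'): 1, (3, 'b'): 0,
        }
        INPUTS = ['a', 'b']

        def shortest_path(src, goal):
            """BFS over the state graph: shortest input sequence from src to goal."""
            queue, seen = deque([(src, [])]), {src}
            while queue:
                state, seq = queue.popleft()
                if state == goal:
                    return seq
                for i in INPUTS:
                    nxt = TRANS[(state, i)]
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, seq + [i]))
            return None  # goal unreachable from src

        def chain_goals(init, goals):
            """Chain test goals into a single execution, avoiding resets."""
            state, chain = init, []
            for goal in goals:
                seq = shortest_path(state, goal)
                if seq is None:
                    raise ValueError(f"goal {goal} unreachable; start a new chain")
                chain += seq
                state = goal
            return chain

        print(chain_goals(0, [2, 3, 1]))  # ['a', 'a', 'b', 'a']: one run, three goals

    Covering the three goals separately would cost three resets plus 2 + 3 + 1 = 6 inputs; the chain needs a single run of 4 inputs and no intermediate resets, which is the saving the paper targets in on-target testing.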

    Strong Induction in Hardware Model Checking

    Symbolic model checking is a widely used technique for the automated verification of both hardware and software systems. Unbounded SAT-based symbolic model checking (SMC) algorithms are very popular in hardware verification. The principle of strong induction, also known as k-induction, is one of the first techniques for SMC. While elegant and simple to apply, properties as such can rarely be proven by strong induction directly, and even when they can be strengthened to become provable, there is no effective strategy for guessing the required depth of induction. Strong induction has therefore been largely displaced by techniques that compute inductive strengthenings based on interpolation and property directed reachability (PDR). In this thesis, we prove that strong induction is more concise than induction. We then present kAvy, an SMC algorithm that effectively uses strong induction to guide interpolation and PDR-style incremental inductive invariant construction. Unlike pure strong induction, kAvy uses PDR-style generalization to compute and strengthen an inductive trace. Unlike pure PDR, kAvy uses relative strong induction to construct an inductive invariant. The depth of induction is adjusted dynamically by minimizing a proof of unsatisfiability. We have implemented kAvy within the Avy model checker and evaluated it on HWMCC instances. Our results show that kAvy is more effective than both Avy and PDR, and that using strong induction leads to faster running times and more solved instances. Moreover, on a class of benchmarks called shift, kAvy is orders of magnitude faster than Avy, PDR, and pure strong induction.
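
    As a concrete illustration of strong induction, the following is a minimal Python sketch using the Z3 SMT solver (pip install z3-solver); the 3-bit counter, its property, and all names are invented for the example, and this is textbook k-induction, not the kAvy algorithm itself. The property "x != 5" is not provable by plain induction (k = 1), because an unreachable state such as x = 3 satisfies the property yet steps to 5, but it becomes provable at depth k = 4: exactly the situation where strong induction pays off and where guessing the right depth is the hard part.

        from z3 import And, BitVec, BitVecVal, Not, Or, Solver, unsat

        WIDTH = 3  # 3-bit state: arithmetic wraps modulo 8

        def init(s):      return s == BitVecVal(0, WIDTH)   # counter starts at 0
        def trans(s, t):  return t == s + 2                 # ... and steps by 2
        def prop(s):      return s != BitVecVal(5, WIDTH)   # safety property

        def k_induction(k):
            """True iff prop is proven by strong induction of depth k."""
            xs = [BitVec(f"x{i}", WIDTH) for i in range(k + 1)]
            chain = [trans(xs[i], xs[i + 1]) for i in range(k)]

            # Base case: prop cannot be violated within the first k states.
            base = Solver()
            base.add(init(xs[0]), *chain[:k - 1],
                     Or([Not(prop(xs[i])) for i in range(k)]))
            if base.check() != unsat:
                return False  # genuine counterexample within k steps

            # Step case: k consecutive prop-states cannot reach a violation.
            step = Solver()
            step.add(And([prop(xs[i]) for i in range(k)]),
                     *chain, Not(prop(xs[k])))
            return step.check() == unsat

        for k in range(1, 6):
            print(k, k_induction(k))  # False for k = 1..3, True for k >= 4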

    Exploiting Satisfiability Solvers for Efficient Logic Synthesis

    Logic synthesis is an important part of electronic design automation (EDA) flows, which enable the implementation of digital systems. As design size and complexity increase, the data structures and algorithms for logic synthesis must adapt and improve in order to keep pace and to maintain acceptable runtime and high-quality results. Large circuits have often been represented using binary decision diagrams (BDDs), which were rapidly adopted by logic synthesis tools beginning in the 1980s. Nowadays, BDD-based algorithms are still being enhanced, but after some 35 years of research the possibilities for improvement are somewhat saturated. Alternatively, the first EDA applications that exploit Boolean satisfiability (SAT) were developed in the 1990s. Despite the worst-case exponential runtime of SAT solvers, rapid progress in their performance has enabled the creation of efficient SAT-based algorithms. Yet logic synthesis started using SAT solvers more broadly only in the last decade. Therefore, thorough research is still required both for exploiting SAT solvers and for encoding logic synthesis problems into SAT. Our main goal in this thesis is to facilitate and promote the further integration of SAT solvers into EDA by proposing and evaluating novel SAT-based algorithms that can be used as building blocks in logic synthesis tools. First, we propose a rapid algorithm for LEXSAT, which generates satisfying assignments in lexicographic order. We show that LEXSAT can bring canonicity, which guarantees the generation of unique results, when using SAT solvers in EDA applications. Next, we present a new SAT-based algorithm that progressively generates irredundant sums of products (SOPs), which still play a crucial role in many logic synthesis tools. Using LEXSAT, for the first time we can generate canonical SAT-based SOPs that, much like BDD-based SOPs, are unique for a given function and variable order, but that can relax canonicity in order to improve speed and scalability. Unlike BDDs, due to its progressive nature, our algorithm can generate partial SOPs for applications that can work with incomplete circuit functionality. Notably, both LEXSAT and the SAT-based SOPs are applicable beyond logic synthesis and EDA. Finally, we focus on resubstitution, which reimplements a given Boolean function as a new function that depends on a set of existing functions called divisors. We propose the carving interpolation algorithm that, unlike traditional Craig interpolation, forces the use of a specific divisor as an input of the new function. This is particularly useful for global circuit restructuring and for some synthesis-based engineering change order (ECO) algorithms. Furthermore, we compare two existing SAT-based methodologies for resubstitution, which are used for post-mapping logic optimisation: the first combines SAT-based functional dependency checking with Craig interpolation, as our carving interpolation does; the second is based on cube enumeration and is similar to the SAT-based SOP generation. The initial implementations of our novel SAT-based algorithms offer better performance, new features, or both, compared to their state-of-the-art counterparts. As the results indicate, further thorough development of SAT-based algorithms for logic synthesis, like that performed for BDDs in the past, can help overcome existing limitations and keep up with growing designs and design complexity.
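
    To give a flavour of the LEXSAT primitive described above, here is a minimal Python sketch using the Z3 solver's assumption mechanism (pip install z3-solver); the example formula and all names are invented, and the thesis's algorithm is considerably more optimised than this one-solver-call-per-variable version. Scanning the variables in the given order and greedily fixing each to its smaller value (False) whenever the formula stays satisfiable yields the lexicographically smallest satisfying assignment.

        from z3 import Bool, Not, Or, Solver, sat

        def lexsat(solver, variables):
            """Lexicographically smallest satisfying assignment (False < True)."""
            if solver.check() != sat:
                return None  # formula is unsatisfiable
            fixed, values = [], {}
            for v in variables:
                # Try the smaller value first; keep it only if still satisfiable.
                if solver.check(*fixed, Not(v)) == sat:
                    fixed.append(Not(v))
                    values[str(v)] = False
                else:
                    fixed.append(v)
                    values[str(v)] = True
            return values

        a, b, c = Bool('a'), Bool('b'), Bool('c')
        s = Solver()
        s.add(Or(a, b), Or(Not(b), c))    # (a or b) and (not b or c)
        print(lexsat(s, [a, b, c]))       # {'a': False, 'b': True, 'c': True}

    Because the assignment returned for a fixed variable order is unique, repeated queries on the same function give the same result, which is the canonicity property the abstract refers to.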