    Optimizing Majority-Inverter Graphs with Functional Hashing

    A Majority-Inverter Graph (MIG) is a recently introduced logic representation form whose algebraic and Boolean properties allow for efficient logic optimization. In particular, when considering logic depth reduction, MIG algorithms have obtained significantly better synthesis results than state-of-the-art approaches based on AND-inverter graphs and commercial tools. In this paper, we present a new MIG optimization algorithm targeting size minimization based on functional hashing. The proposed algorithm makes use of minimum MIG representations, precomputed for functions of up to 4 variables using an approach based on Satisfiability Modulo Theories (SMT). Experimental results show that heavily optimized MIGs can be further minimized in size thanks to the proposed methodology. When using the optimized MIGs as a starting point for technology mapping, we were able to improve both depth and area for the arithmetic instances of the EPFL benchmarks beyond the results currently achievable by state-of-the-art logic synthesis algorithms.
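
    The core idea of functional hashing is a lookup from the truth table of a small cut (here up to 4 inputs) to a precomputed minimum-size replacement. The sketch below is a minimal illustration of that lookup, not the paper's implementation: the `min_mig_db` contents, the stored node counts, and all function names are hypothetical placeholders, and a real database would be filled offline by the SMT-based exact synthesis mentioned in the abstract.

```python
# Illustrative sketch of functional hashing on 4-input cuts (hypothetical names).
# A real tool would canonicalize truth tables (e.g., NPN classes) and store
# SMT-precomputed minimum MIGs; here the database is just a toy dictionary.

def cut_truth_table(cut_function, num_inputs=4):
    """Enumerate all input patterns and pack the outputs into an integer key."""
    tt = 0
    for assignment in range(1 << num_inputs):
        bits = [(assignment >> i) & 1 for i in range(num_inputs)]
        if cut_function(bits):
            tt |= 1 << assignment
    return tt

# Hypothetical precomputed database: truth-table key -> (size, minimum MIG).
min_mig_db = {
    cut_truth_table(lambda b: (b[0] & b[1]) | (b[1] & b[2]) | (b[0] & b[2])):
        (1, "MAJ(x0, x1, x2)"),            # 3-input majority needs one MIG node
}

def rewrite_cut(cut_function, current_size):
    """Replace a cut with the stored minimum MIG if that is smaller."""
    key = cut_truth_table(cut_function)
    best = min_mig_db.get(key)
    if best is not None and best[0] < current_size:
        return best                  # (size, replacement): accept the improvement
    return (current_size, None)      # keep the existing structure

if __name__ == "__main__":
    # A cut currently realized with, say, 3 AND/OR nodes hashes to one MAJ node.
    maj = lambda b: (b[0] & b[1]) | (b[1] & b[2]) | (b[0] & b[2])
    print(rewrite_cut(maj, current_size=3))   # -> (1, 'MAJ(x0, x1, x2)')
```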

    Busy Man’s Synthesis: Combinational Delay Optimization With SAT

    Boolean SAT solving can be used to find a minimum-size logic network for a given small Boolean function. This paper extends the SAT formulation to find a minimum-size network under delay constraints, given in terms of input arrival times and a maximum depth. After integration into a depth-optimizing mapping algorithm, the proposed SAT formulation can be used to perform logic rewriting that reduces the logic depth of a network. It is shown that, to be effective, the logic rewriting algorithm requires (i) a fast SAT formulation and (ii) heuristics to quickly determine whether the given delay constraints are feasible for a given function. The proposed algorithm is more versatile than previous algorithms, which is confirmed by the experimental results.
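
    A cheap feasibility filter of the kind the abstract calls for can be built from a lower bound on achievable depth: with 2-input gates of unit delay, repeatedly combining the two earliest-arriving signals gives the best possible output arrival time, so a SAT call can be skipped whenever even that bound exceeds the depth budget. The sketch below is only this generic heuristic under those assumptions; it is not the paper's SAT formulation or its exact feasibility check.

```python
# Illustrative lower-bound feasibility check for delay-constrained rewriting.
# Assumes 2-input gates with unit delay; this is a generic Huffman-style bound,
# not the SAT encoding or the specific heuristic from the paper.
import heapq

def delay_lower_bound(arrival_times):
    """Best-case output arrival time when every gate has two inputs and unit delay."""
    heap = list(arrival_times)
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, max(a, b) + 1)  # a gate is ready one unit after its latest input
    return heap[0] if heap else 0

def constraints_feasible(arrival_times, max_depth):
    """Skip the expensive SAT call when even the best case violates the budget."""
    return delay_lower_bound(arrival_times) <= max_depth

if __name__ == "__main__":
    print(constraints_feasible([0, 0, 0, 2], max_depth=3))  # True: the bound is 3
    print(constraints_feasible([0, 0, 0, 2], max_depth=2))  # False: cannot beat 3
```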

    Automated abstraction by incremental refinement in interpolant-based model checking

    This paper addresses the field of Unbounded Model Checking (UMC) based on SAT engines, where Craig interpolants have recently gained wide acceptance as an automated abstraction technique. We start from the observation that interpolants can be quite effective on large verification instances. As they operate on SAT-generated refutation proofs, interpolants are very good at automatically abstracting away facts that are not significant for the proof. In this work, we push forward the idea of generating abstractions without resorting to SAT proofs, and of accepting (or rejecting) abstractions whenever they do (or do not) fulfill given adequacy constraints. We propose an integrated approach that smoothly combines the capabilities of interpolation with abstraction and over-approximation techniques that do not directly derive from SAT refutation proofs. The driving idea of this combination is to incrementally generate, by refinement, an abstract (over-approximate) image, built up from equivalences, implications, ternary and localization abstraction, and only then, if necessary, from SAT refutation proofs. Experimental results, derived from the verification of hard problems, show the robustness of our approach.
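
    One of the over-approximations listed above, ternary abstraction, replaces selected signals with an unknown value X and evaluates the circuit in three-valued logic, so every concrete completion of the X assignments is covered by the abstract result. The sketch below shows generic three-valued evaluation over a tiny AND/NOT netlist; the netlist format and names are illustrative assumptions, not the paper's data structures.

```python
# Generic ternary (three-valued) evaluation: 0, 1, or 'X' (unknown).
# Illustrates why ternary abstraction over-approximates: every concrete
# completion of the X inputs is compatible with the computed result.

def t_not(a):
    return 'X' if a == 'X' else 1 - a

def t_and(a, b):
    if a == 0 or b == 0:          # a controlling 0 decides the output even under X
        return 0
    if a == 1 and b == 1:
        return 1
    return 'X'

def eval_netlist(gates, values):
    """gates: list of (out, kind, operands); values: dict of known signal values."""
    for out, kind, ops in gates:
        if kind == 'AND':
            values[out] = t_and(values[ops[0]], values[ops[1]])
        elif kind == 'NOT':
            values[out] = t_not(values[ops[0]])
    return values

if __name__ == "__main__":
    # f = AND(a, NOT(b)); abstracting b to X still yields a definite 0 when a = 0.
    gates = [('nb', 'NOT', ['b']), ('f', 'AND', ['a', 'nb'])]
    print(eval_netlist(gates, {'a': 0, 'b': 'X'})['f'])  # 0
    print(eval_netlist(gates, {'a': 1, 'b': 'X'})['f'])  # X
```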

    Partitioning Interpolant-Based Verification for Effective Unbounded Model Checking

    Interpolant-based model checking has been shown to be effective on large verification instances, as it efficiently combines automated abstraction and reachability fixed-point checks. On the other hand, methods based on variable quantification have proved their ability to remove free inputs, thus projecting the search space onto the state variables. In this paper we propose an integrated approach which combines the abstraction power of interpolation with techniques that rely on AIG and/or BDD representations of states, directly supporting variable quantification and fixed-point checks. The underlying idea of this combination is to adopt AIG- or BDD-based quantification to limit and restrict the search space and the complexity of the interpolant-based approach. The exploited strategies, most of which are individually well known, are integrated with a new flavor, specifically designed to improve their effectiveness on difficult verification instances. Experimental results, specifically oriented to hard-to-solve verification problems, show the robustness of our approach.
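
    The quantification step relied on above is existential quantification of the free inputs, ∃x.f = f|x=0 ∨ f|x=1, which projects a transition set onto the state variables. On an explicit set of satisfying assignments this reduces to dropping the quantified variable, which is what the sketch below does; real tools perform the same operation on BDDs or AIGs, and the names here are illustrative only.

```python
# Existential quantification of a variable from a Boolean relation.
# Real tools do this on BDDs/AIGs via Shannon cofactors; an explicit set of
# satisfying assignments is used here only to keep the example self-contained.

def exists(var, assignments):
    """Project away 'var': keep each assignment with that variable dropped."""
    projected = set()
    for a in assignments:
        reduced = tuple(sorted((v, val) for v, val in a if v != var))
        projected.add(reduced)
    return projected

if __name__ == "__main__":
    # f(i, s) holds for (i=0, s=1) and (i=1, s=0); quantifying the free input i
    # leaves the projection onto the state variable: {s=0, s=1}.
    f = {(('i', 0), ('s', 1)), (('i', 1), ('s', 0))}
    print(sorted(exists('i', f)))   # [(('s', 0),), (('s', 1),)]
```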

    DAG-Aware Circuit Compression For Formal Verification

    The choice of representation for circuits and Boolean formulae in a formal verification tool is important for two reasons. First of all, representation compactness is necessary in order to keep memory consumption low; this is witnessed by the importance of the maximum processable design size for equivalence checkers. Second, many formal verification algorithms are sensitive to redundancies in the design being processed. To address these concerns, three different auto-compressing representations for Boolean circuit networks and formulas have been suggested in the literature. In this paper, we attempt to find a blend of features from these alternatives that allows us to remove as much redundancy as possible without sacrificing runtime. By studying how the representation size varies as we change parameters, we show that the use of only one operator node is suboptimal, and demonstrate that the most powerful of the proposed reduction rules, two-level minimization, can actually be harmful. We correct the bad behavior of two-level minimization by devising a simple linear simplification algorithm that can remove tens of thousands of nodes on examples where all obvious redundancies have already been removed. The combination of our compactor with the simplest representation outperforms all of the alternatives we have studied, with a theoretical runtime bound that is at least as good as that of the three studied representations.
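
    Auto-compressing representations of the kind compared here are typically built through a hash-consing constructor that applies local reduction rules (constant propagation, duplicate operands, complementary operands) before creating a node, so identical subcircuits are shared. The sketch below is a generic constructor for an AND/NOT network, written as an assumption about how such representations work in general rather than as any of the three representations studied in the paper.

```python
# Generic structural hashing ("strashing") for an AND/NOT network.
# Nodes are created through mk_and, which applies local reduction rules and
# reuses an existing node when an identical one is already in the table.

class Network:
    TRUE, FALSE = 'CONST1', 'CONST0'

    def __init__(self):
        self.table = {}   # (min_operand, max_operand) -> node name
        self.count = 0

    def mk_not(self, a):
        # Complement edges are kept symbolic; double negation cancels.
        return a[1:] if a.startswith('!') else '!' + a

    def mk_and(self, a, b):
        # Local reduction rules applied before any node is built.
        if a == self.FALSE or b == self.FALSE or a == self.mk_not(b):
            return self.FALSE          # x & 0 = 0, x & !x = 0
        if a == self.TRUE:
            return b                   # 1 & x = x
        if b == self.TRUE or a == b:
            return a                   # x & 1 = x, x & x = x
        key = (min(a, b), max(a, b))   # commutativity: one canonical key
        if key not in self.table:      # hash-consing: share identical structure
            self.count += 1
            self.table[key] = f"n{self.count}"
        return self.table[key]

if __name__ == "__main__":
    net = Network()
    x = net.mk_and('a', 'b')
    y = net.mk_and('b', 'a')               # shared with x, no new node
    z = net.mk_and('a', net.mk_not('a'))   # reduced to constant false
    print(x, y, z, net.count)              # n1 n1 CONST0 1
```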

    Effective SAT solving

    A growing number of problem domains are successfully being tackled by SAT solvers. This thesis contributes to that trend by pushing the state of the art of core SAT algorithms and their implementation, but also in several important application areas. It consists of five papers: the first details the implementation of the SAT solver MiniSat and the other four discuss specific issues related to different application domains. In the first paper, catering to the trend of extending and adapting SAT solvers, we present a detailed description of MiniSat, a SAT solver designed for that particular purpose. The description additionally bridges a gap between theory and practice, serving as a tutorial on modern SAT solving algorithms. Among other things, we describe how to solve a series of related SAT problems efficiently, called incremental SAT solving. For finding finite first-order models, the MACE-style method based on SAT solving is well known. In the second paper we improve the basic method with several techniques that can be loosely classified as either transformations that make the reduction to SAT result in fewer clauses, or techniques designed to speed up the search of the SAT solver. The resulting tool, called Paradox, won the SAT/Models division of the CASC competition in 2003 and has not been beaten since by a single general-purpose model finding tool. In the last decade the interest in methods for safety property verification based on SAT solving has been steadily growing. One example of such a method is temporal induction, which requires a sequence of increasingly stronger induction proofs to be performed. In the third paper we show how this sequence of proofs can be solved efficiently using incremental SAT solving. The last two papers consider two frequently occurring types of encodings: (1) encoding circuits into CNF, and (2) encoding 0-1 integer linear programming into CNF and using incremental SAT to solve the intended optimization problem. Several encoding patterns occur over and over again in this thesis, and elsewhere; the most noteworthy are incremental SAT, lazy encoding of constraints, and bit-wise encoding of arithmetic influenced by hardware designs for adders and multipliers. The general conclusion is: deploying SAT solvers effectively requires implementations that are efficient, yet easily adaptable to specific application needs. Moreover, to get the best results, it is worth spending effort to make sure that one uses the best encodings possible for an application. However, this is not absolutely necessary: for some applications naive problem encodings work just fine, which is indeed part of the appeal of SAT solving.
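
    Two of the recurring patterns mentioned above, circuit-to-CNF encoding and incremental solving, can be illustrated together: each gate becomes a few clauses (the standard Tseitin construction), and repeated queries are phrased as assumption literals so that the clause database and the learned clauses survive between calls. The sketch below only builds the clause lists for a hypothetical two-gate circuit; it does not invoke MiniSat, and the variable numbering is arbitrary.

```python
# Standard Tseitin encoding of AND gates plus the shape of an incremental query.
# Variables are positive integers, literals are signed integers (DIMACS style).

def encode_and(out, a, b):
    """CNF clauses for out <-> (a AND b)."""
    return [[-out, a],          # out implies a
            [-out, b],          # out implies b
            [out, -a, -b]]      # (a AND b) implies out

def encode_circuit():
    # Hypothetical tiny circuit: y = (x1 AND x2), z = (y AND x3).
    x1, x2, x3, y, z = 1, 2, 3, 4, 5
    clauses = encode_and(y, x1, x2) + encode_and(z, y, x3)
    return clauses, z

if __name__ == "__main__":
    clauses, z = encode_circuit()
    print(clauses)
    # In an incremental setting the clause database stays loaded across calls and
    # each query only changes the assumptions, e.g. solve(assumptions=[z]) followed
    # by solve(assumptions=[-z]), so learned clauses are reused between the checks.
```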
