30 research outputs found

    Logics for digital circuit verification: theory, algorithms, and applications


    Probabilistic representation and manipulation of Boolean functions using free Boolean diagrams

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 145-149). By Amelia Huimin Shen.

    New Data Structures and Algorithms for Logic Synthesis and Verification

    The strong interaction between Electronic Design Automation (EDA) tools and Complementary Metal-Oxide Semiconductor (CMOS) technology has contributed substantially to the advancement of modern digital electronics. The continuous downscaling of CMOS Field Effect Transistor (FET) dimensions enabled the semiconductor industry to fabricate digital systems with higher circuit density at reduced cost. To keep pace with technology, EDA tools are challenged to handle both digital designs of growing functionality and device models of increasing complexity. Nevertheless, whereas the downscaling of CMOS technology requires ever more complex physical design models, the logic abstraction of a transistor as a switch has not changed even with the introduction of 3D FinFET technology. As a consequence, modern EDA tools are fine-tuned for CMOS technology, and the underlying design methodologies are based on CMOS logic primitives, i.e., negative unate logic functions. While it is clear that CMOS logic primitives will remain the building blocks of digital systems for the next ten years, there is no evidence that they are also the optimal basis for EDA software.

    In EDA, the efficiency of methods and tools is measured by metrics such as (i) result quality, for example the performance of a digital circuit, (ii) runtime, and (iii) memory footprint on the host computer. When optimizing these metrics, adherence to a specific logic model is no longer important. Indeed, the key to the success of an EDA technique is the expressive power of the logic primitives used to model and solve the problem, which determines its capability to reach better metrics.

    In this thesis, we investigate new logic primitives for EDA tools. We improve the efficiency of logic representation, manipulation, and optimization tasks by taking advantage of majority and biconditional logic primitives, and we develop synthesis tools that exploit their expressiveness. Our tools show strong results compared to state-of-the-art academic and commercial synthesis tools; indeed, we produce the best results for several public benchmarks. Beyond the enhanced synthesis power, our methods are the natural and native logic abstraction for circuit design in emerging nanotechnologies, where majority and biconditional logic are the primitive gates of the physical implementation.

    We accelerate formal methods by (i) studying properties of logic circuits and (ii) developing new frameworks for logic reasoning engines. We prove non-trivial dualities for the property-checking problem in logic circuits; our findings enable substantial speed-ups in solving circuit satisfiability. We develop an alternative Boolean satisfiability framework based on majority functions and prove that, while the general problem remains intractable, practical restrictions of it can be solved efficiently. Finally, we focus on reversible logic, where we propose a new equivalence-checking approach that exploits the invertibility of computation and the functionality of reversible gates in the formulation of the problem; this yields a speed-up of one order of magnitude over the state-of-the-art solution. We argue that new approaches to solving EDA problems are necessary, as we have reached a point in technology where keeping pace with design goals is tougher than ever.
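
    As a concrete illustration of the primitives discussed above (a sketch of our own, not code from the thesis), the three-input majority function and the biconditional can be written in a few lines of Python; the majority function generalizes both AND and OR, which is the source of its extra expressive power:

        # Illustrative sketch, not from the thesis: the two logic primitives
        # discussed in the abstract above, over plain Python booleans/ints.

        def maj(a, b, c):
            """Three-input majority: M(a, b, c) = ab + ac + bc."""
            return (a and b) or (a and c) or (b and c)

        def biconditional(a, b):
            """Biconditional (XNOR): true iff both inputs agree."""
            return a == b

        # Fixing the third input of maj yields AND and OR, respectively:
        assert all(maj(a, b, 0) == (a and b) for a in (0, 1) for b in (0, 1))
        assert all(maj(a, b, 1) == (a or b) for a in (0, 1) for b in (0, 1))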

    Exploiting Satisfiability Solvers for Efficient Logic Synthesis

    Logic synthesis is an important part of electronic design automation (EDA) flows, which enable the implementation of digital systems. As design size and complexity increase, the data structures and algorithms for logic synthesis must adapt and improve in order to keep pace while maintaining acceptable runtime and high-quality results. Large circuits were often represented using binary decision diagrams (BDDs), which logic synthesis tools rapidly adopted beginning in the 1980s. BDD-based algorithms are still being enhanced today, but after some 35 years of research the room for improvement is somewhat saturated. Alternatively, the first EDA applications exploiting Boolean satisfiability (SAT) were developed in the 1990s. Despite the worst-case exponential runtime of SAT solvers, rapid progress in their performance has enabled efficient SAT-based algorithms. Yet logic synthesis has begun to use SAT solvers more widely only in the last decade, so thorough research is still required both for exploiting SAT solvers and for encoding logic synthesis problems into SAT.

    Our main goal in this thesis is to facilitate and promote the further integration of SAT solvers into EDA by proposing and evaluating novel SAT-based algorithms that can be used as building blocks in logic synthesis tools. First, we propose a rapid algorithm for LEXSAT, which generates satisfying assignments in lexicographic order. We show that LEXSAT can bring canonicity, which guarantees the generation of unique results, when using SAT solvers in EDA applications.

    Next, we present a new SAT-based algorithm that progressively generates irredundant sums of products (SOPs), which still play a crucial role in many logic synthesis tools. Using LEXSAT, for the first time we can generate canonical SAT-based SOPs that, much like BDD-based SOPs, are unique for a given function and variable order, but that can relax canonicity in order to improve speed and scalability. Unlike BDDs, due to its progressive nature, our algorithm can generate partial SOPs for applications that can work with incomplete circuit functionality. Notably, both LEXSAT and the SAT-based SOPs are applicable beyond logic synthesis and EDA.

    Finally, we focus on resubstitution, which reimplements a given Boolean function as a new function that depends on a set of existing functions called divisors. We propose the carving interpolation algorithm that, unlike traditional Craig interpolation, forces the use of a specific divisor as an input of the new function. This is particularly useful for global circuit restructuring and for some synthesis-based engineering change order (ECO) algorithms. Furthermore, we compare two existing SAT-based methodologies for resubstitution, which are used for post-mapping logic optimisation: the first combines SAT-based functional-dependency checking and Craig interpolation, which are also used for our carving interpolation; the second is based on cube enumeration and is similar to the SAT-based SOP generation.

    The initial implementations of our novel SAT-based algorithms offer better performance, new features, or both, compared to their state-of-the-art counterparts. As the results indicate, further thorough development of SAT-based algorithms for logic synthesis, like that performed for BDDs in the past, can help overcome existing limitations and keep up with growing designs and design complexity.
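
    To make the LEXSAT idea concrete, here is a minimal sketch of the classic assumption-based approach (our illustration, not the thesis implementation, which is a more refined algorithm), using the python-sat package: variables are decided one at a time in the given order, preferring 0, so the final assignment is the lexicographically smallest model.

        # Minimal LEXSAT sketch using python-sat (pip install python-sat).
        from pysat.solvers import Glucose3

        def lexsat(clauses, variables):
            """Return the lex-smallest model over `variables`, or None."""
            solver = Glucose3(bootstrap_with=clauses)
            if not solver.solve():
                return None
            fixed = []  # literals decided so far, in variable order
            for v in variables:
                # Prefer v = 0; keep that choice only if still satisfiable.
                if solver.solve(assumptions=fixed + [-v]):
                    fixed.append(-v)
                else:
                    fixed.append(v)
            return fixed

        # (x1 or x2) and (not x1 or x3): lex-smallest model is 0, 1, 0.
        print(lexsat([[1, 2], [-1, 3]], [1, 2, 3]))  # -> [-1, 2, -3]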

    Deterministic, Efficient Variation of Circuit Components to Improve Resistance to Reverse Engineering

    This research proposes two alternative methods for generating semantically equivalent circuit variants whose internal structure is pseudo-randomly determined. Component fusion deterministically selects subcircuits using a component-identification algorithm and replaces them using a deterministic algorithm that generates canonical logic forms. Component encryption alters the semantics of individual circuit components using an encoding function, but preserves the overall circuit semantics by decoding signal values later in the circuit. Experiments were conducted to compare the performance of component fusion and component encryption against representative trials of subcircuit selection-and-replacement and Boundary Blurring, two previously defined methods for circuit obfuscation. Overall, the results support the conclusion that both component fusion and component encryption generate more secure variants than previous methods, and that these variants are more efficient in terms of circuit delay and the power and area required for their implementation.
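
    The component-encryption idea can be illustrated with a toy example (ours, with hypothetical signal names, not the paper's tooling): an internal signal is XOR-encoded with a key bit and decoded just before it is consumed, so the observable input-output behaviour is unchanged while the internal structure differs.

        # Toy illustration of component encryption (illustration only).
        def original_circuit(a, b, c):
            t = a & b              # internal signal
            return t ^ c

        def encrypted_variant(a, b, c, key=1):
            t_enc = (a & b) ^ key  # encoded internal signal
            t = t_enc ^ key        # decoded later in the circuit
            return t ^ c

        # The observable semantics are preserved for every input:
        assert all(original_circuit(a, b, c) == encrypted_variant(a, b, c)
                   for a in (0, 1) for b in (0, 1) for c in (0, 1))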

    Model checking large design spaces: Theory, tools, and experiments

    In the early stages of design, there are frequently many different models of the system under development, constituting a design space. The different models arise from the need to weigh different design choices, to check core capabilities of system versions with varying features, or to analyze a future version against previous ones in the product line. Every unique combination of choices yields competing system models that differ in their assumptions, implementations, and configurations. Formal verification techniques, like model checking, can aid system development by systematically comparing the different models in terms of functional correctness. However, applying model checking off the shelf may not scale, due to the large size of the design spaces of today's complex systems.

    We present scalable algorithms for design-space exploration using model checking that enable exhaustive comparison of all competing models in large design spaces. Model checking a design space entails checking multiple models and properties. Given a formal representation of the design space and properties expressing system specifications, we present algorithms that automatically prune the design space by finding inter-model relationships and property dependencies. Our design-space reduction technique is compatible with off-the-shelf model checkers and only requires checking a small subset of models and properties to provide verification results for every model-property pair in the original design space. We evaluate our methodology on case studies from NASA and Boeing; our techniques offer up to 9.4× speedup compared to traditional approaches.

    We observe that sequential enumeration of the design space generates models with small incremental differences. Typical model-checking algorithms do not take advantage of this information; they end up re-verifying “already-explored” state spaces across models. We present algorithms that learn and reuse information from solving related models against a property in sequential model-checking runs. We formalize heuristics to maximize reuse between runs by efficient “hashing” of models. Extensive experiments show that information reuse boosts the runtime performance of sequential model checking by up to 5.48×.

    Model checking design spaces often mandates checking several properties on individual models. State-of-the-art tools do not optimally exploit subproblem sharing between properties, leaving an opportunity to save verification resources via concurrent verification of “nearly-identical” properties. We present a near-linear-runtime algorithm for partitioning properties into provably high-affinity groups for individual model-checking tasks. The verification effort expended for one property in a group can be directly reused to accelerate the verification of the others, and the high-affinity groups may be refined based on semantic feedback to provide an optimal multi-property localization solution. Our techniques significantly improve multi-property model-checking performance, often yielding >4.0× speedup.

    Building upon these ideas, we optimize parallel verification to maximize the benefits of the proposed techniques. Model-checking tools utilize parallelism either in portfolio mode, where different algorithm strategies run concurrently, or in partitioning mode, where disjoint property subsets are verified independently. However, both approaches often degrade into highly redundant work across processes, or under-utilize the available processes. We propose methods to minimize redundant computation and to dynamically optimize work distribution when checking multiple properties on individual models. Our techniques offer a median 2.4× speedup for complex parallel verification tasks with thousands of properties.
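
    One plausible way to picture the affinity-grouping step (a sketch under assumed details; the thesis's actual affinity metric and refinement procedure are more sophisticated) is to group properties whose structural supports, i.e., the signals they mention, overlap strongly, for example by greedy grouping under a Jaccard-similarity threshold:

        # Sketch of affinity-based property grouping (assumed metric).
        def jaccard(s1, s2):
            return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

        def group_properties(supports, threshold=0.5):
            groups = []  # list of (representative support, member names)
            for name, sup in supports.items():
                for rep, members in groups:
                    if jaccard(sup, rep) >= threshold:
                        members.append(name)
                        break
                else:
                    groups.append((sup, [name]))
            return [members for _, members in groups]

        props = {"p1": {"req", "ack", "busy"},
                 "p2": {"req", "ack", "done"},
                 "p3": {"reset", "mode"}}
        print(group_properties(props))  # -> [['p1', 'p2'], ['p3']]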

    Approximate logic circuits: Theory and applications

    CMOS technology scaling, the process of shrinking transistor dimensions following Moore's law, has been the thrust behind increasingly powerful integrated circuits for over half a century. As dimensions are scaled to a few tens of nanometers, process and environmental variations can significantly alter transistor characteristics, degrading reliability and reducing the performance gains of CMOS designs with technology scaling. Although the design solutions proposed in recent years to improve the reliability of CMOS designs are power-efficient, the performance penalty associated with them further reduces the performance gains from technology scaling; hence these solutions are not well suited for high-performance designs.

    This thesis proposes approximate logic circuits as a new logic synthesis paradigm for reliable, high-performance computing systems. Given a specification, an approximate logic circuit is functionally equivalent to the given specification for a "significant" portion of the input space, but has smaller delay and power than a circuit implementation of the original specification. The contributions of this thesis include (i) a general theory of approximation and efficient algorithms for automated synthesis of approximations for unrestricted random logic circuits, (ii) logic design solutions based on approximate circuits to improve the reliability of designs with negligible performance penalty, and (iii) efficient decomposition algorithms based on approximate circuits to improve the performance of designs during logic synthesis. The thesis concludes with other potential applications of approximate circuits and identifies open problems in logic decomposition and approximate circuit synthesis.
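
    The notion of approximation used above can be made concrete with a small example (ours, not from the thesis): an approximate implementation agrees with the specification on most of the input space, and for small circuits the error rate can be quantified by exhaustive simulation.

        # Toy approximate circuit and its error rate (illustration only).
        from itertools import product

        def exact_spec(a, b, c):
            # Specification: carry-out of a full adder, maj(a, b, c).
            return (a & b) | (a & c) | (b & c)

        def approx_impl(a, b, c):
            # Approximation: drop input c; shallower and cheaper logic.
            return a & b

        inputs = list(product((0, 1), repeat=3))
        errors = sum(exact_spec(*x) != approx_impl(*x) for x in inputs)
        print(f"error rate: {errors}/{len(inputs)}")  # -> error rate: 2/8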

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-Comp and one competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. A total of 42 full and 8 short tool demo papers presented in these volumes were carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.
