
    Quality and Quantity in Robustness-Checking Using Formal Techniques

    Tolerating transient faults is one of the main challenges of continued technology scaling. Various design-level techniques are available to catch and handle transient faults, e.g., Triple Modular Redundancy. An important but often missing step is to verify the implementation of those techniques, since the implementation itself might be buggy. This thesis focuses on formally verifying digital circuits with respect to fault-tolerance aspects. It considers transient faults and checks whether such faults can influence the output behavior of sequential circuits under any scenario. As a result, the designer is pointed directly to the critical parts of the design and obtains a proof of the absence of faulty behavior for the non-critical parts. The focus of the verification is completeness of the analysis. Three issues need to be addressed adequately: 1) all input stimuli, 2) all possible transient faults, and 3) all propagation paths, which can be exponentially long with respect to the number of state bits. The three issues are handled by different engines. A tool called RobuCheck has been implemented and evaluated on academic benchmarks from ITC'99 and industrial benchmarks from IBM.
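
    The central question formalized here (can a transient fault change the observable output of a sequential circuit under some input scenario?) can be pictured with a toy, simulation-based fault-injection loop. The sketch below only illustrates that question and is not one of the formal engines behind RobuCheck; the circuit, the single-bit-flip fault model, and the bounded input depth are assumptions made for the example.

```python
from itertools import product

# Toy sequential circuit: a 2-bit accumulator whose output flags an overflow.
# The design and fault model are made up for this illustration.
def step(state, inp):
    total = state + inp
    return total & 0b11, int(total > 0b11)   # (next state, output)

def run(inputs, flip_at=None, flip_bit=0):
    """Simulate; optionally flip one state bit at cycle flip_at (transient fault)."""
    state, outputs = 0, []
    for t, inp in enumerate(inputs):
        if t == flip_at:
            state ^= 1 << flip_bit           # inject a single bit flip
        state, out = step(state, inp)
        outputs.append(out)
    return outputs

# Exhaustively check whether any single flip changes the output trace for any
# bounded input sequence (the thesis does this symbolically and completely).
DEPTH = 4
for inputs in product(range(4), repeat=DEPTH):
    golden = run(inputs)
    for t, bit in product(range(DEPTH), range(2)):
        if run(inputs, flip_at=t, flip_bit=bit) != golden:
            print(f"fault (cycle {t}, bit {bit}) is visible for inputs {inputs}")
            break
    else:
        continue
    break
```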

    Doctor of Philosophy

    With the spread of the internet and mobile devices, transferring information safely and securely has become more important than ever. Finite fields have widespread applications in such domains, for example in cryptography and error-correcting codes. In most finite field applications, the field size - and therefore the bit-width of the operands - can be very large. The high complexity of arithmetic operations over such large fields requires circuits to be (semi-)custom designed. This raises the potential for errors/bugs in the implementation, which can be maliciously exploited and can compromise the security of such systems. Formal verification of finite field arithmetic circuits has therefore become an imperative. This dissertation targets the problem of formal verification of hardware implementations of combinational arithmetic circuits over finite fields of the type F_{2^k}. Two specific problems are addressed: i) verifying the correctness of a custom-designed arithmetic circuit implementation against a given word-level polynomial specification over F_{2^k}; and ii) gate-level equivalence checking of two different arithmetic circuit implementations. This dissertation proposes polynomial abstractions over finite fields to model and represent the circuit constraints. Subsequently, decision procedures based on modern computer algebra techniques - notably, Gröbner bases-related theory and technology - are engineered to solve the verification problem efficiently. The arithmetic circuit is modeled as a polynomial system in the ring F_{2^k}[x_1, x_2, ..., x_d], and computer algebra-based results (Hilbert's Nullstellensatz) over finite fields are exploited for verification. Using our approach, experiments are performed on a variety of custom-designed finite field arithmetic benchmark circuits. The results are also compared against contemporary methods based on SAT and SMT solvers, BDDs, and AIG-based methods. Our tools can verify the correctness of, and detect bugs in, up to 163-bit circuits over F_{2^163}, whereas contemporary approaches are infeasible beyond 48-bit circuits.
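
    As a concrete, if tiny, instance of the first verification problem, the sketch below checks a gate-level GF(2^2) multiplier against its word-level polynomial specification. It uses brute-force enumeration rather than the Gröbner-basis decision procedure developed in the dissertation, and the field representation and gate equations are assumptions chosen for the example.

```python
from itertools import product

# GF(2^2) with irreducible polynomial x^2 + x + 1 (chosen for this example).
# An element a1*x + a0 is encoded as the 2-bit integer (a1 << 1) | a0.

def spec_mul(a, b):
    """Word-level specification: carry-less multiply, reduced modulo x^2 + x + 1."""
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i          # polynomial (carry-less) multiplication
    if p & 0b100:                # reduce using x^2 = x + 1
        p ^= 0b111
    return p

def gate_mul(a, b):
    """Gate-level implementation built from AND and XOR gates."""
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    c0 = (a0 & b0) ^ (a1 & b1)
    c1 = (a1 & b0) ^ (a0 & b1) ^ (a1 & b1)
    return (c1 << 1) | c0

# Exhaustive equivalence check, feasible only for tiny fields; the dissertation's
# computer-algebra approach handles this symbolically up to fields like F_{2^163}.
assert all(spec_mul(a, b) == gate_mul(a, b) for a, b in product(range(4), repeat=2))
print("gate-level multiplier matches the word-level specification over GF(2^2)")
```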

    Design, Analysis and Test of Logic Circuits under Uncertainty.

    Integrated circuits are increasingly susceptible to uncertainty caused by soft errors, inherently probabilistic devices, and manufacturing variability. As device technologies scale, these effects become detrimental to circuit reliability. In order to address this, we develop methods for analyzing, designing, and testing circuits subject to probabilistic effects. Our main contributions are: 1) a fast soft-error rate (SER) analyzer that uses functional-simulation signatures to capture error effects, 2) novel design techniques that improve reliability with little area and performance overhead, 3) a matrix-based reliability-analysis framework that captures many types of probabilistic faults, and 4) test-generation/compaction methods aimed at probabilistic faults in logic circuits. SER analysis must account for the main error-masking mechanisms in ICs: logic, timing, and electrical masking. We relate logic masking to node testability of the circuit and utilize functional-simulation signatures, i.e., partial truth tables, to efficiently compute testability (signal probability and observability). To account for timing masking, we compute error-latching windows (ELWs) from timing analysis information. Electrical masking is incorporated into our estimates through derating factors for gate error probabilities. The SER of a circuit is computed by combining the effects of all three masking mechanisms within our SER analyzer called AnSER. Using AnSER, we develop several low-overhead techniques that increase reliability, including: 1) an SER-aware design method that uses redundancy already present within the circuit, 2) a technique that resynthesizes small logic windows to improve area and reliability, and 3) a post-placement gate-relocation technique that increases timing masking by decreasing ELWs. We develop the probabilistic transfer matrix (PTM) modeling framework to analyze effects beyond soft errors. PTMs are compressed into algebraic decision diagrams (ADDs) to improve computational efficiency. Several ADD algorithms are developed to extract reliability and error-susceptibility information from PTMs representing circuits. We propose new algorithms for circuit testing under probabilistic faults, which require a reformulation of existing test techniques. For instance, a test vector may need to be repeated many times to detect a fault. Also, different vectors detect the same fault with different probabilities. We develop test generation methods that account for these differences, and integer linear programming (ILP) formulations to optimize test sets. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61584/1/smita_1.pd
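
    The signature-based view of logic masking can be pictured with a small sketch: a functional-simulation signature is a bit-vector recording a node's value over sampled input vectors, from which signal probability and, by resimulating with the node forced to the opposite value, observability can be estimated. This is only a toy restatement of the idea behind such analyzers; the circuit, sample count, and node names are assumptions, not values or code from the thesis.

```python
import random

random.seed(0)

# Toy combinational circuit: out = (a AND b) OR (NOT c). All names are illustrative.
def evaluate(a, b, c):
    n1 = a & b
    n2 = 1 - c
    return {"n1": n1, "n2": n2, "out": n1 | n2}

# Functional-simulation signatures: each node's value over K sampled input vectors.
K = 1 << 12
samples = [tuple(random.getrandbits(1) for _ in range(3)) for _ in range(K)]
signatures = {name: [evaluate(*s)[name] for s in samples]
              for name in ("n1", "n2", "out")}

def signal_probability(node):
    return sum(signatures[node]) / K

def observability(node):
    """Fraction of samples on which flipping the node's value flips the output."""
    flipped = 0
    for idx, (a, b, c) in enumerate(samples):
        vals = evaluate(a, b, c)
        vals[node] ^= 1                        # force the opposite value at the node
        out = vals["n1"] | vals["n2"]          # re-evaluate the fanout cone
        flipped += (out != signatures["out"][idx])
    return flipped / K

# A node's SER contribution grows with how often an error injected there is observed;
# per-gate error probabilities and timing/electrical derating would be applied on top.
for node in ("n1", "n2"):
    print(node, "signal prob =", round(signal_probability(node), 3),
          "observability =", round(observability(node), 3))
```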

    Exploiting Satisfiability Solvers for Efficient Logic Synthesis

    Logic synthesis is an important part of electronic design automation (EDA) flows, which enable the implementation of digital systems. As the design size and complexity increase, the data structures and algorithms for logic synthesis must adapt and improve in order to keep pace and to maintain acceptable runtime and high-quality results. Large circuits were often represented using binary decision diagrams (BDDs) that were rapidly adopted by logic synthesis tools beginning in the 1980s. Nowadays, BDD-based algorithms are still enhanced, but the possibilities for improvement are somewhat saturated after some 35 years of research. Alternatively, the first EDA applications that exploit Boolean satisfiability (SAT) were developed in the 1990s. Despite the worst-case exponential runtime of SAT solvers, rapid progress in their performance enabled the creation of efficient SAT-based algorithms. Yet, logic synthesis started using SAT solvers more widely only in the last decade. Therefore, thorough research is still required both for exploiting SAT solvers and for encoding logic synthesis problems into SAT. Our main goal in this thesis is to facilitate and promote the further integration of SAT solvers into EDA by proposing and evaluating novel SAT-based algorithms that can be used as building blocks in logic synthesis tools. First, we propose a rapid algorithm for LEXSAT, which generates satisfying assignments in lexicographic order. We show that LEXSAT can bring canonicity, which guarantees the generation of unique results, when using SAT solvers in EDA applications. Next, we present a new SAT-based algorithm that progressively generates irredundant sums of products (SOPs), which still play a crucial role in many logic synthesis tools. Using LEXSAT, for the first time, we can generate canonical SAT-based SOPs that, much like BDD-based SOPs, are unique for a given function and variable order but could relax canonicity in order to improve speed and scalability. Unlike BDDs, due to its progressive nature, our algorithm can generate partial SOPs for applications that can work with incomplete circuit functionality. It is noteworthy that both LEXSAT and the SAT-based SOPs are applicable beyond logic synthesis and EDA. Finally, we focus on resubstitution, which reimplements a given Boolean function as a new function that depends on a set of existing functions called divisors. We propose the carving interpolation algorithm that, unlike the traditional Craig interpolation, forces the use of a specific divisor as an input of the new function. This is particularly useful for global circuit restructuring and for some synthesis-based engineering change order (ECO) algorithms. Furthermore, we compare two existing SAT-based methodologies for resubstitution, which are used for post-mapping logic optimisation. The first methodology combines SAT-based functional dependency checking and Craig interpolation that are also used for our carving interpolation; the second methodology is based on cube enumeration and is similar to the SAT-based SOP generation. The initial implementations of our novel SAT-based algorithms offer either better performance or new features, or both, compared to their state-of-the-art versions. As the results indicate, a further thorough development of SAT-based algorithms for logic synthesis, like the one performed for BDDs in the past, can help overcome existing limitations and keep up with growing designs and design complexity.
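
    LEXSAT fixes variables one at a time in a chosen order, preferring value 0 and keeping that choice only if the formula remains satisfiable under the literals fixed so far. The sketch below illustrates this scheme; the SAT oracle is a brute-force enumeration so the snippet stays self-contained, whereas a practical implementation would call an incremental SAT solver under assumptions, and the example CNF is an arbitrary choice, not one from the thesis.

```python
from itertools import product

def brute_force_sat(clauses, num_vars, assumptions=()):
    """Toy SAT oracle (a stand-in for an incremental solver queried with assumptions)."""
    fixed = {abs(l): l > 0 for l in assumptions}
    for bits in product((False, True), repeat=num_vars):
        assign = {v + 1: bits[v] for v in range(num_vars)}
        if any(assign[v] != val for v, val in fixed.items()):
            continue
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return assign
    return None

def lexsat(clauses, num_vars):
    """Lexicographically smallest satisfying assignment (x1 most significant, 0 before 1)."""
    if brute_force_sat(clauses, num_vars) is None:
        return None
    fixed = []
    for v in range(1, num_vars + 1):
        # Prefer 0; keep it only if the formula stays satisfiable under the prefix.
        if brute_force_sat(clauses, num_vars, fixed + [-v]) is not None:
            fixed.append(-v)
        else:
            fixed.append(v)
    return fixed

# Example CNF (made up): (x1 or x2) and (not x2 or x3) and (x1 or not x3)
cnf = [[1, 2], [-2, 3], [1, -3]]
print(lexsat(cnf, 3))   # -> [1, -2, -3], i.e. x1=1, x2=0, x3=0
```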

    Speeding up hardware verification by automated data path scaling

    The main obstacle for formal hardware verification of digital circuits is formed by ever increasing design sizes and circuit complexity. Therefore, reducing runtimes and the amount of memory needed for verification computations is a major requirement in order to successfully apply formal verification techniques to industrial circuit designs. This thesis presents a new abstraction technique for Bounded Model Checking of digital circuits. The proposed technique reduces the sizes of design models before verification by implementing a fully automated scaling of data path widths. Digital circuit designs are usually given as Register-Transfer-Level (RTL) specifications, but most industrial hardware verification tools are based on bit-level methods such as SAT or BDD techniques. RTL specifications contain explicit structural information about high-level data flow and about the widths of data path signals. We introduce a one-to-one abstraction technique for RTL property checking which exploits such high-level information. Given an RTL circuit description, a scaled-down abstract model is computed in which signal widths are reduced with respect to an additionally given formal property. For each data path and each signal accessing the data path, the original width n is reduced to the minimum width m <= n such that the property which is to be checked holds for the scaled model if and only if it holds for the original design. We furthermore provide a technique which, if the property does not hold, computes counterexamples for the original design from counterexamples found on the reduced model. Thus, the verification task can be completely carried out on the scaled-down version of the design; false negatives cannot occur. The complexity of SAT and BDD based model checking often depends on the number of bits occurring in a design and therefore depends on the widths of the data path signals. Linear signal width reductions result in exponentially smaller state spaces. Hence, automated data path scaling can significantly speed up the runtimes of verification tools and allows larger design sizes to be handled. Experimental results on large industrial circuits demonstrate the applicability and efficiency of the proposed method.
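
    The payoff of scaling data path widths can be made concrete with a toy bounded check: the sketch below exhaustively verifies a width-parameterized property of a small RTL-style design at a reduced width and at a larger width, and the number of cases enumerated grows exponentially with the width while the verdict stays the same. The design, the property, and the choice of widths are assumptions for illustration only; the thesis derives a provably sufficient minimum width automatically from the RTL description and the property.

```python
from itertools import product
import time

# Toy RTL: register r stores the previous input; flag eq says "current input equals r".
def step(r, inp, width):
    mask = (1 << width) - 1
    return inp & mask, int((inp & mask) == r)    # (next r, eq)

# Property P: whenever two consecutive inputs are equal, eq is raised afterwards.
def bmc(width, depth=3):
    """Exhaustive bounded check of P for the given data path width."""
    sequences = 0
    for inputs in product(range(1 << width), repeat=depth):
        r, prev = 0, None
        for inp in inputs:
            r, eq = step(r, inp, width)
            if prev is not None and inp == prev and eq != 1:
                return False, sequences          # counterexample found
            prev = inp
        sequences += 1
    return True, sequences

for width in (2, 6):
    start = time.perf_counter()
    holds, cases = bmc(width)
    print(f"width {width}: property holds = {holds}, "
          f"{cases} input sequences checked in {time.perf_counter() - start:.3f}s")
# Same verdict at both widths; the scaled-down model is exponentially cheaper to check.
```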

    Design and application of reconfigurable circuits and systems


    A Functional Verification Methodology for an Improved Coverage of System-on-Chips

    The increasing popularity of System-on-Chip (SoC) circuits results in many new design challenges. One major challenge is to ensure the functional correctness of such complicated circuits. Functional verification is a verification technique used to verify the functional correctness of SoCs. Coverage Directed Test Generation (CDTG) is an essential part of functional verification, where the objective is to generate input stimuli that maximize the coverage of a design. Coverage helps to determine how well the input stimuli verified the design under verification. CDTG techniques analyze coverage results and adapt the input stimulus generation process to improve the coverage. One of the important components of CDTG-based tools is the constraint solver. The time efficiency of the verification process depends on the efficiency of the solver. However, the constraint solvers associated with CDTG tools require a large amount of memory and time to generate input stimuli for SoCs, and they cannot generate solutions that are evenly distributed in the search space, which is needed to attain the required coverage. The aim of this thesis is to provide a practical framework that enables the generation of evenly distributed input stimuli. A basic feature of the search space (data set) is that it contains k sub-populations or clusters. Partitioning the search space into clusters and generating solutions from the partitions can improve the evenness of the solutions generated by the solver. Hence one of our main contributions is a novel domain partitioning algorithm. The domain partitioning algorithm relies on solutions generated by a consistency search algorithm developed for our purpose. The number of partitions (required by the domain partitioning algorithm) is determined by using an algorithm which can find the optimal number of clusters present in a data set. To demonstrate the effectiveness of our approach, we apply our methodology on Constraint Satisfaction Problems (CSPs) and some real-life applications.
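
    The intuition behind partition-driven stimulus generation can be shown with a small sketch: enumerate or sample solutions of a constraint, group them into k partitions, and then draw stimuli round-robin from the partitions instead of letting a solver repeatedly return nearby solutions. The constraint, the simple 1-D k-means-style clustering, and the fixed k below are assumptions chosen for the example; they stand in for, but are not, the dissertation's domain partitioning, consistency search, and cluster-count algorithms.

```python
import random

random.seed(1)

# Toy constraint over an 8-bit variable x: x % 3 == 0 or x > 200 (made up).
def satisfies(x):
    return x % 3 == 0 or x > 200

solutions = [x for x in range(256) if satisfies(x)]

def kmeans_1d(points, k, iters=20):
    """Very small 1-D k-means used to partition the solution space into k clusters."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) // len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def generate_stimuli(n, k=4):
    """Draw stimuli round-robin from the partitions for a more even spread."""
    clusters = [c for c in kmeans_1d(solutions, k) if c]
    return [random.choice(clusters[i % len(clusters)]) for i in range(n)]

print(generate_stimuli(8))
```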