19 research outputs found

    Plug & Test at System Level via Testable TLM Primitives

    Get PDF
    With the evolution of Electronic System Level (ESL) design methodologies, we are experiencing an extensive use of Transaction-Level Modeling (TLM). TLM is a high-level approach to modeling digital systems in which details of the communication among modules are separated from those of the implementation of the functional units. This paper represents a first step toward the automatic insertion of testing capabilities at the transaction level through the definition of testable TLM primitives. The use of testable TLM primitives should help designers easily obtain testable transaction-level descriptions, implementing what we call a "Plug & Test" design methodology. The proposed approach is intended to work with both hardware and software implementations. In particular, in this paper we focus on the design of a testable FIFO communication channel to show how designers are given the freedom of trading off complexity, testability levels, and cost.
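    For intuition only, here is a minimal Python sketch of the "testable primitive" idea: a FIFO channel that exposes its normal put/get transaction interface alongside a built-in test interface that logs transactions and checks ordering. The paper targets SystemC/TLM designs; the class and method names below are hypothetical and purely illustrative, not the paper's primitives.

```python
from collections import deque


class TestableFifo:
    """Behavioral sketch of a FIFO channel with built-in test observation,
    loosely mirroring the idea of a testable TLM primitive: the functional
    interface (put/get) is kept separate from the test interface
    (transaction log plus an ordering check)."""

    def __init__(self, depth):
        self.depth = depth
        self._data = deque()
        self.log = []          # test-side observation of every transaction

    # --- functional (transaction-level) interface ---
    def put(self, item):
        if len(self._data) >= self.depth:
            raise BufferError("FIFO full")
        self._data.append(item)
        self.log.append(("put", item))

    def get(self):
        if not self._data:
            raise BufferError("FIFO empty")
        item = self._data.popleft()
        self.log.append(("get", item))
        return item

    # --- test interface ---
    def check_order(self):
        """Replay the log and verify first-in/first-out ordering."""
        model = deque()
        for op, item in self.log:
            if op == "put":
                model.append(item)
            else:
                assert model.popleft() == item, "FIFO order violated"
        return True


if __name__ == "__main__":
    fifo = TestableFifo(depth=4)
    for v in (1, 2, 3):
        fifo.put(v)
    assert fifo.get() == 1
    assert fifo.check_order()
```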

    Test Generation Based on CLP

    Get PDF
    Functional ATPGs based on simulation are fast but, in general, they are unable to cover corner cases and cannot prove untestability. On the contrary, functional ATPGs exploiting formal methods, being exhaustive, cover corner cases, but they tend to suffer from the state explosion problem when adopted for verifying large designs. In this context, we have defined a functional ATPG that relies on the joint use of pseudo-deterministic simulation and Constraint Logic Programming (CLP) to generate high-quality test sequences for solving complex problems. Thus, the advantages of both simulation-based and static verification techniques are preserved, while their respective drawbacks are limited. In particular, CLP, a form of constraint programming in which logic programming is extended to include concepts from constraint satisfaction, is well suited to be used jointly with simulation. In fact, information learned during design exploration by simulation can be effectively exploited to guide the search of a CLP solver towards DUV areas not yet covered.

    The test generation procedure relies on CLP techniques in its different phases. The ATPG framework is composed of three functional ATPG engines working on three different models of the same DUV: the hardware description language (HDL) model of the DUV, a set of concurrent EFSMs extracted from the HDL description, and a set of logic constraints modeling the EFSMs. The EFSM paradigm has been selected since it allows a compact representation of the DUV state space that limits the state explosion problem typical of more traditional FSMs. The first engine is random-based, the second is transition-oriented, while the last is fault-oriented. Test generation is guided by transition coverage and fault coverage: 100% transition coverage is targeted as a necessary condition for fault detection, while the bit-coverage functional fault model is used to evaluate the effectiveness of the generated test patterns by measuring the related fault coverage.

    A random engine is first used to explore the DUV state space by performing a simulation-based random walk. This allows us to quickly fire easy-to-traverse (ETT) transitions and, consequently, to quickly cover easy-to-detect (ETD) faults. However, the majority of hard-to-traverse (HTT) transitions generally remain uncovered. Thus, a transition-oriented engine is applied to cover the remaining HTT transitions by exploiting a learning/backjumping-based strategy. The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated and which can be effectively integrated with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states. A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guard of the EFSM transitions selected to be traversed: given a transition of the SSEFSM, the solver is required to generate suitable values for the primary inputs (PIs) that enable the SSEFSM to move across that transition. Moreover, backjumping, also known as non-chronological backtracking, is a backtracking strategy that rolls back from an unsuccessful situation directly to the cause of the failure. Thus, the transition-oriented engine deterministically backjumps to the source of the failure when a transition, whose guard depends on previously set registers, cannot be traversed; next, it modifies the EFSM configuration to satisfy the condition on the registers and comes back to the target state to activate the transition.

    The transition-oriented engine generally allows us to achieve 100% transition coverage. However, 100% transition coverage does not guarantee that all DUV corner cases are explored, thus some hard-to-detect (HTD) faults can escape detection, preventing the achievement of 100% fault coverage. Therefore, the CLP-based fault-oriented engine is finally applied to focus on the remaining HTD faults. The CLP solver is used to deterministically search for sequences that propagate the HTD faults observed, but not detected, by the random and transition-oriented engines. The fault-oriented engine needs a CLP-based representation of the DUV and some searching functions to generate test sequences. The CLP-based representation is automatically derived from the S2EFSM models according to defined rules that follow the syntax of the ECLiPSe CLP solver. This is not a trivial task, since modeling the evolution in time of an EFSM by means of logic constraints is quite different from modeling the same behavior by means of a traditional hardware description language. First, the concept of time steps is introduced, required to model the SSEFSM evolution through time via CLP. Then, logical variables and constraints are modeled to represent the enabling functions and update functions of the SSEFSM. Formal tools that exhaustively search for a solution frequently run out of resources when the state space to be analyzed is too large; the same happens for the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that prune the search space and manage the complexity problem for the solver.
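    As a rough illustration of how the transition-oriented engine might use a solver, the Python sketch below models an EFSM transition with a guard over primary inputs and registers, and uses a bounded enumeration as a stand-in for the ECLiPSe CLP solver to find input values that enable the transition. The names, bit widths, and the guard itself are hypothetical, and the actual framework formulates this as a constraint model rather than enumerating.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, Dict


@dataclass
class Transition:
    src: str
    dst: str
    # guard(inputs, regs) -> bool : enabling function over PIs and registers
    guard: Callable[[Dict[str, int], Dict[str, int]], bool]
    # update(inputs, regs) -> new regs : update function
    update: Callable[[Dict[str, int], Dict[str, int]], Dict[str, int]]


def solve_guard(transition, regs, pi_names, pi_width=4):
    """Stand-in for the CLP solver: deterministically search for primary-input
    values that satisfy the guard of the selected transition, given the current
    register configuration."""
    for values in product(range(2 ** pi_width), repeat=len(pi_names)):
        inputs = dict(zip(pi_names, values))
        if transition.guard(inputs, regs):
            return inputs
    return None  # guard unsatisfiable under this register configuration


if __name__ == "__main__":
    # Hard-to-traverse transition: fires only when PI 'a' equals register 'r' + 7 (mod 16)
    t = Transition(
        src="S1", dst="S2",
        guard=lambda i, r: i["a"] == (r["r"] + 7) % 16,
        update=lambda i, r: {"r": i["a"]},
    )
    regs = {"r": 3}
    vec = solve_guard(t, regs, pi_names=["a"])
    print("test vector enabling S1->S2:", vec)   # {'a': 10}
```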

    A SAT Based Test Generation Method for Delay Fault Testing of Macro Based Circuits

    Full text link

    Formal Verification and In-Situ Test of Analog and Mixed-Signal Circuits

    Get PDF
    As CMOS technologies continuously scale down, designing robust analog and mixed-signal (AMS) circuits becomes increasingly difficult. Consequently, there is a pressing need for AMS design checking techniques, more specifically design verification and design for testability (DfT). The purpose of verification is to ensure that the performance of an AMS design meets its specification under process, voltage and temperature (PVT) variations and different working conditions, while DfT techniques aim at embedding testability into the design by adding auxiliary circuitry for testing purposes. This dissertation focuses on improving the robustness of AMS designs in highly scaled technologies by developing novel formal verification and in-situ test techniques. Compared with conventional AMS verification, which relies more on heuristically chosen simulations, formal verification provides a mathematically rigorous way of checking the target design property. A formal verification framework is proposed that incorporates nonlinear SMT solving techniques and simulation exploration to efficiently verify the dynamic properties of AMS designs. A Bayesian inference-based technique is applied to dynamically trade off the costs of simulation and nonlinear SMT solving. The feasibility and efficacy of the proposed methodology are demonstrated on the verification of the lock-time specification of a charge-pump PLL. The powerful and low-cost digital processing capabilities of today's CMOS technologies are enabling many new in-situ test schemes in a mixed-signal environment. First, a novel two-level GRO-PVDL structure is proposed for on-chip jitter testing of high-speed, high-resolution applications, with a gated ring oscillator (GRO) at the first level to provide a coarse measurement and a Vernier-style structure at the second level to measure the residue from the first level with fine resolution. With quantization noise shaping, an effective resolution of 0.8 ps can be achieved in a 90 nm CMOS technology. Second, the reconfigurability of recent all-digital PLL designs is exploited to provide in-situ output-jitter test and diagnosis capabilities under multiple parametric variations of key analog building blocks. As an extension, an in-situ test scheme is proposed to provide online testing for all-digital PLL based polar transmitters.
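    To make the two-level measurement idea concrete, the sketch below computes a time interval as a coarse count of gated-ring-oscillator periods plus a fine Vernier residue, where the fine resolution is the difference between the two delay-line steps. All numeric values (GRO period, delay steps) are illustrative assumptions, not figures from the dissertation, and the noise-shaping behavior of the GRO is not modeled.

```python
def two_level_tdc(interval_ps, gro_period_ps=50.0, d_slow_ps=6.0, d_fast_ps=5.2):
    """Sketch of a two-level GRO + Vernier time measurement.
    First level: a gated ring oscillator counts whole periods (coarse).
    Second level: a Vernier delay line resolves the residue with a
    resolution equal to the difference of its two delays."""
    # Coarse: number of full GRO periods contained in the interval
    coarse_count = int(interval_ps // gro_period_ps)
    residue = interval_ps - coarse_count * gro_period_ps

    # Fine: the start edge propagates through the slow chain and the stop
    # edge through the fast chain; they align after residue / (d_slow - d_fast) stages.
    vernier_lsb = d_slow_ps - d_fast_ps          # effective fine resolution
    fine_count = int(residue // vernier_lsb)

    measured = coarse_count * gro_period_ps + fine_count * vernier_lsb
    return measured, vernier_lsb


if __name__ == "__main__":
    measured, lsb = two_level_tdc(interval_ps=137.3)
    print("measured interval: %.1f ps (fine resolution %.1f ps)" % (measured, lsb))
```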

    Doctor of Philosophy

    Get PDF
    With the spread of the internet and mobile devices, transferring information safely and securely has become more important than ever. Finite fields have widespread applications in such domains, including cryptography and error-correcting codes, among many others. In most finite field applications, the field size - and therefore the bit-width of the operands - can be very large. The high complexity of arithmetic operations over such large fields requires circuits to be (semi-)custom designed. This raises the potential for errors/bugs in the implementation, which can be maliciously exploited and can compromise the security of such systems. Formal verification of finite field arithmetic circuits has therefore become an imperative. This dissertation targets the problem of formal verification of hardware implementations of combinational arithmetic circuits over finite fields of the type F2k. Two specific problems are addressed: i) verifying the correctness of a custom-designed arithmetic circuit implementation against a given word-level polynomial specification over F2k; and ii) gate-level equivalence checking of two different arithmetic circuit implementations. This dissertation proposes polynomial abstractions over finite fields to model and represent the circuit constraints. Subsequently, decision procedures based on modern computer algebra techniques - notably, Gröbner basis-related theory and technology - are engineered to solve the verification problem efficiently. The arithmetic circuit is modeled as a polynomial system in the ring F2k[x1, x2, ..., xd], and computer algebra-based results (Hilbert's Nullstellensatz) over finite fields are exploited for verification. Using our approach, experiments are performed on a variety of custom-designed finite field arithmetic benchmark circuits. The results are also compared against contemporary methods based on SAT and SMT solvers, BDDs, and AIGs. Our tools can verify the correctness of, and detect bugs in, up to 163-bit circuits in F2163, whereas contemporary approaches are infeasible beyond 48-bit circuits.
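    As a toy illustration of checking an implementation against a word-level polynomial specification over F2k, the sketch below compares a shift-and-reduce multiplier datapath with the specification for GF(2^4) under the irreducible polynomial x^4 + x + 1. Note the deliberate swap in technique: the dissertation's decision procedure is built on Gröbner-basis/Nullstellensatz reasoning and scales to 163-bit circuits, whereas this sketch uses exhaustive simulation, which is feasible only for tiny fields; both multipliers here are illustrative stand-ins, not the benchmark circuits.

```python
def gf_mul(a, b, k=4, irred=0b10011):
    """Word-level specification: multiply a and b in F_2^k as polynomials
    over GF(2), reduced modulo the irreducible polynomial 'irred'
    (here x^4 + x + 1 for k = 4)."""
    acc = 0
    for i in range(k):
        if (b >> i) & 1:
            acc ^= a << i                      # GF(2) addition is XOR
    # Reduce modulo the irreducible polynomial
    for i in range(2 * k - 2, k - 1, -1):
        if (acc >> i) & 1:
            acc ^= irred << (i - k)
    return acc


def gate_level_mul(a, b, k=4, irred=0b10011):
    """Stand-in for a custom-designed datapath: an interleaved shift-and-reduce
    multiplier built from conditional XOR (AND/XOR gate) steps."""
    res = 0
    for _ in range(k):
        if b & 1:
            res ^= a                           # conditional accumulate
        b >>= 1
        carry = (a >> (k - 1)) & 1
        a = (a << 1) & ((1 << k) - 1)
        if carry:
            a ^= irred & ((1 << k) - 1)        # fold the reduction term back in
    return res


if __name__ == "__main__":
    k = 4
    for a in range(1 << k):
        for b in range(1 << k):
            assert gf_mul(a, b, k) == gate_level_mul(a, b, k), (a, b)
    print("implementation matches the word-level spec for all inputs, k =", k)
```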

    Improvement of hardware reliability with aging monitors

    Get PDF

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France

    Get PDF

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Encoding problems in logic synthesis

    Get PDF