On applying the set covering model to reseeding
The functional BIST approach is a relatively new BIST technique based on exploiting embedded system functionality to generate deterministic test patterns during BIST. The approach takes advantage of two well-known testing techniques: the arithmetic BIST approach and the reseeding method. The main contribution of the present paper is the formulation of the optimal reseeding computation as an instance of the set covering problem. The proposed approach guarantees high flexibility, is applicable to different functional modules, and, in general, provides a more efficient test set encoding than previous techniques. In addition, the approach shortens the computation time, allows a better exploitation of the tradeoff between area overhead and global test length, and scales to larger circuits.
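To make the set covering connection concrete, here is a minimal sketch (in Python, with made-up data) of the greedy heuristic commonly used for set covering: each candidate seed covers the subset of deterministic patterns it can be expanded into, and seeds are picked until every pattern is covered. The names and coverage sets are illustrative, not taken from the paper.

    # Greedy heuristic for set covering: repeatedly pick the seed that
    # covers the most still-uncovered patterns. Data below is made up.
    def greedy_seed_selection(patterns, seed_coverage):
        """patterns: set of pattern ids to cover.
        seed_coverage: dict mapping seed id -> set of pattern ids it covers."""
        uncovered = set(patterns)
        chosen = []
        while uncovered:
            best = max(seed_coverage, key=lambda s: len(seed_coverage[s] & uncovered))
            gained = seed_coverage[best] & uncovered
            if not gained:
                raise ValueError("remaining patterns cannot be covered by any seed")
            chosen.append(best)
            uncovered -= gained
        return chosen

    # Toy instance: three candidate seeds covering four deterministic patterns.
    seeds = {"s0": {0, 1}, "s1": {1, 2, 3}, "s2": {3}}
    print(greedy_seed_selection({0, 1, 2, 3}, seeds))  # -> ['s1', 's0']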
Test Generation Based on CLP
Functional ATPGs based on simulation are fast, but they are generally unable to cover corner cases and cannot prove untestability. On the contrary, functional ATPGs exploiting formal methods, being exhaustive, cover corner cases, but they tend to suffer from the state explosion problem when adopted for verifying large designs. In this context, we have defined a functional ATPG that relies on the joint use of pseudo-deterministic simulation and Constraint Logic Programming (CLP) to generate high-quality test sequences for solving complex problems. Thus, the advantages of both simulation-based and static verification techniques are preserved, while their respective drawbacks are limited. In particular, CLP, a form of constraint programming in which logic programming is extended to include concepts from constraint satisfaction, is well suited to being used jointly with simulation. In fact, information learned during design exploration by simulation can be effectively exploited to guide the search of a CLP solver toward DUV areas not yet covered. The test generation procedure relies on CLP techniques in its different phases.
The ATPG framework is composed of three functional
ATPG engines working on three different models of the
same DUV: the hardware description language (HDL)
model of the DUV, a set of concurrent EFSMs extracted
from the HDL description, and a set of logic constraints
modeling the EFSMs. The EFSM paradigm has been selected
since it allows a compact representation of the DUV
state space that limits the state explosion problem typical
of more traditional FSMs. The first engine is random-based, the second is transition-oriented, and the last is fault-oriented.
The test generation is guided by means of transition coverage and fault coverage. In particular, 100% transition
coverage is desired as a necessary condition for fault
detection, while the bit coverage functional fault model
is used to evaluate the effectiveness of the generated test
patterns by measuring the related fault coverage.
A random engine is first used to explore the DUV state
space by performing a simulation-based random walk. This
allows us to quickly fire easy-to-traverse (ETT) transitions
and, consequently, to quickly cover easy-to-detect (ETD)
faults. However, the majority of hard-to-traverse (HTT) transitions generally remain uncovered.
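As an illustration of the random engine described above, the following sketch performs a simulation-based random walk over an EFSM and records which transitions fire. The Transition class, guards, and update functions are invented for the example.

    import random

    class Transition:
        """One EFSM transition: guard over inputs/registers, update on fire."""
        def __init__(self, src, dst, guard, update):
            self.src, self.dst, self.guard, self.update = src, dst, guard, update

    def random_walk(transitions, state, regs, gen_inputs, steps=1000):
        covered = set()
        for _ in range(steps):
            pis = gen_inputs()  # random primary-input assignment
            for i, t in enumerate(transitions):
                if t.src == state and t.guard(pis, regs):
                    t.update(pis, regs)
                    covered.add(i)
                    state = t.dst
                    break
        return covered

    # Toy EFSM: transition 0 is easy to traverse (ETT); transition 1 is hard
    # (HTT), since its guard holds for only 1 of 256 input values.
    ts = [Transition("A", "B", lambda p, r: True, lambda p, r: r.update(x=p["x"])),
          Transition("B", "A", lambda p, r: p["x"] == 42, lambda p, r: None)]
    print(random_walk(ts, "A", {"x": 0}, lambda: {"x": random.randrange(256)}))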
Thus, a transition-oriented engine is applied to
cover the remaining HTT transitions by exploiting a
learning/backjumping-based strategy.
The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated, and which can be effectively integrated with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states. A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guard of the EFSM transitions selected to be traversed. Given a transition of the SSEFSM, the solver is required to generate suitable values for the primary inputs (PIs) that enable the SSEFSM to move across that transition.
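The work pairs the ATPG with the ECLiPSe CLP solver; purely as a stand-in illustration of this step (finding PI values that satisfy a selected transition's guard under the current register configuration), here is a sketch using the z3 SMT solver, with a made-up guard and bit-widths.

    from z3 import And, BitVec, Solver, sat

    def solve_guard():
        a, b = BitVec("pi_a", 8), BitVec("pi_b", 8)  # primary inputs
        reg = BitVec("reg", 8)                       # previously set register
        s = Solver()
        s.add(reg == 7)                  # current EFSM configuration
        s.add(And(a + reg > 40, b < a))  # guard of the transition to traverse
        if s.check() == sat:
            m = s.model()
            return {"pi_a": m.eval(a, model_completion=True).as_long(),
                    "pi_b": m.eval(b, model_completion=True).as_long()}
        return None  # guard unsatisfiable in this configuration

    print(solve_guard())  # one satisfying PI assignment enabling the transition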
Moreover, backjumping, also known as nonchronological backtracking, is a special kind of backtracking strategy that rolls back from an unsuccessful situation directly to the cause of the failure. Thus, the transition-oriented engine deterministically backjumps to the source of the failure when a transition whose guard depends on previously set registers cannot be traversed. It then modifies the EFSM configuration to satisfy the condition on the registers and comes back to the target state to activate the transition.
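The following minimal sketch captures the backjumping idea in isolation (the trace format and register names are invented): on a guard failure, jump directly back to the step that last wrote a register the guard depends on, skipping the intermediate steps that chronological backtracking would revisit one at a time.

    def backjump_target(trace, guard_regs):
        """trace: list of (step, registers_written_at_that_step).
        Returns the latest step that wrote any register the failing guard
        depends on, i.e. the actual cause of the failure."""
        for step, written in reversed(trace):
            if written & guard_regs:
                return step
        return 0  # no recorded write: re-decide from the initial state

    trace = [(0, {"mode"}), (1, {"cnt"}), (2, set()), (3, {"cnt"})]
    print(backjump_target(trace, {"mode"}))  # -> 0, skipping steps 3, 2 and 1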
The transition-oriented engine generally allows us to achieve 100% transition coverage. However, 100% transition coverage does not guarantee that all DUV corner cases are explored; thus, some hard-to-detect (HTD) faults can escape detection, preventing the achievement of 100% fault coverage. Therefore, the CLP-based fault-oriented engine is finally applied to focus on the remaining HTD faults.
The CLP solver is used to deterministically search for sequences that propagate the HTD faults observed, but not detected, by the random and transition-oriented engines. The fault-oriented engine needs a CLP-based representation of the DUV and some search functions to generate test sequences. The CLP-based representation is automatically derived from the SSEFSM models according to the defined rules, which follow the syntax of the ECLiPSe CLP solver. This is not a trivial task, since modeling the evolution in time of an EFSM by using logic constraints differs substantially from modeling the same behavior by means of a traditional hardware description language. First, the concept of time steps is introduced, which is required to model the SSEFSM evolution over time via CLP. Then, logical variables and constraints are defined to represent the enabling functions and update functions of the SSEFSM.
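A rough sketch of the time-step idea, again using z3 rather than the ECLiPSe encoding actually defined in the work: one constraint variable is created per register and per input for each time step, the update function links consecutive steps, and asking for a target configuration at the final step yields an enabling input sequence. The register, guard, and target below are invented.

    from z3 import BitVec, If, Solver, sat

    def unroll(depth):
        s = Solver()
        cnt = [BitVec(f"cnt_{k}", 8) for k in range(depth + 1)]  # register per step
        pi = [BitVec(f"pi_{k}", 8) for k in range(depth)]        # input per step
        s.add(cnt[0] == 0)  # reset configuration
        for k in range(depth):
            # Update function: increment when the (made-up) guard pi > 10 holds.
            s.add(cnt[k + 1] == If(pi[k] > 10, cnt[k] + 1, cnt[k]))
        s.add(cnt[depth] == 3)  # target configuration to reach
        if s.check() == sat:
            m = s.model()
            return [m.eval(p, model_completion=True).as_long() for p in pi]
        return None  # target unreachable within `depth` steps

    print(unroll(depth=5))  # an input sequence driving cnt from 0 to 3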
Formal tools that exhaustively search for a solution frequently run out of resources when the state space to be analyzed is too large. The same happens for the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that prune the search space and manage the complexity of the problem for the solver.
Testability considerations for implementing an embedded memory subsystem
There are a number of testability considerations for VLSI design, but test coverage, test time, accuracy of test patterns, and correctness of design information for DFD (design for debug) are the most important ones in designs with embedded memories. The goal of DFT (design-for-test) is to achieve zero defects. When it comes to the memory subsystem in SOCs (systems on chips), many flavors of memory BIST (built-in self-test) are able to achieve high test coverage in a memory, but often no proper attention is given to the memory interface logic (shadow logic). Functional testing and BIST are the most prevalent tests for this logic, but functional testing is impractical for complicated SOC designs. As a result, industry has widely used at-speed scan testing to detect delay-induced defects. Compared with functional testing, scan-based testing for delay faults reduces overall pattern generation complexity and cost by enhancing both the controllability and observability of flip-flops. However, without proper modeling of memory, Xs are generated from memories. Also, when the design has chip compression logic, the number of ATPG patterns increases significantly due to Xs from memories. In this dissertation, a register-based testing method and X-prevention logic are presented to tackle these problems.
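The dissertation's X-prevention logic is not detailed in this abstract; as a generic illustration of the idea, the sketch below models a blocking gate that forces a memory output to a known constant in test mode, so no X from the memory reaches the scan logic. Three-valued signals are modeled as the strings '0', '1', and 'X'.

    def gated_output(mem_q, test_mode):
        """Blocking gate on a memory output: force '0' while testing."""
        return "0" if test_mode else mem_q

    print(gated_output("X", test_mode=True))   # -> '0': no X reaches ATPG
    print(gated_output("1", test_mode=False))  # -> '1': functional path intact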
An important design stage for scan-based testing with memory subsystems is the step of creating a gate-level model and verifying against it. The flow needs to provide a robust ATPG netlist model. Most industry-standard CAD tools used to analyze fault coverage and generate test vectors require gate-level models. However, since custom embedded memories are typically designed using a transistor-level flow, there is a need for an abstraction step to generate the gate models, which must be equivalent to the actual (transistor-level) design. The contribution of this research is a framework to verify that the gate-level representation of custom designs is equivalent to the transistor-level design.
The number of patterns for at-speed testing is much larger than for basic stuck-at fault testing, so reducing test time and data volume is important. In this dissertation, a new scan reordering method is introduced to reduce test data with an optimal routing solution. With an in-depth understanding of embedded memories and of the flows developed during the study of custom memory DFT, a custom embedded memory bit-mapping method using a symbolic simulator is presented in the last chapter to achieve high yield for memories.
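The scan reordering method itself is not described in the abstract; purely as an illustration of routing-driven reordering, the sketch below chains scan cells greedily by nearest-neighbour distance. Cell names and placements are made up.

    import math

    def reorder_chain(cells, start):
        """cells: dict cell name -> (x, y) placement.
        Greedily hops to the nearest unvisited cell to shorten scan routing."""
        order, current = [start], start
        remaining = set(cells) - {start}
        while remaining:
            current = min(remaining, key=lambda c: math.dist(cells[current], cells[c]))
            order.append(current)
            remaining.discard(current)
        return order

    cells = {"f0": (0, 0), "f1": (9, 9), "f2": (1, 0), "f3": (1, 1)}
    print(reorder_chain(cells, "f0"))  # -> ['f0', 'f2', 'f3', 'f1']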
Scalable algorithms for software based self test using formal methods
Transistor scaling has kept up with Moore's law, with a doubling of the number of transistors on a chip. More logic on a chip means more opportunities for manufacturing defects to slip in. This, in turn, has made processor testing after manufacturing a significant challenge. At-speed functional testing, being completely non-intrusive, has been seen as the ideal way of testing chips. For processor testing, however, generating instruction-level tests that cover all faults is a challenge, given the issue of scalability. Data-path faults are relatively easy to control and observe compared to control-path faults. In this research we present a novel method to generate instruction-level tests for hard-to-detect control-path faults in a processor. We initially map the gate-level stuck-at fault to the Register Transfer Level (RTL) and build an equivalent faulty RTL model. The fault activation and propagation constraints are captured, using control and data flow graphs of the RTL, as a Linear Temporal Logic (LTL) property. This LTL property is then negated and given to a bounded model checker based on a bit-vector Satisfiability Modulo Theories (SMT) solver. From the counterexample to the property we can extract a sequence of instructions that activates the gate-level fault and propagates the fault effect to one of the observable points in the design. Other than the user supplying instruction constraints, this approach is completely automatic and does not require any manual intervention. Not all design behaviors are required to generate a test for a given fault. We use this insight to scale our methodology further. Under-approximations are design abstractions that capture only a subset of the original design behaviors. The use of RTL for test generation affords us two types of under-approximation: bit-width reduction and operator approximation. These are abstractions that perform reductions based on the semantics of the RTL design. We also explore structural reductions of the RTL, called path-based search, where we search through error propagation paths incrementally. This approach increases the size of the test generation problem step by step, so the SMT solver searches through the state space piecewise rather than doing the entire search at once. Experimental results show that our methods are robust and scalable for generating functional tests for hard-to-detect faults.
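As a toy illustration of the bounded-model-checking step (the real flow works on RTL with instruction constraints; here a made-up 8-bit accumulator stands in, with z3 as the bit-vector SMT back end), the fault-free and faulty models are unrolled k steps and the solver is asked for an input sequence whose states diverge, i.e. one that activates and propagates the injected fault.

    from z3 import BitVec, Solver, sat

    def bmc_fault_test(depth):
        s = Solver()
        ins = [BitVec(f"in_{k}", 8) for k in range(depth)]
        good, bad = BitVec("g0", 8), BitVec("b0", 8)
        s.add(good == 0, bad == 0)  # both models start from reset
        for k in range(depth):
            g_next, b_next = BitVec(f"g{k+1}", 8), BitVec(f"b{k+1}", 8)
            s.add(g_next == good + ins[k])          # fault-free accumulator
            s.add(b_next == bad + (ins[k] & 0xFE))  # stuck-at-0 on input bit 0
            good, bad = g_next, b_next
        s.add(good != bad)  # negated property: a difference becomes observable
        if s.check() == sat:
            m = s.model()
            return [m.eval(i, model_completion=True).as_long() for i in ins]
        return None  # fault not testable within this bound

    print(bmc_fault_test(depth=3))  # any sequence with an odd input value works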
Effective Launch-to-Capture Power Reduction for LOS Scheme with Adjacent-Probability-Based X-Filling
It has become necessary to reduce power during LSI testing. In particular, during at-speed testing, excessive power consumed during the launch-to-capture (LTC) cycle causes serious issues that may lead to the overkill of defect-free logic ICs. Many successful test generation approaches to reduce IR-drop and/or power supply noise during LTC for the launch-off-capture (LOC) scheme have previously been proposed, and several X-filling techniques have proven especially effective. With X-filling in the launch-off-shift (LOS) scheme, however, adjacent-fill (which was originally proposed for shift-in power reduction) is used frequently. In this work, we propose a novel X-filling technique for the LOS scheme, called Adjacent-Probability-based X-Filling (AP-fill), which can reduce more LTC power than adjacent-fill. We incorporate AP-fill into a post-ATPG test modification flow consisting of test relaxation and X-filling in order to avoid fault coverage loss and test vector count inflation. Experimental results for the larger ITC'99 circuits show that the proposed AP-fill technique can achieve a higher power reduction ratio than 0-fill, 1-fill, and adjacent-fill.
2011 Asian Test Symposium, 20-23 November 2011, New Delhi, India
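For reference, plain adjacent-fill (the baseline that AP-fill is compared against) can be sketched as follows; the probability computation that distinguishes AP-fill is not reproduced here. Each X in a test cube takes the value of the nearest earlier specified bit, which minimizes transitions during shift.

    def adjacent_fill(cube):
        """Fill each 'X' with the most recent specified bit value.
        Leading Xs default to '0' (an assumption for this sketch)."""
        out, last = [], "0"
        for bit in cube:
            if bit in "01":
                last = bit
            out.append(last)
        return "".join(out)

    print(adjacent_fill("XX1XX0X1XX"))  # -> '0011100111'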