
    Test-Signal Search for Mixed-Signal Cores in a System-on-Chip

    The well-known approach to testing mixed-signal cores is functional testing, essentially measuring the key parameters of the core. However, as performance requirements increase and embedded cores are considered, functional testing becomes technically and economically less attractive. A more cost-effective approach can be achieved by combining reduced functional tests with added structural tests, which also improves the debugging facilities of the cores. The basic problem remains the large computational effort required for analogue structural testing. In this paper, we introduce the concept of a Testability Transfer Function for both the analogue and digital parts of a mixed-signal core. This opens new possibilities for efficient structural testing of embedded mixed-signal cores, thereby adding to the quality of the tests.

    HybMT: Hybrid Meta-Predictor based ML Algorithm for Fast Test Vector Generation

    Testing an integrated circuit (IC) is a highly compute-intensive process. For today's complex designs, tests for many hard-to-detect faults are typically generated using deterministic test generation (DTG) algorithms. Machine learning (ML) is increasingly being used to increase test coverage and decrease overall testing time. Such proposals primarily reduce the wasted work in the classic Path Oriented Decision Making (PODEM) algorithm without compromising test quality. With variants of PODEM, backtracking is often required because further progress cannot be made, so there is a need to predict the best strategy at different points in the execution of the algorithm. The novel contribution of this paper is a 2-level predictor: the top level is a meta predictor that chooses one of several predictors at the lower level, selecting the best predictor for a given circuit and target net. The accuracy of the top-level meta predictor was found to be 99%. This leads to a significantly reduced number of backtracking decisions compared to state-of-the-art ML-based and conventional solutions. Compared to a popular, state-of-the-art commercial ATPG tool, our 2-level predictor (HybMT) shows an overall reduction of 32.6% in CPU time without compromising fault coverage for the EPFL benchmark circuits. HybMT also shows a speedup of 24.4% and 95.5% over the existing state of the art (the baseline) while obtaining equal or better fault coverage for the ISCAS'85 and EPFL benchmark circuits, respectively.
    Comment: 9 pages, 7 figures and 7 tables. Changes from the previous version: we performed more experiments with different regressor models and also proposed a new neural network model, HybNN. We report the results for the EPFL benchmark circuits (the most recent and large ones) and compare our results against a popular commercial ATPG tool.
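
    To make the 2-level structure concrete, the following is a minimal sketch of a meta predictor that picks one of several lower-level predictors per circuit and target net and then uses the chosen predictor to rank candidate decisions. The feature encodings, model choices, and cost criterion are illustrative assumptions, not the actual HybMT implementation.

        # A minimal sketch, assuming hypothetical features and model choices;
        # not the actual HybMT code.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LinearRegression
        from sklearn.tree import DecisionTreeRegressor

        class TwoLevelPredictor:
            def __init__(self, low_level_models):
                # low_level_models: name -> regressor estimating the cost
                # (e.g. expected backtracks) of a candidate PODEM decision
                self.low = low_level_models
                # top level: meta predictor that picks which low-level model
                # to trust for a given circuit / target net
                self.meta = RandomForestClassifier(n_estimators=50)

            def fit(self, net_features, best_model_labels, per_model_data):
                self.meta.fit(net_features, best_model_labels)
                for name, (X, y) in per_model_data.items():
                    self.low[name].fit(X, y)

            def choose_decision(self, net_feature_vec, candidate_features):
                # 1) select the low-level predictor for this target net
                name = self.meta.predict([net_feature_vec])[0]
                # 2) score the candidate decisions and keep the cheapest one
                scores = self.low[name].predict(candidate_features)
                return int(np.argmin(scores))

        # Illustrative lower-level pool; any regressors could be plugged in here.
        predictor = TwoLevelPredictor({
            "tree": DecisionTreeRegressor(max_depth=8),
            "linear": LinearRegression(),
        })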

    Test Generation Based on CLP

    Functional ATPGs based on simulation are fast but generally unable to cover corner cases, and they cannot prove untestability. On the contrary, functional ATPGs exploiting formal methods, being exhaustive, cover corner cases, but they tend to suffer from the state explosion problem when adopted for verifying large designs. In this context, we have defined a functional ATPG that relies on the joint use of pseudo-deterministic simulation and Constraint Logic Programming (CLP) to generate high-quality test sequences for solving complex problems. Thus, the advantages of both simulation-based and static verification techniques are preserved, while their respective drawbacks are limited. In particular, CLP, a form of constraint programming in which logic programming is extended to include concepts from constraint satisfaction, is well suited to being used jointly with simulation. In fact, information learned during design exploration by simulation can be effectively exploited to guide the search of a CLP solver towards DUV areas not yet covered. The test generation procedure relies on CLP techniques in its different phases.

    The ATPG framework is composed of three functional ATPG engines working on three different models of the same DUV: the hardware description language (HDL) model of the DUV, a set of concurrent EFSMs extracted from the HDL description, and a set of logic constraints modeling the EFSMs. The EFSM paradigm has been selected since it allows a compact representation of the DUV state space that limits the state explosion problem typical of more traditional FSMs. The first engine is random-based, the second is transition-oriented, and the last is fault-oriented. The test generation is guided by means of transition coverage and fault coverage. In particular, 100% transition coverage is desired as a necessary condition for fault detection, while the bit-coverage functional fault model is used to evaluate the effectiveness of the generated test patterns by measuring the related fault coverage.

    A random engine is first used to explore the DUV state space by performing a simulation-based random walk. This allows us to quickly fire easy-to-traverse (ETT) transitions and, consequently, to quickly cover easy-to-detect (ETD) faults. However, the majority of hard-to-traverse (HTT) transitions generally remain uncovered. Thus, a transition-oriented engine is applied to cover the remaining HTT transitions by exploiting a learning/backjumping-based strategy. The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated, and it can be effectively integrated with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states. A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guard of the EFSM transitions selected to be traversed. Given a transition of the SSEFSM, the solver is required to generate appropriate values for the PIs that enable the SSEFSM to move across that transition. Moreover, backjumping, also known as non-chronological backtracking, is a special kind of backtracking strategy which rolls back from an unsuccessful situation directly to the cause of the failure. Thus, the transition-oriented engine deterministically backjumps to the source of failure when a transition whose guard depends on previously set registers cannot be traversed. Next, it modifies the EFSM configuration to satisfy the condition on the registers and successfully comes back to the target state to activate the transition.

    The transition-oriented engine generally allows us to achieve 100% transition coverage. However, 100% transition coverage does not guarantee that all DUV corner cases are explored, so some hard-to-detect (HTD) faults can escape detection, preventing the achievement of 100% fault coverage. Therefore, the CLP-based fault-oriented engine is finally applied to focus on the remaining HTD faults. The CLP solver is used to deterministically search for sequences that propagate the HTD faults observed, but not detected, by the random and transition-oriented engines. The fault-oriented engine needs a CLP-based representation of the DUV and some searching functions to generate test sequences. The CLP-based representation is automatically derived from the SSEFSM models according to the defined rules, which follow the syntax of the ECLiPSe CLP solver. This is not a trivial task, since modeling the evolution in time of an EFSM by using logic constraints is quite different from modeling the same behavior by means of a traditional hardware description language. First, the concept of time steps is introduced, required to model the SSEFSM evolution through time via CLP. Then, this study deals with modeling the logical variables and constraints that represent the enabling functions and update functions of the SSEFSM. Formal tools that exhaustively search for a solution frequently run out of resources when the state space to be analyzed is too large. The same happens for the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that allow us to prune the search space and to manage the complexity problem for the solver.
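
    As an illustration of how the transition-oriented engine interacts with the solver, the sketch below replaces the ECLiPSe CLP solver with a naive enumeration over primary-input domains. The guard encoding, register handling, and domain sizes are assumptions made for the example only.

        from itertools import product

        def solve_guard(guard, pi_domains, regs):
            # Stand-in for the CLP solver: enumerate primary-input (PI)
            # assignments until the guard predicate, evaluated over the PIs
            # and the current register configuration, holds.
            names = list(pi_domains)
            for values in product(*(pi_domains[n] for n in names)):
                pis = dict(zip(names, values))
                if guard(pis, regs):
                    return pis          # PI vector that enables the transition
            return None                 # not enabled from this configuration

        # Hypothetical guard "a > r0 and b == 1" over small PI domains.
        guard = lambda pi, regs: pi["a"] > regs["r0"] and pi["b"] == 1
        vec = solve_guard(guard, {"a": range(4), "b": range(2)}, {"r0": 2})
        # vec == {'a': 3, 'b': 1}; if None were returned, the engine would
        # backjump to the transition that last assigned r0, force a register
        # value that enables this guard, and come back to fire the transition.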

    Image Test Libraries for the on-line self-test of functional units in GPUs running CNNs

    The widespread use of artificial intelligence (AI)-based systems has raised several concerns about their deployment in safety-critical systems. Industry standards, such as ISO 26262 for automotive, require detecting hardware faults during the mission of the device. Similarly, new standards are being released concerning the functional safety of AI systems (e.g., ISO/IEC CD TR 5469). Hardware solutions have been proposed for the in-field testing of the hardware executing AI applications; however, when used in applications such as Convolutional Neural Networks (CNNs) for image processing tasks, they may increase the hardware cost and affect application performance. In this paper, for the very first time, a methodology to develop high-quality test images to be interleaved with the normal inference process of the CNN application is proposed. An Image Test Library (ITL) is developed targeting the on-line test of GPU functional units. The proposed approach does not require changing the actual CNN (thus avoiding costly memory loading operations), since it is able to exploit the actual CNN structure. Experimental results show that a 6-image ITL is able to achieve about 95% stuck-at test coverage on the floating-point multipliers in a GPU. The obtained ITL requires a very low test application time, as well as very little memory space for storing the test images and the golden test responses.
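
    A minimal sketch of how such an ITL could be interleaved with mission inference is shown below; the test period, the mismatch handling, and the function names are assumptions for illustration, not the methodology's actual scheduling policy.

        import numpy as np

        def run_with_itl(cnn, frames, itl_images, golden_outputs, period=100):
            # Run normal inference on the incoming frames; every `period`
            # frames, push one ITL image through the unchanged CNN and compare
            # the result against its precomputed golden response, so that
            # faults in the GPU functional units become observable on-line.
            itl_idx = 0
            for i, frame in enumerate(frames):
                yield cnn(frame)                         # mission inference
                if (i + 1) % period == 0:
                    test_out = cnn(itl_images[itl_idx])  # same network, test stimulus
                    if not np.allclose(test_out, golden_outputs[itl_idx]):
                        raise RuntimeError(
                            f"self-test mismatch on ITL image {itl_idx}")
                    itl_idx = (itl_idx + 1) % len(itl_images)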

    RT-level fast fault simulator

    In this paper a new fast fault simulation technique is presented for calculating fault propagation through HLPs (High-Level Primitives). ROTDDs (Reduced Ordered Ternary Decision Diagrams) are used to describe the HLP modules. The technique is implemented in the HTDD RT-level fault simulator, which is evaluated on several ITC'99 benchmarks. The hypothesis is confirmed that a test set's coverage of physical failures can be anticipated with high accuracy when the RTL fault model takes into account the optimization strategies used by the applied CAE system.
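
    For illustration, a minimal sketch of fault-value propagation through an HLP described by a ternary decision diagram is given below; the node layout and the 0/1/'X' encoding are assumptions for the example, not the HTDD simulator's internal data structures.

        class TDDNode:
            def __init__(self, var=None, children=None, value=None):
                self.var = var            # input tested by the node (None at a terminal)
                self.children = children  # maps 0, 1, 'X' to a successor node
                self.value = value        # HLP output value at a terminal

        def propagate(node, assignment):
            # Walk the ROTDD with a ternary input assignment (0, 1 or 'X') and
            # return the primitive's output; running the walk with fault-free
            # and faulty assignments shows whether the fault propagates
            # through the high-level primitive.
            while node.value is None:
                node = node.children[assignment[node.var]]
            return node.value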