
    What is the Path to Fast Fault Simulation?

    Motivated by the recent advances in fast fault simulation techniques for large combinational circuits, a panel discussion was organized for the 1988 International Test Conference. This paper is a collective account of the position statements offered by the panelists.

    New Techniques to Reduce the Execution Time of Functional Test Programs

    The compaction of test programs for processor-based systems is of utmost practical importance: Software-Based Self-Test (SBST) is nowadays increasingly adopted, especially for the in-field test of safety-critical applications, and both the size and the execution time of the test are critical parameters. However, while compacting the size of binary test sequences has been thoroughly studied over the years, reducing the execution time of test programs is still a rather unexplored area of research. This paper describes a family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size. The proposed solutions are based on instruction removal and restoration, which is shown to be computationally more efficient than instruction removal alone. Experimental results demonstrate the compaction capabilities and allow analyzing the computational costs and effectiveness of the different algorithms.
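    The removal-and-restoration idea can be illustrated with a short sketch. This is a minimal illustration, not the paper's actual algorithms: the evaluate oracle (returning the fault coverage achieved by a candidate program) and the block size are assumptions introduced here.

```python
# Minimal sketch of removal-and-restoration compaction (assumed oracle).
# evaluate(program) is a hypothetical function (e.g., a fault simulator or
# an on-target run) returning the fault coverage achieved by the program.

def compact(program, evaluate, block=8):
    """Drop whole blocks of instructions; restore individual ones on demand."""
    baseline = evaluate(program)
    kept, rest = [], list(program)
    while rest:
        chunk, rest = rest[:block], rest[block:]
        if evaluate(kept + rest) >= baseline:
            continue  # the whole chunk was redundant: drop it in one shot
        # Coverage dropped: restore instructions from the chunk one at a
        # time until the original coverage is recovered.
        for ins in chunk:
            kept.append(ins)
            if evaluate(kept + rest) >= baseline:
                break
    return kept
```

    When many instructions are redundant, whole chunks are discarded with a single evaluation, which suggests why a remove-then-restore scheme can need far fewer evaluations than testing the removal of one instruction at a time.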

    A Test Vector Minimization Algorithm Based On Delta Debugging For Post-Silicon Validation Of PCIe Rootport

    In silicon hardware design, such as the design of PCIe devices, design verification is an essential part of the design process, whereby the devices are subjected to a series of tests that verify their functionality. However, manual debugging is still widely used in post-silicon validation and is a major bottleneck in the validation process, because a large number of test vectors has to be analyzed, which slows the process down. To address this problem, a test vector minimizer algorithm is proposed to eliminate redundant test vectors that do not contribute to the reproduction of a test failure, thereby improving debug throughput. The proposed methodology is inspired by the Delta Debugging algorithm, which has been used in automated software debugging but not in post-silicon hardware debugging. The minimizer operates on the principle of binary partitioning of the test vectors, iteratively testing each subset (or the complement of a subset) on a post-silicon System-Under-Test (SUT) to identify and eliminate redundant test vectors. Test results using test vector sets containing deliberately introduced erroneous test vectors show that the minimizer is able to isolate the erroneous vectors. In test cases containing up to 10,000 test vectors, the minimizer requires about 16 ns per test vector when only one erroneous test vector is present. In a test case with 1000 vectors including erroneous vectors, the same minimizer requires about 140 μs per injected erroneous test vector. Thus, the minimizer's CPU consumption is significantly smaller than the typical run time of a test on the SUT. The factors that most significantly impact the performance of the algorithm are the number of erroneous test vectors and their distribution (spacing); the effects of the total number of test vectors and the position of the erroneous vectors are relatively minor. The minimization algorithm is therefore most effective when there are only a few erroneous test vectors in a large test vector set.
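    The binary-partitioning principle follows Zeller's ddmin scheme. Below is a hedged sketch of that scheme, not the thesis's exact implementation: fails(vectors) is a hypothetical oracle that applies the vectors to the SUT and reports whether the target failure reproduces.

```python
# Delta-debugging-style minimization of a failing test vector set.
# fails(vectors) -> bool is a hypothetical oracle on the post-silicon SUT.

def minimize(vectors, fails, n=2):
    """Return a reduced vector list that still reproduces the failure."""
    while len(vectors) >= 2:
        chunk = max(1, len(vectors) // n)
        subsets = [vectors[i:i + chunk] for i in range(0, len(vectors), chunk)]
        for i, subset in enumerate(subsets):
            complement = [v for j, s in enumerate(subsets) if j != i for v in s]
            if fails(subset):                    # failure isolated to one subset
                vectors, n = subset, 2
                break
            if len(subsets) > 2 and fails(complement):
                vectors, n = complement, max(n - 1, 2)
                break
        else:
            if n >= len(vectors):                # finest granularity reached
                break
            n = min(len(vectors), 2 * n)         # refine the partition and retry
    return vectors
```

    This structure is consistent with the observation above that the number and spacing of erroneous vectors dominate the cost: widely spaced erroneous vectors force many partition refinements, while a single erroneous vector is isolated in a near-logarithmic number of SUT runs.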

    Static Compaction of Test Sequences for Synchronous Sequential Circuits

    Today, VLSI design has progressed to a stage where it must incorporate methods of testing circuits. Automatic Test Pattern Generation (ATPG) is a very attractive method, feasible for almost any combinational or sequential circuit. However, currently available automatic test pattern generators (ATPGs) generate test sets that may be excessively long. Because the cost of testing depends on the test length, compaction techniques have been used to reduce that length. The motivation for studying test compaction is twofold. Firstly, by reducing the test sequence length, the memory requirements during test application and the test application time are reduced. Secondly, the extent of test compaction possible for deterministic test sequences indicates that test pattern generators spend a significant amount of time generating test vectors that are not necessary; the compacted test sequences provide a target for more efficient deterministic test generators. Two types of compaction techniques exist: dynamic and static. Dynamic test sequence compaction performs compaction concurrently with the test generation process and often requires modification of the test generator. Static test sequence compaction is done in a post-processing step after test generation and is independent of the test generation algorithm and process. In this thesis, a new idea for the static compaction of test sequences for synchronous sequential circuits is proposed. Our new method, SUSEM (Set Up Sequence Elimination Method), uses circuit state information to eliminate some setup sequences for the target faults and consequently reduce the test sequence length. The technique has been applied to test sequences generated by the HITEC test generator. ISCAS89 benchmark circuits were used in our experiments; for circuits with a large number of target faults and a relatively small number of flip-flops, very significant compaction was obtained. More importantly, this method can be used to improve the test generation procedure, unlike most static compaction methods, which blindly or randomly remove parts of test sequences and cannot be used to improve the test generators.
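    The abstract does not detail SUSEM's mechanics; the following rough sketch, with an assumed decomposition of each test into setup and detection subsequences and hypothetical simulator helpers, only illustrates the underlying idea of skipping a setup sequence when the circuit is already in a suitable state.

```python
# Rough sketch of setup-sequence elimination (illustrative only).
# Each test is assumed to be a (setup_vectors, detect_vectors, target_state)
# triple, where target_state is the flip-flop state its setup establishes.
# simulate() and states_compatible() are hypothetical simulator helpers.

def eliminate_setups(tests, initial_state, simulate, states_compatible):
    compacted, state = [], initial_state
    for setup, detect, target_state in tests:
        if states_compatible(state, target_state):
            kept = detect            # setup is redundant: state already reached
        else:
            kept = setup + detect    # keep the full test
        compacted.extend(kept)
        state = simulate(kept, state)  # track the state for the next test
    return compacted
```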

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Due to the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature sizes, power density grows geometrically with technology scaling. Additionally, the power dissipated inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than during the normal functional phase of operation. As a result, the currents that flow in the power grid during testing are much higher than what the power grid is designed for (the functional phase of operation), so during at-speed testing the supply grid experiences unacceptable supply IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Due to the variation in signal delays caused by these variations, it is important to perform at-speed testing even for stuck-at faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck-at faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck-at faults is concerned, while some techniques have been proposed for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are none for sequential circuits. This thesis addresses this open problem. We propose algorithms for the minimization of peak switching activity during at-speed testing of stuck-at faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to the BTSP is novel, and is proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering for an arbitrary filling of don't-care bits is insufficient to produce an effective reduction in switching activity during the testing of large circuits. Since don't cares dominate the test sets of larger circuits, don't-care filling plays a crucial role in reducing switching activity during testing.
    Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don't-care bits in the test vectors; the don't cares are then filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses Dynamic Programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for this purpose. A proof of the optimality of DP-fill in minimizing peak input-switching activity is also provided in this thesis.
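    DP-fill itself is not reproduced in the abstract. As a simpler illustration of why don't-care filling matters for input switching activity, the sketch below uses the well-known adjacent-fill heuristic, which copies each X bit from the previous vector so that consecutive vectors differ in as few input bits as possible for a given ordering.

```python
# Adjacent fill (illustrative; this is not the thesis's DP-fill algorithm).
# Each don't-care bit 'X' is copied from the previously applied vector,
# keeping the Hamming distance between consecutive input vectors low.

def adjacent_fill(vectors):
    """vectors: ordered list of strings over {'0', '1', 'X'}."""
    filled, prev = [], None
    for vec in vectors:
        if prev is None:
            cur = vec.replace('X', '0')  # arbitrary fill for the first vector
        else:
            cur = ''.join(p if b == 'X' else b for b, p in zip(vec, prev))
        filled.append(cur)
        prev = cur
    return filled

def peak_input_switching(filled):
    """Peak number of input bits toggling between consecutive vectors."""
    return max(sum(a != b for a, b in zip(u, v))
               for u, v in zip(filled, filled[1:]))
```

    For example, filling ['0XX1', '0X01', '1XX1'] this way yields ['0001', '0001', '1001'], with a peak of only one toggling input between consecutive vectors.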

    A Lightweight N-Cover Algorithm For Diagnostic Fail Data Minimization

    The increasing design complexity of modern ICs has made it extremely difficult and expensive to test them comprehensively. As the transistor count and density of circuits increase, a large volume of fail data is collected by the tester for a single failing IC. The diagnosis procedure analyzes this fail data to give valuable information about the possible defects that may have caused the circuit to fail. However, without any feedback from the diagnosis procedure, the tester may often collect fail data that is not useful for identifying the defects in the failing circuit. This not only consumes tester memory but also increases tester data logging time and diagnosis run time. In this work, we present an algorithm to minimize the amount of fail data used for high-quality diagnosis of failing ICs. The algorithm analyzes the outputs at which the tests failed and determines which failing tests can be eliminated from the fail data without compromising diagnosis accuracy. It is used as a preprocessing step between the tester data logs and the diagnosis procedure. The performance of the algorithm was evaluated using fail data from industry-manufactured ICs. Experiments demonstrate that, on average, 43% of fail data was eliminated by our algorithm while maintaining an average diagnosis accuracy of 93%. With this reduction in fail data, diagnosis speed was also increased by 46%.
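    The abstract does not spell out the N-cover selection itself; the following greedy sketch conveys the general idea under stated assumptions: each failing test is reduced to the set of outputs at which it failed, and tests are retained until every failing output is covered by at least N kept tests. All names here are illustrative.

```python
# Greedy N-cover style selection over tester fail data (illustrative).
# fail_log: list of (test_id, set_of_failing_outputs) pairs; n: required cover.

def n_cover(fail_log, n):
    remaining = {}                       # failing output -> covers still needed
    for _, outputs in fail_log:
        for o in outputs:
            remaining[o] = n
    kept = []
    # Consider tests that fail at many outputs first.
    for test_id, outputs in sorted(fail_log, key=lambda t: -len(t[1])):
        useful = [o for o in outputs if remaining[o] > 0]
        if useful:
            kept.append(test_id)
            for o in useful:
                remaining[o] -= 1
    return kept
```

    Failing tests that add no new coverage of any failing output are dropped, which is the source of the fail data reduction without losing the output information the diagnosis procedure needs.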

    Test Cost Reduction for Logic Circuits: Reduction of Test Data Volume and Test Application Time

    As logic circuits grow in scale, the increase in test cost has become a serious problem. For large logic circuits in particular, reducing test data volume and test application time is a key challenge in test cost reduction. This paper surveys test compaction techniques for achieving test patterns with high fault coverage using as few test vectors as possible, test compression techniques that compress test data on the assumption that it is expanded and decompressed by additional on-chip hardware, and techniques for reducing test application time in scan-designed circuits.

    Test Vector Decomposition Based Static Compaction Algorithms for Combinational Circuits

    Testing systems-on-chip involves applying huge amounts of test data, which is stored in the tester memory and then transferred to the chip under test during test application. Practical techniques such as test compression and compaction are therefore required to reduce the amount of test data, in order to reduce both the total testing time and the memory requirements of the tester. In this paper, a new approach to static compaction for combinational circuits, referred to as test vector decomposition (TVD), is proposed. In addition, two new TVD-based static compaction algorithms are presented. Experimental results for benchmark circuits demonstrate the effectiveness of the two new static compaction algorithms.
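    The decomposition step of TVD is not detailed in the abstract, but static compaction of this kind ultimately rests on merging partially specified vectors that do not conflict in any bit position. A minimal sketch of such merging, with illustrative helper names, is given below.

```python
# Merging of partially specified test vectors (strings over {'0','1','X'}).
# Two vectors are compatible when no position holds conflicting 0/1 values;
# compatible vectors can be merged into one, shrinking the test set.

def compatible(u, v):
    return all(a == b or a == 'X' or b == 'X' for a, b in zip(u, v))

def merge(u, v):
    return ''.join(b if a == 'X' else a for a, b in zip(u, v))

def greedy_compact(components):
    """Greedily fold each component into the first compatible merged vector."""
    merged = []
    for comp in components:
        for i, vec in enumerate(merged):
            if compatible(vec, comp):
                merged[i] = merge(vec, comp)
                break
        else:
            merged.append(comp)
    return merged
```

    For instance, greedy_compact(['1XX0', '1X10', '0XXX']) merges the first two vectors into '1X10' and keeps '0XXX' separate, reducing three vectors to two.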

    Custom Integrated Circuits

    Contains reports on ten research projects. Sponsors: Analog Devices, Inc.; IBM Corporation; National Science Foundation / Defense Advanced Research Projects Agency Grant MIP 88-14612; Analog Devices Career Development Assistant Professorship; U.S. Navy - Office of Naval Research Contract N0014-87-K-0825; AT&T; Digital Equipment Corporation; National Science Foundation Grant MIP 88-5876.