
    Optimizing Test Pattern Generation Using Top-Off ATPG Methodology for Stuck-At, Transition and Small Delay Defect Faults

    The ever-increasing complexity and size of digital circuits, combined with Deep Sub-Micron (DSM) technology trends, pose challenges to efficient Design for Test (DFT) methodologies. Innovation is required not only in designing the digital circuits, but also in automatic test pattern generation (ATPG), to ensure that the pattern set screens all targeted faults while still complying with Automatic Test Equipment (ATE) memory constraints. DSM technology trends push the requirements of ATPG beyond the conventional static defects to include test patterns for dynamic defects. Current industry practice is to generate test patterns for transition faults to screen dynamic defects. It has been observed that screening for transition faults alone is not sufficient in light of continuing DSM technology trends. Shrinking technology nodes have pushed DFT engineers to include Small Delay Defect (SDD) test patterns in the production flow. Current industry-standard ATPG tools are still evolving, and SDD ATPG is not the most economical option in terms of either test generation CPU time or pattern volume. New techniques must be explored to ensure that a quality test pattern set can be generated which includes patterns for stuck-at, transition and SDD faults, all while keeping the pattern volume economical. This thesis explores the use of a “Top-Off” ATPG methodology to generate an optimal test pattern set which can effectively screen the required fault models while containing the pattern volume within a reasonable limit.
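
    In a top-off flow of this kind, patterns are typically generated for one fault model first, the existing patterns are fault-simulated against the next model, and extra patterns are generated only for faults not already covered. The sketch below illustrates that idea; run_atpg and fault_simulate are stand-in stubs for the actual ATPG tool steps, and the fault lists are purely illustrative, not the thesis's tool flow.

    def run_atpg(fault_model, target_faults):
        # Stub standing in for the ATPG tool: one pattern per targeted fault.
        return ["pat_%s_%s" % (fault_model, f) for f in sorted(target_faults)]

    def fault_simulate(patterns, fault_model, all_faults):
        # Stub standing in for fault simulation: pretend the already-generated
        # patterns fortuitously detect half of this model's faults.
        detected = set(sorted(all_faults)[: len(all_faults) // 2]) if patterns else set()
        return all_faults - detected          # faults still undetected

    fault_lists = {
        "stuck-at":   {"f1", "f2", "f3", "f4"},
        "transition": {"t1", "t2", "t3", "t4"},
        "sdd":        {"d1", "d2", "d3", "d4"},
    }

    patterns = []
    for model, faults in fault_lists.items():
        undetected = fault_simulate(patterns, model, faults)   # credit existing patterns first
        patterns += run_atpg(model, undetected)                # "top off" only the remaining gap

    print(len(patterns), "patterns after top-off")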

    A survey of scan-capture power reduction techniques

    With the advent of sub-nanometer geometries, integrated circuits (ICs) must be checked for newer defects. While scan-based architectures help detect these defects using newer fault models, test data volume inflates, increasing test time and test cost. An automatic test pattern generator (ATPG) exercises multiple fault sites simultaneously to reduce test data, which causes elevated switching activity during the capture cycle. This switching activity results in an IR drop exceeding the device under test (DUT) specification. An increase in IR drop leads to failure of the patterns and may cause good DUTs to fail the test. The problem is severe during at-speed scan testing, which uses a functional-rated, high-frequency clock for the capture operation. Researchers have proposed several techniques to reduce capture power, using various methods including the reduction of switching activity. This paper reviews the recently proposed techniques. The principle, algorithm, and architecture used in each are discussed, along with key advantages and limitations. In addition, the paper classifies the techniques based on the method used and its application. The goal is to present a survey of the techniques and prepare a platform for future development in capture power reduction during scan testing.
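
    As a rough illustration of the quantity these techniques try to control, capture-cycle switching activity is often approximated by the number of flip-flops that toggle between the launch state and the captured response (an unweighted form of weighted switching activity). The snippet below is a toy example of that count, not taken from any of the surveyed techniques.

    def capture_toggles(launch_state, capture_state):
        # Count flip-flops that change value during the capture cycle.
        return sum(a != b for a, b in zip(launch_state, capture_state))

    # Example: 4 of the 8 flops toggle, so this pattern draws high capture power.
    print(capture_toggles("10110010", "01011010"))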

    Reducing Power During Manufacturing Test Using Different Architectures

    Power during manufacturing test can be several times higher than power consumption in functional mode. Excessive power during test can cause IR drop, overheating, and early aging of the chips. In this dissertation, three different architectures are introduced to reduce test power in general cases as well as in certain scenarios, including field test. In the first architecture, scan chains are divided into several segments. Each segment has a control bit that enables capture in that segment only when new faults are detectable on it for the current pattern; otherwise, the segment is disabled to reduce capture power. We group the control bits together into one or more control chains. To address the extra pin(s) required to shift data into the control chain(s) and the significant post-processing needed by the first architecture, we explored a second architecture. The second architecture stitches the control bits into the chains they control as EECBs (embedded enable capture bits) in between the segments. This allows an ATPG software tool to automatically generate the appropriate EECB values for each pattern to maintain fault coverage, and it also works in the presence of an on-chip decompressor. The last architecture focuses primarily on the self-test of a device in a 3D stacked IC when an existing FPGA in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
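
    A minimal sketch of the per-segment capture-enable decision described above, under the assumption that the tool knows which flip-flops observe new faults for each pattern (the data structures here are illustrative, not the actual tool format): a segment's control bit is set only if the current pattern observes new faults at flip-flops inside that segment.

    def eecb_values(newly_detecting_flops, segment_of_flop):
        # newly_detecting_flops: flops where this pattern observes new faults.
        # segment_of_flop:       mapping flop name -> scan-segment index.
        n_segments = max(segment_of_flop.values()) + 1
        enable = [0] * n_segments
        for flop in newly_detecting_flops:
            enable[segment_of_flop[flop]] = 1     # only these segments capture
        return enable

    segment_of_flop = {"ff0": 0, "ff1": 0, "ff2": 1, "ff3": 1, "ff4": 2}
    print(eecb_values({"ff1", "ff4"}, segment_of_flop))   # -> [1, 0, 1]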

    Efficient Path Delay Test Generation with Boolean Satisfiability

    This dissertation focuses on improving the accuracy and efficiency of path delay test generation using a Boolean satisfiability (SAT) solver. As part of this research, one of the most commonly used SAT solvers, MiniSat, was integrated into the path delay test generator CodGen. A mixed structural-functional approach was implemented in CodGen, where the longest paths are detected using the K Longest Paths Per Gate (KLPG) algorithm and path justification and dynamic compaction are handled with the SAT solver. Advanced techniques were implemented in CodGen to further speed up SAT-based path delay test generation using knowledge of the circuit structure. SAT solvers are inherently unaware of circuit structure, and significant speedup can be gained if structural information about the circuit is provided to the solver. The advanced techniques explored include Dynamic SAT Solving (DSS), Circuit Observability Don’t Care (Cir-ODC), SAT-based static learning, dynamic learnt clause management, and Approximate Observability Don’t Care (ACODC). Both ISCAS89 and ITC99 benchmarks as well as industrial circuits were used to demonstrate that the performance of CodGen is significantly improved with MiniSat and the use of circuit structure.
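
    To make the division of labour concrete, the toy sketch below mimics the mixed approach: a structurally selected path contributes side-input conditions encoded as CNF clauses, and a SAT check decides whether they can be justified. The clauses are made up for illustration, and the brute-force check merely stands in for MiniSat.

    from itertools import product

    def sat(cnf, n_vars):
        # Toy SAT check: each clause is a list of ints, positive = variable,
        # negative = negated variable (DIMACS-style numbering from 1).
        for assign in product([False, True], repeat=n_vars):
            if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in cnf):
                return assign
        return None

    # Hypothetical side-input conditions for propagating a transition along a path:
    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
    cnf = [[1, -2], [2, 3], [-1, -3]]
    print(sat(cnf, 3))   # a satisfying assignment means the path can be justified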

    Design of On-Chip Self-Testing Signature Register

    Over the last few years, scan test has turned out to be too expensive to implement for industry-standard designs, due to increasing test data volume and test time. The test cost of a chip is mainly governed by the resource utilization of Automatic Test Equipment (ATE). It also depends directly on test time, which includes the time required to load the test program, apply the test vectors, and analyze the generated test responses of the chip. The issue of test time and data volume increasingly pushes designers to use on-chip test data compactors, on the input side, the output side, or both. Such techniques significantly address the former issues but have little control over the growing number of inputs and outputs used in test mode. Further, the number of test pins on the DUT keeps increasing over the generations, so the scan channels available on the test floor fall short in number for placement of such ICs. To address the issues discussed above, we introduce an on-chip self-testing signature register. It comprises a response compactor and a comparator. The compactor compacts a large chunk of response data into a small test signature, whereas the comparator compares this test signature with the desired one. The overall test result for the design is generated on a single output pin. Since no storage of test responses is required, a considerable reduction in ATE memory can be observed. Also, with only a single pin to be monitored for the test result, the number of tester channels and compare edges on the ATE side is significantly reduced. This cuts down the maintenance and usage cost of the test floor and increases its lifetime. Furthermore, the reduction in test pins gives DFT engineers scope to increase the number of scan chains and thus further reduce test time.
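
    A small software model of the idea, assuming for illustration a 4-bit single-input signature register with an arbitrary feedback tap selection (the real compactor and comparator are hardware): responses are compacted into a short signature, and an on-chip comparison against the expected signature collapses the result to a single pass/fail bit.

    def compact(response_bits, width=4, taps=(3, 2)):
        # Shift each response bit into the register with feedback XORed in.
        state = [0] * width
        for bit in response_bits:
            feedback = bit ^ state[taps[0]] ^ state[taps[1]]
            state = [feedback] + state[:-1]
        return state

    def self_test(response_bits, golden_signature):
        # Comparator: the whole test result collapses to one pass/fail bit.
        return int(compact(response_bits) == golden_signature)

    golden = compact([1, 0, 1, 1, 0, 0, 1, 0])           # expected (fault-free) signature
    print(self_test([1, 0, 1, 1, 0, 0, 1, 0], golden))   # 1 = pass
    print(self_test([1, 0, 1, 1, 0, 1, 1, 0], golden))   # 0 = fail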

    A Method to Support Diagnostics of Dynamic Faults in Networks of Interconnections

    The article is devoted to a method that facilitates the diagnosis of dynamic faults in interconnection networks in systems-on-chip. It shows how to reconstruct the erroneous test response sequence coming from the faulty connection based on a set of signatures obtained by compacting this sequence multiple times in a MISR register with programmable feedback. The Chinese remainder theorem is used for this purpose. The article analyzes in detail various hardware realizations of the discussed method, and the testing time associated with each proposed solution is also estimated. The presented method can be used with any type of test sequence and test pattern generator. It is also easily scalable to any number of nets in the network of interconnections. Moreover, it supports finding a trade-off between area overhead and testing time.
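
    The Chinese remainder theorem step can be shown in miniature with an integer analogy (illustrative only, not the article's hardware formulation): if the same unknown value is known modulo several pairwise coprime moduli, with the residues playing the role of the signatures collected under different programmable feedbacks, the value is reconstructed uniquely modulo the product of the moduli.

    def crt(residues, moduli):
        # Reconstruct x from x mod m_i for pairwise coprime m_i.
        M = 1
        for m in moduli:
            M *= m
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)       # pow(Mi, -1, m): modular inverse (Python 3.8+)
        return x % M

    moduli = [7, 11, 13, 17]                   # pairwise coprime "feedback" moduli
    unknown = 2021                             # stands in for the erroneous sequence
    residues = [unknown % m for m in moduli]   # the collected "signatures"
    print(crt(residues, moduli))               # -> 2021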

    High Quality Test Generation Targeting Power Supply Noise

    Delay test is an essential structural manufacturing test used to determine the maximum frequency at which a chip can run without incurring functional failures. The central unsolved challenge is achieving high delay correlation with functional operation, a correlation dominated by power supply noise (PSN). Differences in PSN between functional and structural tests can lead to differences in chip operating frequencies of 30% or more. Pseudo functional test (PFT), based on a multiple-cycle clocking scheme, has better PSN correlation with functional operation than traditional two-cycle at-speed test. However, PFT is vulnerable to under-testing when applied to delay test. This work aims to generate high-quality PFT patterns that achieve high PSN correlation with functional operation. First, a simulation-based don’t-care filling algorithm, Bit-Flip, is proposed to improve the PSN for PFT. It relies on randomly flipping groups of bits in the test pattern to explore the search space and find patterns that stress the circuit with worst-case, but close-to-functional, PSN. Experimental results on un-compacted patterns show that Bit-Flip improves PSN by as much as 38.7% compared with the best random fill. Second, techniques are developed to improve the efficiency of Bit-Flip. A set of partial patterns, which sensitize transitions on critical cells, is pre-computed and later used to guide the selection of bits to flip. Combining random and deterministic flipping, we achieve PSN control similar to Bit-Flip but with much less simulation time. Third, we address the problem of automatic test pattern generation for extracting circuit timing sensitivity to power supply noise during post-silicon validation. A layout-aware path selection algorithm selects long paths that fully span the power delivery network. The selected patterns are intelligently filled to bring the PSN to a desired level. These patterns can be used to understand timing sensitivity in post-silicon validation by repeatedly applying the path delay test while sweeping the PSN experienced by the path from low to high. Finally, the impact of compression on power supply noise control is studied. Illinois Scan and embedded deterministic test (EDT) patterns are generated, and Bit-Flip is extended to incorporate the compression constraints and applied to compressible patterns. The experimental results show that EDT lowers the maximal PSN by 24.15% and Illinois Scan lowers it by 2.77% on un-compacted patterns.
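
    A hedged sketch of the random bit-flipping search described above: groups of don’t-care bits are flipped at random, and a change is kept when it moves a power-supply-noise estimate closer to the target level. The psn_estimate cost below is a crude toggle-count placeholder of my own; the actual algorithm drives the search with simulation-based PSN values.

    import random

    def psn_estimate(bits):
        # Placeholder cost: adjacent toggles as a crude proxy for switching/PSN.
        return sum(a != b for a, b in zip(bits, bits[1:]))

    def bit_flip_fill(bits, dont_cares, target, iters=1000, group=3):
        bits = [random.randint(0, 1) if i in dont_cares else b
                for i, b in enumerate(bits)]              # start from a random fill
        best = psn_estimate(bits)
        for _ in range(iters):
            trial = bits[:]
            for i in random.sample(sorted(dont_cares), k=min(group, len(dont_cares))):
                trial[i] ^= 1                             # flip a group of X-bits
            score = psn_estimate(trial)
            if abs(score - target) < abs(best - target):  # closer to the target PSN
                bits, best = trial, score
        return bits, best

    pattern = [1, 0, None, None, 1, None, 0, None]        # None marks don't-care bits
    xs = {i for i, b in enumerate(pattern) if b is None}
    print(bit_flip_fill([b if b is not None else 0 for b in pattern], xs, target=5))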

    Low Cost Power and Supply Noise Estimation and Control in Scan Testing of VLSI Circuits

    Test power is an important issue in deep-submicron semiconductor testing. Too much power supply noise and too much power dissipation can result in excessive temperature rise, both leading to overkill during delay test. Scan-based test has been widely adopted as one of the most commonly used VLSI testing methods. The test power during scan testing comprises shift power and capture power, and the power consumed in the shift cycles dominates the total power dissipation. It is crucial for IC manufacturing companies to achieve near-constant power consumption over a given timing window in order to keep the chip under test (CUT) at a near-constant temperature, to make it easy to characterize the circuit behavior, and to prevent delay test overkill. To achieve constant test power, we first built a fast and accurate power model that can estimate the shift power without logic simulation of the circuit. We also propose an efficient, low-power X-bit filling process that can potentially reduce both shift power and capture power. We then introduce an efficient test pattern reordering algorithm that achieves near-constant power between groups of patterns, where the number of patterns in a group is determined by the thermal constant of the chip. Experimental results show that our proposed power model has very good correlation. Our proposed X-fill process achieves both minimum shift power and minimum capture power. The algorithm supports multiple scan chains and can achieve constant power within different regions of the chip. The greedy test pattern reordering algorithm can reduce the power variation from 29-126 percent to 8-10 percent, or even lower if the power variance threshold is reduced. Excessive noise can also significantly affect the timing performance of Deep Sub-Micron (DSM) designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise, which can result in delay test overkill. Prior approaches to power-supply-noise-aware delay test compaction are too costly due to the many logic simulations they require, and are limited to static compaction. We propose a realistic, low-cost delay test compaction flow that guardbands the delay using a sequence of estimation metrics to keep the supply noise of the circuit under test closer to functional mode. This flow has been implemented in both static compaction and dynamic compaction. We analyzed the relationship between delay and voltage drop, and the relationship between effective weighted switching activity (WSA) and voltage drop. Based on these correlations, we introduce a low-cost delay test pattern compaction framework that considers power supply noise. Experimental results on ISCAS89 circuits show that our low-cost framework is up to ten times faster than the prior high-cost framework. Simulation results also verify that the low-cost model correctly guardbands every path's extra noise-induced delay. We discuss the rules for setting different constraints in the levelized framework. The veto process used in the compaction can also be applied to other constraints, such as power and temperature.
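
    As an illustration of the reordering idea (the per-pattern power numbers and group size below are made up, and this greedy rule is a sketch rather than the dissertation's exact algorithm): patterns are placed one at a time, always choosing the remaining pattern that keeps the current group's running power total closest to the ideal running total, so successive groups dissipate nearly the same power.

    def greedy_reorder(pattern_power, group_size):
        # Ideal total power per group if power were spread perfectly evenly.
        avg_group = sum(pattern_power.values()) * group_size / len(pattern_power)
        remaining = dict(pattern_power)
        order, group_total = [], 0.0
        while remaining:
            slot = len(order) % group_size
            target = avg_group * (slot + 1) / group_size        # ideal running total
            pick = min(remaining,
                       key=lambda p: abs(group_total + remaining[p] - target))
            group_total += remaining.pop(pick)
            order.append(pick)
            if slot == group_size - 1:                          # group complete
                group_total = 0.0
        return order

    powers = {"p1": 5.0, "p2": 1.0, "p3": 4.0, "p4": 2.0, "p5": 3.0, "p6": 3.0}
    print(greedy_reorder(powers, group_size=3))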

    Power supply noise in delay testing

    As technology scales into the Deep Sub-Micron (DSM) regime, circuit designs have become more and more sensitive to power supply noise. Excessive noise can significantly affect the timing performance of DSM designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise, which eventually results in delay test overkill. To reduce this overkill, we propose a low-cost, pattern-dependent approach to analyze the noise-induced delay variation for each delay test pattern applied to the design. Two noise models are proposed to address array-bond and wire-bond power supply networks, and they are experimentally validated and compared. A delay model is then applied to calculate path delay under noise. This analysis approach can be integrated into static test compaction or test fill tools to control the supply noise level of delay tests. We also propose an algorithm to predict the transition count of a circuit, which can be applied to control switching activity during dynamic compaction. Experiments have been performed on ISCAS89 benchmark circuits. Results show that compacted delay test patterns generated by our compaction tool can meet a moderate noise or delay constraint with only a small increase in compacted test set size. Taking the benchmark circuit s38417 as an example, a 10% delay-increase constraint results in only a 1.6% increase in compacted test set size in our experiments. In addition, different test fill techniques have a significant impact on path delay. In our work, a test fill tool with supply noise analysis has been developed to compare several test fill techniques, and the results show that the fill strategy significantly affects switching activity, power supply noise, and delay. For instance, patterns with minimum transition fill produce less noise-induced delay than those with random fill. Silicon results also show that test patterns filled in different ways can cause as much as 14% delay variation on target paths. In conclusion, noise must be taken into consideration when delay test patterns are generated.
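
    The difference between the two fill strategies mentioned above can be seen on a toy scan vector (the vector and transition-count proxy are illustrative): minimum-transition fill copies the last specified value into each don't-care position, while random fill assigns X-bits at random and typically produces more shift transitions, hence more switching and supply noise.

    import random

    def min_transition_fill(bits):            # bits: list of 0/1/None (None = X)
        filled, last = [], 0
        for b in bits:
            last = b if b is not None else last
            filled.append(last)
        return filled

    def random_fill(bits):
        return [b if b is not None else random.randint(0, 1) for b in bits]

    def transitions(bits):
        # Number of adjacent value changes while shifting the vector.
        return sum(a != b for a, b in zip(bits, bits[1:]))

    vec = [1, None, None, 0, None, None, None, 1, None, 0]
    print(transitions(min_transition_fill(vec)), transitions(random_fill(vec)))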

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. With the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature sizes, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. Because of this, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation). As a result, during at-speed testing the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during functional operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variation has become a major problem. Due to the variation in signalling delays caused by these variations, it is important to perform at-speed testing even for stuck-at faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, which was previously addressed in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck-at faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck-at faults is concerned, while some techniques have been proposed for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no corresponding techniques for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck-at faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to BTSP is novel, and is proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don’t cares in the test set increases. As a result, test vector ordering for an arbitrary filling of don’t-care bits is insufficient to produce an effective reduction in switching activity during testing of large circuits. Since don’t cares dominate the test sets for larger circuits, don’t-care filling plays a crucial role in reducing switching activity during testing. Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don’t-care bits in the test vectors, after which the don’t cares are filled in an intelligent fashion to minimize input switching activity, which in turn minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don’t cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for this problem. The proof of optimality of DP-fill in minimizing peak input-switching activity is also provided in this thesis.
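
    A toy illustration of the BTSP mapping, assuming (as in the completely specified case) that the cost between two consecutive vectors is their Hamming distance, a simple proxy for input switching activity: the ordering that minimizes the largest consecutive distance is exactly the bottleneck objective. Brute force is used here only because the example is tiny; the thesis maps the real problem to BTSP rather than enumerating orderings.

    from itertools import permutations

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def peak_switching(order):
        # Bottleneck objective: worst-case switching between consecutive vectors.
        return max(hamming(a, b) for a, b in zip(order, order[1:]))

    vectors = ["0000", "0111", "0011", "1111", "1000"]
    best = min(permutations(vectors), key=peak_switching)
    print(best, peak_switching(best))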