19 research outputs found

    Scalable Approach for Power Droop Reduction During Scan-Based Logic BIST

    The generation of significant power droop (PD) during at-speed test performed by Logic Built-In Self-Test (LBIST) is a serious concern for modern ICs. The PD generated during test may delay signal transitions of the circuit under test (CUT), an effect that may be erroneously recognized as a delay fault, with the consequent generation of false test fails and an increase in yield loss. In this paper, we propose a novel scalable approach to reduce PD during at-speed test of sequential circuits with scan-based LBIST using the launch-on-capture scheme. This is achieved by reducing the activity factor of the CUT through a proper modification of the test vectors generated by the LBIST of sequential ICs. Our scalable solution allows us to reduce PD to a value similar to that occurring during the CUT's in-field operation, without increasing the number of test vectors required to achieve a target fault coverage (FC). We present a hardware implementation of our approach that requires limited area overhead. Finally, we show that, compared with recent alternative solutions providing a similar PD reduction, our approach enables a significant reduction (by more than 50%) of the number of test vectors, and thus of the test time, needed to achieve a target FC.
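    As a rough illustration of the activity-factor metric this abstract relies on, the following Python sketch counts the fraction of scan flip-flops that toggle between a launch state and the corresponding capture response. The function name and the toggle-count definition of the activity factor are our assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): estimate the activity
# factor (AF) of a launch-on-capture test as the fraction of scan flip-flops
# whose value changes between the launch state and the capture response.

def activity_factor(launch_state: str, capture_state: str) -> float:
    """Fraction of flip-flops toggling between two scan states ('0'/'1' strings)."""
    assert len(launch_state) == len(capture_state)
    toggles = sum(a != b for a, b in zip(launch_state, capture_state))
    return toggles / len(launch_state)

# Example: a test toggling 3 of 8 flip-flops gives AF = 0.375.
print(activity_factor("01101001", "00111000"))
```

    Modifying the LBIST vectors so that fewer flip-flops toggle at launch/capture lowers this ratio, and with it the power droop.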

    Low-Cost and High-Reduction Approaches for Power Droop during Launch-On-Shift Scan-Based Logic BIST

    During at-speed test of high-performance sequential ICs using scan-based Logic BIST, the IC activity factor (AF) induced by the applied test vectors is significantly higher than that experienced during in-field operation. Consequently, power droop (PD) may take place during both the shift and the capture phases, slowing down the signal transitions of the circuit under test (CUT). At capture, this phenomenon is likely to be erroneously recognized as the effect of delay faults. As a result, false test fails may be generated, with a consequent increase in yield loss. In this paper, we propose two approaches to reduce the PD generated at capture during at-speed test of sequential circuits with scan-based Logic BIST using the launch-on-shift scheme. Both approaches increase the correlation between adjacent bits of the scan chains with respect to conventional scan-based LBIST. This way, the AF of the scan chains at capture is reduced; consequently, the AF of the CUT at capture, and thus the PD at capture, is also reduced compared to conventional scan-based LBIST. The first approach, hereinafter referred to as the Low-Cost Approach (LCA), enables a 50% reduction in the worst-case magnitude of PD with respect to conventional logic BIST. It requires a small area overhead (approximately 1.5% on average) and does not increase the number of test vectors needed to achieve the same fault coverage (FC) over conventional scan-based LBIST. Moreover, compared to three recent alternative solutions, LCA features a comparable AF in the scan chains at capture, while requiring lower test time and area overhead. The second approach, hereinafter referred to as the High-Reduction Approach (HRA), enables scalable PD reductions at capture of up to 87%, with limited additional costs in terms of area overhead and number of test vectors required for a given target FC over our LCA. In particular, compared to two of the three recent alternative solutions mentioned above, HRA enables a significantly lower AF in the scan chains during the application of test vectors, while requiring either a comparable area overhead or a significantly lower test time. Compared to the remaining alternative solution, HRA enables a similar AF in the scan chains at capture (approximately 90% lower than conventional scan-based LBIST), while requiring a significantly lower test time (on average, approximately 4.87 times fewer test vectors) and a comparable area overhead (approximately 1.9% on average).
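    To make the adjacent-bit-correlation idea concrete, here is a small Python sketch of our own (not the circuits or encodings from the paper): under launch-on-shift, the launch state is the scan state shifted by one position, so the number of flip-flops that toggle at launch equals the number of transitions between neighbouring scan-chain bits. Crudely increasing adjacent-bit correlation, here by repeating each pseudo-random bit twice, roughly halves those transitions.

```python
# Minimal sketch (illustrative only): adjacent-bit transitions in a scan chain
# approximate the scan-chain switching at launch under launch-on-shift.
import random

def adjacent_transitions(chain: list[int]) -> int:
    """Number of 0->1 / 1->0 transitions between neighbouring scan cells."""
    return sum(a != b for a, b in zip(chain, chain[1:]))

random.seed(0)
n = 64
conventional = [random.randint(0, 1) for _ in range(n)]          # pseudo-random fill
correlated = [b for b in conventional[: n // 2] for _ in (0, 1)]  # each bit repeated twice

print(adjacent_transitions(conventional))  # roughly n/2 transitions
print(adjacent_transitions(correlated))    # roughly half as many
```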

    Power Droop Reduction In Logic BIST By Scan Chain Reordering

    Significant peak power (PP), and thus power droop (PD), during test is a serious concern for modern, complex ICs. The PD generated during the application of test vectors may delay the signal transitions of the circuit under test. This event may be erroneously recognized as the presence of a delay fault, with the consequent generation of a false test fail, thus increasing yield loss. Several solutions have been proposed in the literature to reduce PD during test of combinational ICs, while fewer approaches exist for sequential ICs. In this paper, we propose a novel approach to reduce peak power/power droop during test of sequential circuits with scan-based Logic BIST. In particular, our approach reduces the switching activity of the scan chains between successive capture cycles. This is achieved by an original generation and arrangement of the test vectors. The proposed approach has a very low impact on fault coverage and test time.
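    For intuition on what "arrangement of test vectors" can buy, the Python sketch below uses a generic nearest-neighbour heuristic (our stand-in, not the paper's algorithm): ordering vectors so that consecutive ones differ in few bit positions lowers the scan-chain switching between successive capture cycles.

```python
# Minimal sketch (generic greedy heuristic, not the paper's method): order test
# vectors so consecutive vectors have small Hamming distance.

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def greedy_order(vectors: list[str]) -> list[str]:
    """Nearest-neighbour ordering of test vectors by Hamming distance."""
    remaining = vectors[:]
    ordered = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(ordered[-1], v))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

vecs = ["0000", "1111", "0001", "1110", "0011"]
print(greedy_order(vecs))  # ['0000', '0001', '0011', '1111', '1110']
```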

    A survey of scan-capture power reduction techniques

    With the advent of sub-nanometer geometries, integrated circuits (ICs) must be checked for newer defect types. While scan-based architectures help detect these defects using newer fault models, test data inflation occurs, increasing test time and test cost. An automatic test pattern generator (ATPG) exercises multiple fault sites simultaneously to reduce test data, which causes elevated switching activity during the capture cycle. This switching activity results in an IR drop exceeding the specification of the device under test (DUT). An increase in IR drop leads to failure of the patterns and may cause good DUTs to fail the test. The problem is severe during at-speed scan testing, which uses a high-frequency, functional-rated clock for the capture operation. Researchers have proposed several techniques to reduce capture power, using various methods including the reduction of switching activity. This paper reviews the recently proposed techniques. The principle, algorithm, and architecture used in each are discussed, along with key advantages and limitations. In addition, the paper classifies the techniques based on the method used and its application. The goal is to present a survey of the techniques and prepare a platform for future developments in capture-power reduction during scan testing.
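    Capture-power techniques surveyed here are usually compared through a switching-activity metric. As a rough, textbook-style illustration (not a metric defined by this survey), the Python sketch below computes a weighted switching activity for the capture cycle, weighting each toggling node by its fanout; the node names and weights are invented for the example.

```python
# Minimal sketch (illustrative metric, not from the survey): weighted switching
# activity (WSA) of the capture cycle, with each toggling node weighted by
# 1 + fanout as a crude proxy for its contribution to dynamic power / IR drop.

def capture_wsa(before: dict[str, int], after: dict[str, int],
                fanout: dict[str, int]) -> int:
    """Sum of (1 + fanout) over nodes whose logic value changes at capture."""
    return sum(1 + fanout[n] for n in before if before[n] != after[n])

before = {"g1": 0, "g2": 1, "g3": 1, "g4": 0}
after  = {"g1": 1, "g2": 1, "g3": 0, "g4": 0}
fanout = {"g1": 3, "g2": 1, "g3": 2, "g4": 5}
print(capture_wsa(before, after, fanout))  # (1+3) + (1+2) = 7
```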

    Test and Diagnosis of Integrated Circuits

    The ever-increasing growth of the semiconductor market results in an increasing complexity of digital circuits. Smaller, faster, cheaper, and lower-power devices are the main challenges in the semiconductor industry. The reduction of transistor size and the latest packaging technologies (e.g., System-on-a-Chip, System-in-Package, Through-Silicon-Via 3D Integrated Circuits) allow the semiconductor industry to meet these challenges. Although producing such advanced circuits benefits users, the manufacturing process is becoming finer and denser, making chips more prone to defects. The work presented in the HDR manuscript addresses the challenges of test and diagnosis of integrated circuits. It covers:
    - Power-aware test;
    - Test of low-power devices;
    - Fault diagnosis of digital circuits.

    Machine learning support for logic diagnosis


    Pseudofunctional Delay Tests For High Quality Small Delay Defect Testing

    Testing integrated circuits to verify their operating frequency, known as delay testing, is essential to achieve acceptable product quality. The high cost of functional testing has driven the industry to automatically generated structural tests, applied by low-cost testers that take advantage of design-for-test (DFT) circuitry on the chip. Traditional at-speed functional testing of digital circuits is increasingly challenged by new defect types and the high cost of functional test development. This research addressed the problem of accurate delay testing in deep-submicron (DSM) circuits by targeting resistive open and short circuits, while taking into account manufacturing process variation, power dissipation, and power supply noise. In this work, we developed a class of structural delay tests in which traditional launch-on-capture delay testing is extended with additional launch and capture cycles. We call these Pseudofunctional Tests (PFT). A test pattern is scanned into the circuit, and multiple functional clock cycles are then applied to it, with at-speed launch and capture for the last two cycles. The circuit switching activity over an extended period allows the off-chip power supply noise transient to die down prior to the at-speed launch and capture, achieving better timing correlation with the functional mode of operation. In addition, we also proposed advanced compaction methodologies to compact the generated test patterns into a smaller test set in order to reduce test application time. We modified our CodGen K-longest-paths-per-gate automatic test pattern generator to implement PFT pattern generation. Experimental results show that PFT test generation is practical in terms of test generation time.
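    The PFT clocking scheme described above can be pictured with a toy Python sketch: after scan-in, several slow functional cycles let the supply-noise transient settle, and only the last two cycles form the at-speed launch/capture pair. The next-state function here is an arbitrary stand-in for a real circuit's sequential behaviour, not anything from the thesis.

```python
# Minimal sketch (toy illustration of the PFT clocking sequence, not the
# authors' ATPG): slow functional cycles followed by at-speed launch/capture.

def next_state(state: int, width: int = 8) -> int:
    """Toy sequential behaviour: rotate left by one and invert the LSB."""
    rotated = ((state << 1) | (state >> (width - 1))) & ((1 << width) - 1)
    return rotated ^ 1

def pseudofunctional_test(scan_in: int, slow_cycles: int = 4) -> tuple[int, int]:
    state = scan_in                   # pattern loaded through the scan chain
    for _ in range(slow_cycles):      # slow functional cycles: noise dies down
        state = next_state(state)
    launch = next_state(state)        # at-speed launch
    capture = next_state(launch)      # at-speed capture; response is unloaded
    return launch, capture

print(pseudofunctional_test(0b1010_0110))
```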

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced in the past for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Low Cost Power and Supply Noise Estimation and Control in Scan Testing of VLSI Circuits

    Test power is an important issue in deep-submicron semiconductor testing. Excessive power supply noise and excessive power dissipation can result in a large temperature rise, both leading to overkill during delay test. Scan-based test has been widely adopted as one of the most commonly used VLSI testing methods. The test power during scan testing comprises shift power and capture power, and the power consumed in the shift cycle dominates the total power dissipation. It is crucial for IC manufacturing companies to achieve near-constant power consumption over a given timing window in order to keep the chip under test (CUT) at a near-constant temperature, which makes it easier to characterize the circuit behavior and prevents delay test overkill.

    To achieve constant test power, we first built a fast and accurate power model, which can estimate the shift power without logic simulation of the circuit. We also proposed an efficient, low-power X-bit filling process, which can potentially reduce both the shift power and the capture power. We then introduced an efficient test pattern reordering algorithm, which achieves near-constant power between groups of patterns; the number of patterns in a group is determined by the thermal constant of the chip. Experimental results show that our proposed power model has very good correlation, and that our X-fill process achieves both minimum shift power and minimum capture power. The algorithm supports multiple scan chains and can achieve constant power within different regions of the chip. The greedy test pattern reordering algorithm reduces the power variation from 29-126 percent to 8-10 percent, or even lower if the power variance threshold is reduced.

    Excessive noise can significantly affect the timing performance of deep-submicron (DSM) designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise, which can result in delay test overkill. Prior approaches to power-supply-noise-aware delay test compaction are too costly, because they require many logic simulations, and they are limited to static compaction. We proposed a realistic, low-cost delay test compaction flow that guardbands the delay using a sequence of estimation metrics to keep the supply noise of the circuit under test closer to functional mode. This flow has been implemented in both static and dynamic compaction. We analyzed the relationship between delay and voltage drop, and the relationship between effective weighted switching activity (WSA) and voltage drop. Based on these correlations, we introduce a low-cost delay test pattern compaction framework that considers power supply noise. Experimental results on ISCAS89 circuits show that our low-cost framework is up to ten times faster than the prior high-cost framework. Simulation results also verify that the low-cost model can correctly guardband each path's extra noise-induced delay. We discuss the rules for setting different constraints in the levelized framework. The veto process used in the compaction can also be applied to other constraints, such as power and temperature.
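    As a simple picture of what "near-constant power between groups of patterns" means, the Python sketch below uses a generic balancing heuristic of our own (not the thesis' reordering algorithm): patterns are taken in decreasing order of estimated shift power and assigned to the least-loaded group that still has room, so each fixed-size group draws roughly the same total power. The per-pattern power values are invented for the example.

```python
# Minimal sketch (generic load-balancing heuristic, not the thesis' algorithm):
# group test patterns so that each group of fixed size has a near-equal total
# estimated shift power.

def balance_groups(pattern_power: list[float], group_size: int) -> list[list[int]]:
    n_groups = -(-len(pattern_power) // group_size)          # ceiling division
    groups: list[list[int]] = [[] for _ in range(n_groups)]
    totals = [0.0] * n_groups
    for idx in sorted(range(len(pattern_power)),
                      key=lambda i: pattern_power[i], reverse=True):
        # assign to the least-loaded group that is not yet full
        g = min((k for k in range(n_groups) if len(groups[k]) < group_size),
                key=lambda k: totals[k])
        groups[g].append(idx)
        totals[g] += pattern_power[idx]
    return groups

power = [5.0, 1.0, 4.0, 2.0, 3.0, 3.0, 2.0, 4.0]   # illustrative per-pattern estimates
print(balance_groups(power, group_size=2))          # each group totals 6.0 here
```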

    Reliable Design of Three-Dimensional Integrated Circuits
