
    Binning for IC Quality: Experimental Studies on the SEMATECH Data

    The earlier, smaller bipolar study did not provide a large enough bin 0 population to directly observe test escapes and thereby estimate defect levels for the best bin. Results presented here indicate that the best bin can reasonably be expected to show a factor of 2 to 5 improvement in defect levels over the lot average for moderate to high yields (the overall yield for these experiments was approximately 65%). The experiments also confirm the dependence of best-bin quality on test transparency: the defect level improvement is poorer for the case of IDDQ escapes, where the applied tests had a much higher escape rate. Overall, the experimental results are consistent with analytical projections for typical values of the clustering parameter in [9]. The final version of this paper will include extensive analysis to validate the analytical models based on these data.
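
    The relationship between yield, test transparency and shipped defect level that this abstract appeals to can be illustrated with a standard model. The sketch below uses the classic Williams-Brown relation DL = 1 - Y^(1-T), not the clustering model of [9], and the fault-coverage values are purely hypothetical; it simply shows how defect level drops as test transparency improves at the ~65% lot yield reported above.

```python
def defect_level(yield_frac: float, fault_coverage: float) -> float:
    """Williams-Brown defect-level model: DL = 1 - Y**(1 - T)."""
    return 1.0 - yield_frac ** (1.0 - fault_coverage)

# Hypothetical numbers: the ~65% lot yield quoted above, and two made-up
# fault-coverage levels standing in for a "transparent" test set versus
# a leakier one (e.g. one prone to IDDQ escapes).
for coverage in (0.99, 0.95):
    dl_ppm = defect_level(0.65, coverage) * 1e6
    print(f"T = {coverage:.2f}  ->  DL ~ {dl_ppm:,.0f} ppm")
```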

    A low-speed BIST framework for high-performance circuit testing

    Testing high-performance integrated circuits is becoming an increasingly challenging task owing to their high clock frequencies. Testers are often unable to test such devices because of their limited high-frequency capabilities. In this article we outline a design-for-test methodology that allows high-performance devices to be tested on relatively low-performance testers. In addition, a BIST framework based on this methodology is discussed, and various implementation aspects of the technique are addressed.
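
    The abstract outlines rather than details the BIST framework. As a generic illustration only, not the authors' low-speed methodology, the sketch below shows the basic loop any BIST scheme builds on: an on-chip LFSR generates patterns and a MISR compacts the circuit's responses into a signature compared against a golden value. The tap masks, seed and toy circuit are all hypothetical.

```python
def lfsr_patterns(seed, taps, width, count):
    """Yield `count` pseudo-random patterns from a Fibonacci LFSR.

    The tap mask and seed used below are illustrative, not from the paper.
    """
    state = seed
    for _ in range(count):
        yield state
        feedback = bin(state & taps).count("1") & 1       # parity of tapped bits
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def misr_signature(responses, taps, width):
    """Compact a response stream into one signature (multiple-input LFSR)."""
    sig = 0
    for r in responses:
        feedback = bin(sig & taps).count("1") & 1
        sig = (((sig << 1) | feedback) ^ r) & ((1 << width) - 1)
    return sig

def toy_cut(pattern):
    """Stand-in for the circuit under test: any bitwise response function."""
    return (pattern ^ (pattern >> 3)) & 0xFF

patterns = list(lfsr_patterns(seed=0xACE1, taps=0xB400, width=16, count=64))
golden = misr_signature((toy_cut(p) for p in patterns), taps=0x8E, width=8)
print(f"golden signature: {golden:#04x}")                 # compare on re-test
```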

    Integrated circuit outlier identification by multiple parameter correlation

    Semiconductor manufacturers must ensure that chips conform to their specifications before they are shipped to customers. This is achieved by testing various parameters of a chip to determine whether it is defective. Separating defective chips from fault-free ones is relatively straightforward for functional or other Boolean tests that produce a go/no-go result. Making this distinction is extremely challenging for parametric tests, however: because parameters are continuously distributed, any pass/fail threshold incurs yield loss and/or test escapes. Continuing advances in process technology, increased process variation and inaccurate fault models all make this even harder. The pass/fail thresholds for such tests are usually set from prior experience or by a combination of visual inspection and engineering judgment. Many chips have parameters that exceed certain thresholds yet pass Boolean tests. Owing to the imperfect nature of tests, determining whether these chips (called "outliers") are indeed defective is nontrivial. To avoid wasted investment in packaging or further testing, it is important to screen defective chips early in a test flow. Moreover, if the seemingly strange behavior of outlier chips can be explained by certain process parameters or by correlating additional test data, such chips can be retained in the test flow until they are proven fatally flawed. In this research, we investigate several methods to distinguish true outliers (defective chips, or chips that lead to functional failure) from apparent outliers (seemingly defective but fault-free chips). The outlier identification methods in this research rely primarily on wafer-level spatial correlation, but also use additional test parameters. These methods are evaluated and validated using industrial test data, and their potential to reduce burn-in is discussed.
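
    One widely used wafer-level spatial screen of the kind this abstract describes is the nearest-neighbour residual: each die's parametric value is compared with the median of its neighbours, so a die that disagrees with its local neighbourhood stands out even when it passes the global limits. The Python sketch below is a minimal illustration of that idea, not necessarily the dissertation's exact method; the wafer map, planted outlier and flagging threshold are hypothetical.

```python
import numpy as np

def nnr_scores(wafer):
    """Nearest-neighbour residual for one parametric test on a wafer map.

    Each die's score is its value minus the median of its (up to 8)
    neighbours; NaN marks positions without a die. Dies whose scores sit
    far out in the lot's score distribution are candidate true outliers.
    """
    rows, cols = wafer.shape
    scores = np.full(wafer.shape, np.nan)
    for r in range(rows):
        for c in range(cols):
            if np.isnan(wafer[r, c]):
                continue
            block = wafer[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            vals = block[~np.isnan(block)]
            if vals.size > 1:
                # drop one occurrence of the centre die's own value
                vals = np.delete(vals, np.argmax(vals == wafer[r, c]))
                scores[r, c] = wafer[r, c] - np.median(vals)
    return scores

# Hypothetical 8x8 leakage map (mA) with one planted defect-like die.
rng = np.random.default_rng(0)
wafer = rng.normal(10.0, 0.2, size=(8, 8))
wafer[3, 4] += 2.5
scores = nnr_scores(wafer)
flagged = np.argwhere(np.abs(scores) > 5 * np.nanstd(scores))
print("flagged dies:", flagged.tolist())
```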

    A Low Speed BIST Framework for High Speed Circuit Testing

    Testing high-performance integrated circuits is becoming an increasingly challenging task owing to their high clock frequencies. Testers are often unable to test such devices because of their limited high-frequency capabilities. In this article we outline a design-for-test methodology that allows high-performance devices to be tested on relatively low-performance testers. In addition, a BIST framework based on this methodology is discussed, and various implementation aspects of the technique are addressed.

    SETEC/Semiconductor Manufacturing Technologies Program: 1999 Annual and Final Report


    An analysis of revenue and product planning for automatic test equipment manufacturers

    Thesis (S.M.), Massachusetts Institute of Technology, Sloan School of Management; and Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. By Paul D. DeCosta. Includes bibliographical references (p. 71).

    Prognostics and Health Management of Electronics by Utilizing Environmental and Usage Loads

    Prognostics and health management (PHM) is a method that permits the reliability of a system to be evaluated under its actual application conditions. By determining the advent of failure, procedures can be developed to mitigate, manage and maintain the system. Since electronic systems control most systems today, and their reliability is usually critical to overall system reliability, PHM techniques are needed for electronics.

    To enable prognostics, a methodology was developed to extract the load parameters required for damage assessment from irregular time-load data. As part of the methodology, an algorithm was developed that extracts cyclic ranges and means, ramp rates, dwell times, dwell loads and the correlations between load parameters. The algorithm enables significant reduction of the time-load data without compromising the features essential for damage estimation. The load parameters are stored in bins with an a priori calculated (optimal) bin width, and the binned data are then used with a Gaussian kernel function to estimate the density of each load parameter for use in damage assessment and prognostics. The method was shown to accurately extract the desired load parameters and to enable condensed storage of load histories, thus improving the resource efficiency of the sensor nodes.

    An approach was developed to assess the impact of uncertainties in measurements, model inputs and damage models on prognostics. The approach uses sensitivity analysis to identify the dominant input variables that influence the model output, and uses the distributions of the measured load parameters and input variables in a Monte Carlo simulation to provide a distribution of accumulated damage. Using regression analysis of the accumulated-damage distributions, the remaining life is then predicted with confidence intervals. The proposed method was demonstrated using an experimental setup for predicting interconnect failures on an electronic board subjected to field conditions.

    A failure-precursor-based approach was developed for remaining-life prognostics by analyzing resistance data in conjunction with usage temperature loads. Using the data from the PHM experiment, a model was developed to estimate resistance from measured temperature values. The differences between actual and estimated resistance values in the time domain were analyzed to predict the onset and progress of interconnect degradation. Remaining life was predicted by trending several features, including mean peaks, kurtosis, and 95% cumulative values of the resistance-drift distributions.
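
    The load-processing chain described above (condense the raw history into bins, estimate the load-parameter density with a Gaussian kernel, then run a Monte Carlo over accumulated damage) can be sketched compactly. The Python below is a hedged illustration only: the Freedman-Diaconis bin width, Silverman bandwidth, and Coffin-Manson-style damage model with constants C and m are stand-ins for whichever rules and models the dissertation actually uses, and the input cycle ranges are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for cyclic ranges (e.g. delta-T in K) extracted
# upstream by the cycle-counting step described in the abstract.
ranges = rng.gamma(shape=4.0, scale=5.0, size=2000)

# 1. Condense: histogram with the Freedman-Diaconis rule as one common
#    a priori bin width (the dissertation's optimal rule may differ).
q75, q25 = np.percentile(ranges, [75, 25])
width = 2 * (q75 - q25) / len(ranges) ** (1 / 3)
counts, edges = np.histogram(
    ranges, bins=np.arange(ranges.min(), ranges.max() + width, width))
centres = 0.5 * (edges[:-1] + edges[1:])
# From here on, only (centres, counts) need be stored on the sensor node.

# 2. Gaussian-kernel density of the load parameter, sampled directly:
#    pick a bin centre weighted by its count, then add kernel noise.
bandwidth = 1.06 * ranges.std() * len(ranges) ** -0.2     # Silverman's rule
picks = rng.choice(centres, size=(1000, 500), p=counts / counts.sum())
cycles = np.clip(picks + rng.normal(0.0, bandwidth, picks.shape), 0.1, None)

# 3. Monte Carlo damage: hypothetical Coffin-Manson-style life model
#    N_f = C * dT**(-m), so Miner's-rule damage per cycle is dT**m / C.
C, m = 1e7, 2.5                                           # illustrative constants
damage = (cycles ** m / C).sum(axis=1)                    # per simulated mission
print(f"median damage {np.median(damage):.3e}, "
      f"95th percentile {np.percentile(damage, 95):.3e}")
```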

    Self-healing concepts involving fine-grained redundancy for electronic systems

    The start of the digital revolution came with the metal-oxide-semiconductor field-effect transistor (MOSFET) in 1959, followed by massive integration onto a silicon die by means of constant down-scaling of individual components. Digital systems for certain applications require fault tolerance against faults caused by temporary or permanent influences. The most widely used technique is triple modular redundancy (TMR) in conjunction with a majority voter, which is regarded as a passive fault mitigation strategy. Design by functional resilience has been applied to circuit structures for increased fault tolerance and towards self-diagnostically triggered self-healing. The focus of this thesis is therefore to develop new design strategies for fault detection and mitigation at the transistor, gate and cell design levels. The research described in this thesis makes three contributions.

    The first contribution is based on adding fine-grained transistor-level redundancy to logic gates in order to achieve stuck-at fault tolerance. The objective is to realise maximum fault masking for a logic gate with a minimum of added redundant transistors. In the case of non-maskable stuck-at faults, the gate structure generates an intrinsic indication signal that is suitable for autonomous self-healing functions. As a result, logic circuitry utilising this design is able to differentiate between gate faults and faults occurring in inter-gate connections; this distinction between fault types can then be used to trigger selective self-healing responses.

    The second contribution is a logic matrix element that applies the three core redundancy concepts of spatial, temporal and data redundancy. This logic structure is composed of quad-modular redundant structures and is capable of selective fault masking and localisation depending on the fault type at the cell level; it is referred to as a spatiotemporal quadded logic cell (QLC) structure. The QLC structure is capable of cellular self-healing. Through the combination of fault-tolerance and masking logic features, the QLC achieves fault behaviour equal to that of existing quadded logic designs while using only 33.3% of the equivalent transistor resources. The inherent self-diagnosing feature of the QLC can identify individual faulty cells and trigger self-healing.

    The final contribution is the conversion of finite state machines (FSMs) into memory to achieve better state-transition timing, minimal memory utilisation and fault protection compared with common FSM designs. A novel implementation based on content-addressable memory (CAM) is used to achieve this. The FSM is further hardened by building the design out of the stuck-at-fault-resilient logic gates of the first contribution. By applying cross-data parity checking, the FSM gains single-bit fault detection and correction.
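
    For readers unfamiliar with the baseline the thesis improves on, the sketch below illustrates plain TMR with a majority voter, together with the kind of disagreement signal that self-diagnosis builds on. It is a conceptual software model only, not the transistor-level or QLC designs contributed by the thesis.

```python
def tmr_vote(a: int, b: int, c: int):
    """Bitwise 2-of-3 majority vote over three replica outputs.

    Returns (voted_value, disagreement): the vote masks a single faulty
    replica, while the disagreement flag plays the role of the
    self-diagnostic signal that can trigger a healing action.
    """
    voted = (a & b) | (b & c) | (a & c)
    return voted, not (a == b == c)

# One replica suffers a stuck-at-style bit flip: the output is still
# masked, and the flag reports that repair is needed.
good = 0b1011_0010
out, needs_heal = tmr_vote(good, good ^ 0b0000_1000, good)
print(f"voted = {out:#010b}, disagreement = {needs_heal}")
```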

    Fault simulation and test generation for small delay faults

    Delay faults are an increasingly important test challenge. Traditional delay fault models are incomplete in that they model only a subset of delay defect behaviors. To address this, a more realistic delay fault model has been developed that models delay faults caused by the combination of spot defects and parametric process variation. Based on the new model, a realistic delay fault coverage metric has been developed. Traditional path delay fault coverage metrics yield unrealistically low fault coverage that does not reflect real test quality; the new metric uses a statistical approach, and its simulation-based fault coverage is consistent with silicon data. Fast simulation algorithms are also included in this dissertation. The new metric suggests that testing the K longest paths per gate (KLPG) has a high detection probability for small delay faults under process variation. In this dissertation, a novel automatic test pattern generation (ATPG) methodology is presented that finds the K longest testable paths through each gate for both combinational and sequential circuits. Many techniques are used to reduce the search space and CPU time significantly. Experimental results show that this methodology is efficient and able to handle circuits with an exponential number of paths, such as the ISCAS85 benchmark circuit c6288. The ATPG methodology has been implemented on industrial designs; speed binning has been performed on many devices, and silicon data have shown a significant benefit of the KLPG test compared with several traditional delay test approaches.
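
    The structural core of a KLPG search, finding the K longest paths through each gate, can be illustrated on a delay-annotated DAG: propagate the K largest partial-path lengths forward from the inputs and backward from the outputs, then combine them at each gate. The Python sketch below shows only that graph kernel under hypothetical delays; a real KLPG ATPG, as in this dissertation, additionally prunes untestable paths and generates test patterns.

```python
import heapq
from collections import defaultdict

def k_longest_through_each_gate(edges, delay, k=3):
    """Return the k longest structural path lengths through every gate of a
    combinational DAG. A real KLPG ATPG keeps only *testable* paths and
    generates patterns; this sketch shows just the graph kernel.

    edges: (driver, load) pairs.  delay: gate -> delay (hypothetical units).
    """
    succ, pred, indeg = defaultdict(list), defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v); pred[v].append(u); indeg[v] += 1
    order = [g for g in delay if indeg[g] == 0]
    for g in order:                          # Kahn topological order
        for v in succ[g]:
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)

    arrive, depart = {}, {}
    for g in order:                          # k longest input-side partials
        cand = [p + delay[g] for u in pred[g] for p in arrive[u]] or [delay[g]]
        arrive[g] = heapq.nlargest(k, cand)
    for g in reversed(order):                # k longest output-side partials
        cand = [p + delay[g] for v in succ[g] for p in depart[v]] or [delay[g]]
        depart[g] = heapq.nlargest(k, cand)
    # a full path through g = input side + output side, g's delay counted once
    return {g: heapq.nlargest(k, (a + d - delay[g]
                                  for a in arrive[g] for d in depart[g]))
            for g in delay}

# Tiny hypothetical netlist: b feeds both c and d, a feeds c, c feeds d.
edges = [("a", "c"), ("b", "c"), ("c", "d"), ("b", "d")]
delay = {"a": 1.0, "b": 2.0, "c": 1.5, "d": 1.0}
for gate, paths in sorted(k_longest_through_each_gate(edges, delay).items()):
    print(gate, paths)    # e.g. c -> [4.5, 3.5]: paths b-c-d and a-c-d
```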

    Solid State Circuits Technologies

    The evolution of solid-state circuit technology has a long history within a relatively short period of time. This technology has led to the modern information society, connecting people and tools through a large market and many types of products and applications. Solid-state circuit technology continues to evolve through breakthroughs and improvements every year. This book is devoted to reviewing and presenting novel approaches to some of the main issues in this exciting and vigorous technology. The book is composed of 22 chapters written by authors from 30 institutions located in 12 countries throughout the Americas, Asia and Europe, reflecting the wide international contribution to the book. The broad range of subjects presented offers a general overview of the main issues in modern solid-state circuit technology, while also providing in-depth analyses of specific subjects for specialists. We believe the book is of great scientific and educational value for many readers. I am profoundly indebted to everyone involved in this work. First and foremost, I would like to acknowledge and thank the authors, who worked hard and generously agreed to share their results and knowledge. Second, I would like to express my gratitude to the Intech team, who invited me to edit the book and gave me their full support and a fruitful experience while we worked together to assemble this book.