
    Variation aware analysis of bridging fault testing

    This paper investigates the impact of process variation on test quality with regard to resistive bridging faults. The input logic threshold voltage and gate drive strength parameters are analyzed for their process-variation-induced influence on test quality. The impact of process variation on test quality is studied in terms of test escapes and measured by a robustness metric. It is shown that some bridges are sensitive to process variation in terms of logic behavior, but such variation does not necessarily compromise test quality if the test has high robustness. Experimental results of Monte-Carlo simulation based on recent process variation statistics are presented for ISCAS-85 and ISCAS-89 benchmark circuits, using a 45 nm gate library and realistic bridges. The results show that tests generated without consideration of process variation are inadequate in terms of test quality, particularly for small test sets. On the other hand, larger test sets detect more of the logic faults introduced by process variation and have higher test quality.
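
    To make the robustness idea concrete, the following is a minimal Python sketch, not the paper's tool: it assumes an arbitrary voltage-divider model of a resistive bridge and a Gaussian input logic threshold, and estimates the robustness of a single test as the fraction of Monte-Carlo process samples in which the bridge is still detected (one minus the test-escape probability).

        # Illustrative Monte-Carlo sketch (assumed distributions and values).
        import random

        def bridge_node_voltage(r_bridge_ohm):
            # Assumed voltage divider between the two shorted drivers for a given
            # bridge resistance; a real flow would use analog characterization.
            v_dd = 1.1
            r_driver = 2000.0
            return v_dd * r_bridge_ohm / (r_bridge_ohm + 2 * r_driver)

        def detected(r_bridge_ohm, v_threshold):
            # The fault is observed as a logic error when the bridged node is read
            # below the (varying) input threshold of the gate it drives.
            return bridge_node_voltage(r_bridge_ohm) < v_threshold

        def robustness(r_bridge_ohm, n_samples=10000, vth_nominal=0.55, vth_sigma=0.05):
            # Assumed robustness metric: fraction of process samples in which the
            # test still detects the bridge, i.e. 1 - test-escape probability.
            hits = 0
            for _ in range(n_samples):
                vth = random.gauss(vth_nominal, vth_sigma)
                if detected(r_bridge_ohm, vth):
                    hits += 1
            return hits / n_samples

        print(robustness(1000.0))   # e.g. a 1 kOhm bridge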

    Layout level design for testability strategy applied to a CMOS cell library

    The layout level design for testability (LLDFT) rules used here make it possible to avoid some hard-to-detect or even undetectable faults in a cell library by modifying the cell layouts without changing their behavior, while achieving a good level of reliability. These rules avoid some open faults or reduce their probability of appearing. The main purpose has been to apply this set of LLDFT rules to the cells of the library designed at the Centre Nacional de Microelectrònica (CNM) in order to obtain a highly testable cell library. The authors summarize the main results (area overhead and performance degradation) of applying the LLDFT rules to the cell library.

    Testability enhancement of a basic set of CMOS cells

    Testing should be evaluated as the ability of the test patterns to cover realistic faults, and high-quality IC products demand high-quality testing. We use a test strategy based on physical design for testability, aimed at uncovering both open and short faults, which are difficult or even impossible to detect otherwise. Consequently, layout level design for testability (LLDFT) rules have been developed which prevent these faults, or at least reduce the probability of their appearance. The main purpose of this work is to apply a practical set of LLDFT rules to the library cells designed by the Centre Nacional de Microelectrònica (CNM) and obtain a highly testable cell library. The main results of applying the LLDFT rules (area overheads and performance degradation) are summarized. The results are significant because IC design is highly repetitive: a small effort to improve a cell layout can bring about a great improvement in the designs that use it.

    Algorithm-Directed Crash Consistence in Non-Volatile Memory for HPC

    Fault tolerance is one of the major design goals for HPC. The emergence of non-volatile memories (NVM) provides a solution for building fault-tolerant HPC systems: data in NVM-based main memory are not lost when the system crashes, because of the non-volatile nature of NVM. However, because of volatile caches, data must be logged and explicitly flushed from the caches into NVM to ensure consistency and correctness before a crash, which can cause large runtime overhead. In this paper, we introduce an algorithm-based method to establish crash consistency in NVM for HPC applications. We slightly extend application data structures or sparsely flush cache blocks, which introduces negligible runtime overhead. Such extension or cache flushing allows us to use algorithm knowledge to reason about data consistency, or to correct inconsistent data, when the application crashes. We demonstrate the effectiveness of our method for three algorithms: an iterative solver, dense matrix multiplication, and Monte-Carlo simulation. Based on a comprehensive performance evaluation on a variety of test environments, we demonstrate that our approach has very small runtime overhead (at most 8.2% and less than 3% in most cases), much smaller than that of traditional checkpointing, while having the same or lower recomputation cost.
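
    A minimal Python sketch of the algorithm-directed idea for the iterative-solver case (an illustration under assumed data, not the paper's implementation): a Jacobi iterate that is only partially flushed to NVM when a crash hits is still a valid initial guess, so the solver can simply resume from the inconsistent snapshot instead of replaying a log.

        import numpy as np

        def jacobi_step(A, b, x):
            D = np.diag(A)
            R = A - np.diagflat(D)
            return (b - R @ x) / D

        # Diagonally dominant system, so Jacobi converges (assumed toy problem).
        n = 8
        A = np.eye(n) * 4 + np.random.rand(n, n) * 0.1
        b = np.random.rand(n)

        x = np.zeros(n)
        for _ in range(5):
            x = jacobi_step(A, b, x)

        # Simulated crash: only half of the new iterate made it out of the caches,
        # so "NVM" holds a mix of old and new entries (an inconsistent snapshot).
        x_new = jacobi_step(A, b, x)
        x_nvm = x.copy()
        x_nvm[: n // 2] = x_new[: n // 2]

        # Recovery: restart from the inconsistent snapshot; algorithm knowledge says
        # any vector is an acceptable starting point, so correctness is preserved.
        x_rec = x_nvm
        for _ in range(50):
            x_rec = jacobi_step(A, b, x_rec)

        print("residual after recovery:", np.linalg.norm(A @ x_rec - b))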

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault tolerance for improved reliability.

    A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault tolerance; it is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function.

    Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question of the optimum size of a unit. A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
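
    As an illustration of the kind of yield-versus-unit-size trade-off the abstract raises, here is a generic, textbook-style Poisson-defect yield sketch in Python; the defect density, per-unit area overhead, and the at-least-n-units-good criterion are all assumptions for illustration, not the thesis's actual model.

        from math import exp, comb

        def unit_yield(unit_area_mm2, defect_density_per_mm2=0.01):
            # Probability that a single unit is defect-free under a Poisson model.
            return exp(-defect_density_per_mm2 * unit_area_mm2)

        def chip_yield(total_area_mm2, n_units, n_spares, overhead_per_unit_mm2=0.02):
            # A chip works if at least n_units of (n_units + n_spares) units are
            # good; test/reconfiguration logic adds an assumed area overhead per unit.
            unit_area = total_area_mm2 / n_units + overhead_per_unit_mm2
            y = unit_yield(unit_area)
            total = n_units + n_spares
            return sum(comb(total, k) * y**k * (1 - y)**(total - k)
                       for k in range(n_units, total + 1))

        for n_units in (4, 16, 64):
            print(n_units, "units, 2 spares:", round(chip_yield(100.0, n_units, 2), 3))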

    Test generation for current testing


    Deterministic and Probabilistic Test Generation for Binary and Ternary Quantum Circuits

    It is believed that quantum computing will begin to have an impact around the year 2010. Much work has been done on the physical realization and synthesis of quantum circuits, but nothing so far on the problem of generating tests and localizing faults for such circuits; even fault models for quantum circuits have not yet been formulated. We propose an approach to test generation for a wide category of fault models, covering single and multiple faults. It uses deterministic and probabilistic tests to detect faults. A fault table that includes probabilistic information is created. Where possible, deterministic tests are selected first when covering faults with tests, in order to shorten the total length of the test sequence. The method is applicable to both binary and ternary quantum circuits. The system generates test sequences and adaptive trees for fault localization for small binary and ternary quantum circuits.
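
    A minimal Python sketch of greedy test selection from a probabilistic fault table (the table, the confidence threshold, and the scoring rule are assumptions for illustration, not the paper's algorithm): deterministic tests are preferred first, and probabilistic tests are re-applied until each fault's cumulative detection probability reaches the required confidence.

        # fault_table[test][fault] = probability that `test` detects `fault`
        # (1.0 marks a deterministic test for that fault).
        fault_table = {
            "t1": {"f1": 1.0, "f2": 0.0, "f3": 0.5},
            "t2": {"f1": 0.0, "f2": 1.0, "f3": 0.0},
            "t3": {"f1": 0.5, "f2": 0.5, "f3": 0.5},
        }
        faults = {"f1", "f2", "f3"}
        confidence = 0.95            # required detection probability per fault

        def greedy_test_sequence(fault_table, faults, confidence):
            escape = {f: 1.0 for f in faults}      # current escape probability
            sequence = []
            while any(1.0 - p < confidence for p in escape.values()):
                def score(t):
                    # Prefer deterministic coverage of still-uncovered faults,
                    # then the largest expected reduction in escape probability.
                    det = sum(1 for f, p in fault_table[t].items()
                              if p == 1.0 and 1.0 - escape[f] < confidence)
                    gain = sum(escape[f] * p for f, p in fault_table[t].items()
                               if 1.0 - escape[f] < confidence)
                    return (det, gain)
                best = max(fault_table, key=score)
                if score(best)[1] == 0:            # no test helps any remaining fault
                    break
                sequence.append(best)
                for f, p in fault_table[best].items():
                    escape[f] *= (1.0 - p)         # repeated applications compound
            return sequence

        print(greedy_test_sequence(fault_table, faults, confidence))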

    FPGA Based Design for Accelerated Fault-testing of Integrated Circuits

    In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end users. The creation of these tests is time consuming, costly, and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid in fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, and this research uses that capability to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, a new method for organizing the most efficient tests: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing inputs.
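
    A minimal Python sketch of the mux-based fault-insertion idea (illustrative only, not the thesis's tool): each internal net of a tiny gate-level netlist can be overridden with a stuck-at value, the way a hardware 2-to-1 multiplexer would force it during emulation, and a test pattern detects a fault when the faulty output differs from the fault-free one.

        # netlist: gate name -> (function, input net names, output net name)
        netlist = {
            "g1": ("AND", ["a", "b"], "n1"),
            "g2": ("OR",  ["n1", "c"], "y"),
        }

        GATES = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

        def evaluate(netlist, inputs, fault=None):
            # fault = (net, stuck_value) models the mux selecting the forced value.
            nets = dict(inputs)
            for func, ins, out in netlist.values():   # gates listed in topological order
                val = GATES[func](*(nets[i] for i in ins))
                if fault and fault[0] == out:
                    val = fault[1]                     # mux selects the stuck-at value
                nets[out] = val
            return nets["y"]                           # single primary output in this toy circuit

        def detected_faults(netlist, test, all_faults):
            good = evaluate(netlist, test)
            return {f for f in all_faults if evaluate(netlist, test, f) != good}

        faults = [("n1", 0), ("n1", 1), ("y", 0), ("y", 1)]
        test = {"a": 1, "b": 1, "c": 0}
        print(detected_faults(netlist, test, faults))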