2,562 research outputs found

    Diagnosis of systematic defects based on design-for-manufacturability guidelines

    All products in the Very-Large-Scale-Integrated-Circuit (VLSIC) industry go through three major stages of production: design, verification, and manufacturing. Unfortunately, none of these stages is truly perfect, so two further sub-stages of manufacturing, namely testing and defect diagnosis, are needed to deal with imperfections in ICs. Testing generates test vectors to validate the functionality of the Device-Under-Test (DUT), and defect diagnosis is the process of identifying the root cause of a failing chip, i.e., the location and nature of the defect. Systematic defects are unintended structural and material changes at specific locations that have a higher probability of failure due to repeating manufacturing imperfections. While Design-For-Manufacturability (DFM) guidelines are not always applied, owing to limited resources such as circuit area and design time, enforcing these guidelines helps ensure sufficient product yield by preventing systematic defects. However, even if the DFM guidelines are strictly enforced, systematic defects may still occur, because complete information about the process and manufacturing is unavailable under shrinking time-to-market for chips.

    An earlier work used DFM guidelines as a basis for defect modeling and diagnostic test generation. Under this framework, a circuit is processed to identify layout locations that violate DFM rules. These coordinates are then mapped and translated to faults under different fault models, including stuck-at faults, bridging faults, and transition faults.

    The goal of this thesis is to perform systematic defect diagnosis and analyze the accuracy of diagnosis under the same DFM framework. Systematic defect candidates are generated from the DFM guidelines, and the resulting faultlist is used to perform diagnosis. Because defects are not always systematic, a new heuristic that dynamically switches between DFM and non-DFM faultlists has also been implemented, allowing the diagnosis flow to follow whichever option further improves its accuracy. The results demonstrate that the DFM framework can be used to improve the accuracy of diagnosis with minimal resource requirements.
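
    As a rough illustration of this flow, the Python sketch below maps DFM-violation nets to fault candidates under several fault models and falls back to a conventional (non-DFM) faultlist when the best DFM candidate explains too few failing patterns. This is a minimal sketch under assumptions: the data structures, the simulate callback, and the 0.5 switching threshold are illustrative, not the thesis's actual implementation.

        # Hypothetical sketch: map DFM-rule violations to fault candidates and
        # fall back to a conventional faultlist when the DFM candidates explain
        # the observed failures poorly. All names and the 0.5 threshold are
        # assumptions, not the thesis's implementation.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Fault:
            net: str
            model: str  # e.g. "stuck-at-0", "bridge", "transition"

        def dfm_faultlist(violations):
            """Translate DFM-rule violations (layout coordinates already
            mapped to nets) into fault candidates under several fault models."""
            return [Fault(net=v["net"], model=m)
                    for v in violations
                    for m in ("stuck-at-0", "stuck-at-1", "bridge", "transition")]

        def diagnose(faultlist, failing_patterns, simulate):
            """Rank candidates by how many failing patterns each explains;
            simulate(fault, pattern) -> True if the fault reproduces the failure."""
            scored = [(sum(simulate(f, p) for p in failing_patterns), f)
                      for f in faultlist]
            scored.sort(key=lambda s: s[0], reverse=True)
            return scored

        def diagnose_with_fallback(violations, full_faultlist,
                                   failing_patterns, simulate):
            """Heuristic: try the small DFM faultlist first, then switch to the
            full non-DFM faultlist if the best DFM candidate explains too little."""
            ranked = diagnose(dfm_faultlist(violations), failing_patterns, simulate)
            if not ranked or ranked[0][0] < 0.5 * len(failing_patterns):
                ranked = diagnose(full_faultlist, failing_patterns, simulate)
            return ranked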

    An efficient logic fault diagnosis framework based on effect-cause approach

    Fault diagnosis plays an important role in improving the circuit design process and the manufacturing yield. With the increasing number of gates in modern circuits, determining the source of failure in a defective circuit is becoming more and more challenging. In this research, we present an efficient effect-cause diagnosis framework for combinational VLSI circuits. The framework consists of three stages that together produce an accurate and reasonably precise diagnosis. First, an improved critical path tracing algorithm is proposed to identify an initial suspect list by backtracing from faulty primary outputs toward primary inputs; compared to the traditional critical path tracing approach, our algorithm is both faster and exact. Second, a novel probabilistic ranking model is applied to rank the suspects so that the most suspicious ones appear at or near the top, and several fast filtering methods are used to prune unrelated suspects. Finally, to refine the diagnosis, fault simulation is performed on the top suspect nets using several common fault models, and the difference between the observed faulty behavior and the simulated behavior is used to rank each suspect. Experimental results on ISCAS85 benchmark circuits show that this diagnosis approach is efficient in terms of both memory space and CPU time, and that the diagnosis results are accurate and reasonably precise.
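
    The backtracing idea in the first stage can be pictured with the minimal Python sketch below, which walks backward from failing primary outputs through a gate-level netlist to collect the initial suspect set. The netlist encoding is an assumption, and the unconditional fan-in traversal is a simplification: the actual critical path tracing algorithm additionally prunes fan-in nets that are not sensitized.

        # Illustrative effect-cause backtracing: starting from failing primary
        # outputs, walk backward through the netlist and collect every net that
        # could have contributed to the observed failure. Real critical path
        # tracing also checks sensitization to keep the suspect list small.
        def trace_suspects(netlist, failing_outputs):
            """netlist maps each net to the list of its fan-in nets; returns
            the set of suspect nets reachable from the failing outputs."""
            suspects, stack = set(), list(failing_outputs)
            while stack:
                net = stack.pop()
                if net in suspects:
                    continue
                suspects.add(net)
                stack.extend(netlist.get(net, []))  # primary inputs have no fan-in
            return suspects

        # Usage on a toy circuit y = (a AND b) OR c, with y observed failing:
        netlist = {"y": ["n1", "c"], "n1": ["a", "b"]}
        print(trace_suspects(netlist, ["y"]))  # {'y', 'n1', 'c', 'a', 'b'}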

    Logic synthesis and testing techniques for switching nano-crossbar arrays

    Beyond CMOS, new technologies are emerging to extend electronic systems with features unavailable to silicon-based devices. Emerging technologies provide new logic and interconnection structures for computation, storage, and communication that may require new design paradigms, and therefore trigger the development of a new generation of design automation tools. In the last decade, several emerging technologies have been proposed, and the time has come to study new ad-hoc techniques and tools for logic synthesis, physical design, and testing. The main goal of this project is to develop a complete synthesis and optimization methodology for switching nano-crossbar arrays that leads to the design and construction of an emerging nanocomputer. New models for diode-, FET-, and four-terminal-switch-based nanoarrays are developed. The proposed methodology implements logic, arithmetic, and memory elements while considering performance parameters such as area, delay, power dissipation, and reliability. By combining logic, arithmetic, and memory elements, a synchronous state machine (SSM), a representation of a computer, is realized. The proposed methodology targets a variety of emerging technologies, including nanowire/nanotube crossbar arrays, magnetic switch-based structures, and crossbar memories. The results of this project will form a foundation for nano-crossbar-based circuit design techniques and contribute greatly to the construction of emerging computers beyond CMOS. The topic of this project falls under the research area of "Emerging Computing Models" or "Computational Nanoelectronics", more specifically the design, modeling, and simulation of new nanoscale switches beyond CMOS.
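
    One common formulation of the four-terminal switch model treats the array as a switching lattice: every site holds a literal, a site conducts when its literal evaluates to 1, and the array computes 1 exactly when conducting sites connect the top edge to the bottom edge. The Python sketch below evaluates such a lattice; the 4-neighbor adjacency and the toy lattice realizing f = x1*x2 + x1*x3 are illustrative assumptions rather than the project's actual tool flow.

        # Minimal sketch of a four-terminal-switch (switching lattice) model:
        # the lattice outputs 1 iff a path of conducting sites joins the top
        # edge to the bottom edge under the given input assignment.
        from collections import deque

        def lattice_eval(lattice, assignment):
            """lattice: 2D list of literal names (prefix '~' for complement);
            assignment: dict mapping variable name -> 0 or 1."""
            rows, cols = len(lattice), len(lattice[0])
            def on(r, c):
                lit = lattice[r][c]
                val = assignment[lit.lstrip("~")]
                return val == 0 if lit.startswith("~") else val == 1
            # BFS from every conducting site on the top edge toward the bottom
            seen = {(0, c) for c in range(cols) if on(0, c)}
            queue = deque(seen)
            while queue:
                r, c = queue.popleft()
                if r == rows - 1:
                    return 1
                for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in seen and on(nr, nc)):
                        seen.add((nr, nc))
                        queue.append((nr, nc))
            return 0

        # Toy 2x2 lattice whose top-to-bottom paths give f = x1*x2 + x1*x3:
        lattice = [["x1", "x1"],
                   ["x2", "x3"]]
        print(lattice_eval(lattice, {"x1": 1, "x2": 0, "x3": 1}))  # 1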

    Developing a distributed electronic health-record store for India

    The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Machine learning support for logic diagnosis


    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault-tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance, but is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques.

    Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function.

    Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
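
    To make the yield/redundancy trade-off concrete, the Python sketch below evaluates a classic Poisson-style yield model: each unit is good with probability exp(-A*D) for unit area A and defect density D, and a chip with S spares works when at least N of its N+S units are good. This is a textbook illustration under assumed parameters, not the more detailed yield-reliability model developed in the thesis.

        # Classic Poisson-style yield model with unit-level redundancy; the
        # overhead_factor, which inflates unit area to account for test and
        # reconfiguration circuitry, and all numbers below are assumptions.
        from math import exp, comb

        def unit_yield(unit_area, defect_density):
            """Probability that one unit is defect-free (Poisson model)."""
            return exp(-unit_area * defect_density)

        def chip_yield(n_units, n_spares, unit_area, defect_density,
                       overhead_factor=1.0):
            """Probability that at least n_units of n_units + n_spares
            units are good, i.e., that the chip can be reconfigured."""
            y = unit_yield(unit_area * overhead_factor, defect_density)
            total = n_units + n_spares
            return sum(comb(total, k) * y**k * (1 - y)**(total - k)
                       for k in range(n_units, total + 1))

        # Example: 16 units of 0.05 cm^2 each, D = 1 defect/cm^2, with a 10%
        # per-unit area overhead for test and reconfiguration logic.
        print(chip_yield(16, 0, 0.05, 1.0))        # no redundancy
        print(chip_yield(16, 2, 0.05, 1.0, 1.1))   # 2 spares + overhead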

    Test and Diagnosis of Integrated Circuits

    The ever-increasing growth of the semiconductor market results in an increasing complexity of digital circuits. Smaller, faster, cheaper devices with lower power consumption are the main challenges in the semiconductor industry. The reduction of transistor size and the latest packaging technologies (e.g., System-on-a-Chip, System-in-Package, Through-Silicon-Via 3D Integrated Circuits) allow the semiconductor industry to meet these challenges. Although such advanced circuits benefit users, the manufacturing process is becoming finer and denser, making chips more prone to defects.

    The work presented in the HDR manuscript addresses the challenges of test and diagnosis of integrated circuits. It covers:
    - Power-aware test;
    - Test of low-power devices;
    - Fault diagnosis of digital circuits.