
    Modeling the Impact of Process Variation on Resistive Bridge Defects

    Recent research has shown that tests generated without taking process variation into account may lead to loss of test quality. At present there is no efficient device-level modeling technique that captures the effect of process variation on resistive bridges. This paper presents a fast and accurate technique to model the effect of process variation on resistive bridge defects. The proposed model is implemented in two stages: first, it employs an accurate transistor model (BSIM4) to calculate the critical resistance of a bridge; second, the effect of process variation is incorporated into this model through three transistor parameters: gate length (L), threshold voltage (Vth) and effective mobility (μeff), each of which follows a Gaussian distribution. Experiments are conducted on a 65-nm gate library (for illustration purposes), and results show that on average the proposed modeling technique is more than 7 times faster than HSPICE, with a worst-case error in bridge critical resistance of 0.8%.
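
    A rough Monte Carlo sketch of how the three Gaussian-distributed parameters propagate to the bridge critical resistance is given below. The function critical_resistance and all nominal values are hypothetical stand-ins chosen for illustration only, not the paper's BSIM4-based calculation.

    # Monte Carlo sketch: propagate Gaussian variation in gate length (L),
    # threshold voltage (Vth) and effective mobility (u_eff) to the critical
    # resistance of a bridge defect. Nominal values, spreads and the
    # critical_resistance model are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000  # Monte Carlo samples

    L     = rng.normal(65e-9, 65e-9 * 0.05 / 3, N)   # gate length [m]
    Vth   = rng.normal(0.30, 0.30 * 0.10 / 3, N)     # threshold voltage [V]
    u_eff = rng.normal(0.03, 0.03 * 0.08 / 3, N)     # effective mobility [m^2/Vs]

    def critical_resistance(L, Vth, u_eff, vdd=1.0):
        """Hypothetical stand-in: maps transistor parameters to the bridge
        resistance at which the driven gate's output crosses its logic threshold."""
        drive = u_eff / L * (vdd - Vth) ** 2   # crude drive-strength proxy
        return 1e9 / drive                     # arbitrary scaling for illustration

    R_crit = critical_resistance(L, Vth, u_eff)
    print(f"mean R_crit = {R_crit.mean():.0f} ohm, std = {R_crit.std():.0f} ohm")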

    Variation Analysis, Fault Modeling and Yield Improvement of Emerging Spintronic Memories


    Fault modelling and accelerated simulation of integrated circuits manufacturing defects under process variation

    As the silicon manufacturing process scales to and beyond the 65-nm node, process variation can no longer be ignored. The impact of process variation on integrated circuit performance and power has received significant research input. Variation-aware test, on the other hand, is a relatively new research area that is currently receiving attention worldwide. Research has shown that test without considering process variation may lead to loss of test quality. Fault modelling and simulation serve as a backbone of manufacturing test. This thesis is concerned with developing efficient fault modelling techniques and simulation methodologies that take into account the effect of process variation on manufacturing defects, with particular emphasis on resistive bridges and resistive opens. The first contribution of this thesis addresses the long computation time required to generate logic faults of resistive bridges under process variation by developing a fast and accurate technique to model the logic fault behaviour of resistive bridges. The new technique employs two efficient voltage calculation algorithms to calculate the logic threshold voltage of driven gates and the critical resistance of a fault site, enabling the computation of bridge logic faults without using SPICE. Simulation results show that the technique is fast (on average 53 times faster) and accurate (worst-case error of 2.64%) when compared with HSPICE. The second contribution analyses the complexity of delay fault simulation of resistive bridges to reduce the computation time of delay fault simulation under process variation. An accelerated delay fault simulation methodology for resistive bridges is developed by employing a three-step strategy to speed up the calculation of the transient gate output voltage, which is needed to accurately compute delay faults. Simulation results show that the methodology is on average 17.4 times faster, with 5.2% error in accuracy, when compared with HSPICE. The final contribution presents an accelerated simulation methodology for resistive opens to address the long simulation time of delay fault simulation under process variation. The methodology uses two efficient algorithms to accelerate the computation of the transient gate output voltage and the timing-critical resistance of an open fault site. Simulation results show that the methodology is on average up to 52 times faster than HSPICE, with 4.2% error in accuracy.
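
    As a toy illustration of the modelling idea (not the thesis's own algorithm), the sketch below treats the critical resistance of a bridge fault site as a distribution over circuit instances under process variation and estimates the probability that a bridge of a given resistance produces a logic fault; the distribution parameters are assumed values.

    # Toy sketch: probability of a logic fault as the fraction of circuit
    # instances whose critical resistance exceeds the bridge resistance.
    import numpy as np

    rng = np.random.default_rng(1)
    R_crit_samples = rng.normal(3.0e3, 0.4e3, 50_000)  # ohms, assumed distribution

    def logic_fault_probability(r_bridge, r_crit_samples):
        """A bridge behaves as a logic fault in the instances where its
        resistance lies below the fault site's critical resistance."""
        return float(np.mean(r_bridge < r_crit_samples))

    for r_b in (1e3, 2.5e3, 3.5e3, 5e3):
        print(f"R_bridge = {r_b:6.0f} ohm -> P(logic fault) = "
              f"{logic_fault_probability(r_b, R_crit_samples):.3f}")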

    Efficient algorithms for fundamental statistical timing analysis problems in delay test applications of VLSI circuits

    Tremendous advances in semiconductor process technology are creating new challenges for the delay test of today’s digital VLSI circuits. The complexity of state-of-the-art manufacturing processes not only leads to greater process variability, it also makes today’s integrated circuits more prone to defects such as resistive shorts and opens. As a consequence, some of the manufactured circuits do not meet the timing requirements set by the design specification. These circuits must be identified by delay testing and sorted out to ensure the quality of shipped products. Due to the increasing process variability, key transistor and interconnect parameters must be modelled as random variables. These random variables capture the uncertainty caused by process variability, but also the impact of modelling errors and of variations in the operating conditions of the circuits, such as the temperature or the supply voltage. The important consequence for delay testing is that a particular delay test detects a delay fault of fixed size in only a subset of all manufactured circuits, which inevitably leads to the shipment of defective products. Although this problem is well understood, today’s delay test generation methods are unable to account for the distortion of delay test results caused by process variability. To analyse and predict the effectiveness of delay tests in a population of circuits that are functionally identical but have varying timing properties, statistical timing analysis is necessary. Although the large runtime of statistical timing analysis is a well-known problem, little progress has been made in developing efficient statistical timing analysis algorithms for variability-aware delay test generation and delay fault simulation. This dissertation proposes novel and efficient statistical timing analysis algorithms for variability-aware delay test generation and delay fault simulation in the presence of large delay variations. For the detection of path delay faults, a novel probabilistic sensitization analysis is presented which analyses the impact of process variations on the sensitization of the target paths. Furthermore, an efficient method for approximating the probability of detecting small delay faults is presented. Beyond that, efficient statistical SUM and MAX operations are proposed, which provide the fundamental basis of block-based statistical timing analysis. The experimental results demonstrate the high efficiency of the proposed algorithms.
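
    The statistical SUM and MAX primitives mentioned above can be sketched as follows, using Clark's classic moment-matching approximation for the MAX of two Gaussian arrival times; this is a generic textbook formulation and not necessarily the operators developed in the dissertation.

    # Statistical SUM and MAX of Gaussian delays (Clark's approximation for MAX).
    from math import sqrt, exp, pi, erf

    def pdf(x):   # standard normal density
        return exp(-0.5 * x * x) / sqrt(2.0 * pi)

    def cdf(x):   # standard normal distribution function
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def stat_sum(m1, s1, m2, s2, rho=0.0):
        """SUM of two (possibly correlated) Gaussian delays; exact for Gaussians."""
        return m1 + m2, sqrt(s1 * s1 + s2 * s2 + 2.0 * rho * s1 * s2)

    def stat_max(m1, s1, m2, s2, rho=0.0):
        """MAX of two Gaussian arrival times, matched to its first two moments."""
        theta = sqrt(s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2)
        if theta == 0.0:                       # identical, fully correlated inputs
            return max(m1, m2), s1
        a = (m1 - m2) / theta
        mean = m1 * cdf(a) + m2 * cdf(-a) + theta * pdf(a)
        second = ((m1 * m1 + s1 * s1) * cdf(a)
                  + (m2 * m2 + s2 * s2) * cdf(-a)
                  + (m1 + m2) * theta * pdf(a))
        return mean, sqrt(max(second - mean * mean, 0.0))

    # Example: propagate an arrival time through a gate, then merge two paths.
    m, s = stat_sum(1.0, 0.10, 0.45, 0.05)
    print(stat_max(m, s, 1.35, 0.12))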

    Multi-level simulation of nano-electronic digital circuits on GPUs

    Simulation of circuits and faults is an essential part of design and test validation tasks for contemporary nano-electronic digital integrated CMOS circuits. Shrinking technology processes with smaller feature sizes and strict performance and reliability requirements demand not only detailed validation of the functional properties of a design, but also accurate validation of non-functional aspects including the timing behavior. However, due to the rising complexity of circuit behavior and the steady growth of designs with respect to transistor count, timing-accurate simulation of current designs requires a computational effort that can only be handled by proper abstraction and a high degree of parallelization. This work presents a simulation model for scalable and accurate timing simulation of digital circuits on data-parallel graphics processing unit (GPU) accelerators. By providing compact modeling and data structures as well as by exploiting multiple dimensions of parallelism, the simulation model enables not only fast and timing-accurate simulation at logic level, but also massively parallel simulation with switch-level accuracy. The model facilitates extensions for fast and efficient fault simulation of small delay faults at logic level, as well as first-order parametric and parasitic faults at switch level. With the parallelization on GPUs, detailed and scalable simulation is enabled that is applicable even to multi-million-gate designs. This way, comprehensive analyses of realistic timing-related faults in the presence of process and parameter variations are enabled for the first time. Additional simulation efficiency is achieved by merging the presented methods into a unified simulation model that combines the unique advantages of the different levels of abstraction in a mixed-abstraction multi-level simulation flow to reach even higher speedups. Experimental results show that the implemented parallel approach achieves unprecedented simulation throughput as well as high speedup compared to conventional timing simulators. The underlying model scales to multi-million-gate designs and gives detailed insights into the timing behavior of digital CMOS circuits, thereby enabling large-scale applications that aid even highly complex design and test validation tasks.
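
    Loosely illustrating one of the dimensions of parallelism described above, the numpy sketch below evaluates the same gate for all test patterns and all Monte Carlo circuit instances at once; on a GPU this regular, data-parallel work would be distributed over many threads. The gate model and all numbers are assumptions for illustration, not the work's actual simulation kernels.

    # CPU stand-in for data-parallel timing evaluation of one gate across
    # many patterns and many Monte Carlo instances simultaneously.
    import numpy as np

    rng = np.random.default_rng(2)
    n_patterns, n_instances = 64, 1024

    # Arrival times at the two inputs of a NAND gate, per pattern and instance.
    a = rng.uniform(0.0, 1.0, (n_patterns, n_instances))
    b = rng.uniform(0.0, 1.0, (n_patterns, n_instances))

    # Per-instance gate delay, drawn once per Monte Carlo sample of the process.
    gate_delay = rng.normal(0.2, 0.02, (1, n_instances))

    # Output arrival time for all patterns x instances in one vector operation.
    out_arrival = np.maximum(a, b) + gate_delay
    print(out_arrival.shape, float(out_arrival.mean()))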

    Variation-Aware Fault Modeling

    To achieve high product quality for nano-scale systems, both realistic defect mechanisms and process variations must be taken into account. While existing approaches for variation-aware digital testing either restrict themselves to special classes of defects or assume given probability distributions to model variability, the proposed approach combines defect-oriented testing with statistical library characterization. It uses Monte Carlo simulations at electrical level to extract delay distributions of cells in the presence of defects and for the defect-free case. This allows the effects of process variations on the cell delay to be distinguished from defect-induced cell delays under process variations. To provide a suitable interface for test algorithms at higher levels of abstraction, the distributions are represented as histograms and stored in a histogram database (HDB). Thus, the computationally expensive defect analysis needs to be performed only once, as a preprocessing step for library characterization, and statistical test algorithms do not require any low-level information beyond the HDB. The generation of the HDB is demonstrated for primitive cells in a 45-nm technology.
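
    A minimal sketch of the histogram database (HDB) idea is given below, with synthetic delay samples standing in for the electrical-level Monte Carlo runs: per-cell delay distributions are reduced to normalized histograms once, and higher-level statistical test algorithms then work purely on the stored histograms. Cell names, bin ranges and sample distributions are assumptions.

    # Build a toy HDB: normalized delay histograms per (cell, defect) pair.
    import numpy as np

    rng = np.random.default_rng(3)
    BINS = np.linspace(0.0, 2.0, 41)   # common delay axis [ns], assumed

    def characterize(samples):
        """Reduce raw Monte Carlo delay samples to a normalized histogram."""
        counts, _ = np.histogram(samples, bins=BINS)
        return counts / counts.sum()

    hdb = {
        ("NAND2_X1", "defect-free"): characterize(rng.normal(0.50, 0.05, 10_000)),
        ("NAND2_X1", "bridge_2k"):   characterize(rng.normal(0.80, 0.09, 10_000)),
    }

    # Higher-level test algorithms only read the histograms, e.g. to compare
    # defective against defect-free delay populations.
    print(hdb[("NAND2_X1", "bridge_2k")].sum())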

    Massive statistical process variations: A grand challenge for testing nanoelectronic circuits

    Increasing parameter variations, high defect densities and a growing susceptibility to external noise in nanoscale technologies have led to a paradigm shift in design. Classical design strategies based on worst-case or average assumptions have been replaced by statistical design, and new robust and variation-tolerant architectures have been developed. At the same time, testing has become extremely challenging, as parameter variations may lead to unacceptable behavior or change the impact of defects. Furthermore, for robust designs a precise quality assessment is required, particularly one showing the remaining robustness in the presence of manufacturing defects. The paper pinpoints the key challenges for testing nanoelectronic circuits in more detail, covering the range from variation-aware fault modeling, via methods for statistical testing and their algorithmic foundations, to robustness analysis and quality binning.