
    Modelling and Test Generation for Crosstalk Faults in DSM Chips

    In the era of deep submicron (DSM) technology, many System-on-Chip (SoC) applications require components to operate at high clock speeds. With shrinking feature sizes and ever increasing clock frequencies, DSM technology has led to the well-known problem of Signal Integrity (SI), especially in the interconnect layout. The increasing aspect ratios of metal wires, together with the growing ratio of coupling capacitance to substrate capacitance, result in electrical coupling between interconnects, which leads to crosstalk problems. This thesis first presents the work carried out to model the crosstalk behaviour between an aggressor and a victim by considering the distributed RLGC parameters of the interconnect along with the coupling capacitance and mutual conductance between the two nets. The proposed model also incorporates linear RC models of the CMOS drivers and receivers. The behaviour of crosstalk in the presence of under-etching defects has been studied and modelled by distributing and approximating the defect behaviour along the nets. The model has then been extended to the case in which one victim is influenced by several aggressors, assuming all aggressors have a similar (worst-case) effect on the victim. In all the above cases, simulation experiments were carried out and compared with the well-known circuit simulator PSPICE. The proposed crosstalk model is shown to be faster, with results within a 10% error margin of the PSPICE simulations. Because of its accuracy and speed, the model is useful to both SoC designers and test engineers for analysing crosstalk behaviour. Each manufactured device must be tested thoroughly to ensure its functionality before delivery, so test pattern generation for crosstalk faults is also necessary. In this thesis, the well-known PODEM algorithm for stuck-at faults is extended to generate test patterns for crosstalk faults between a single aggressor and a single victim. To apply the modified PODEM to crosstalk faults, the transition behaviour is divided into two logic parts: before the transition and after the transition. After finding the required test patterns for each part individually, the generated logic vectors are appended to create transition test patterns for crosstalk faults. The developed algorithm is applied to several ISCAS 85 benchmark circuits, and the fault coverage is found to be excellent for most of them. Incorporating the proposed algorithm into ATPG tools will improve testing efficiency by generating test patterns for crosstalk faults in addition to conventional stuck-at faults. For crosstalk faults on a single victim due to multiple aggressors, however, the modified PODEM algorithm is found to be time consuming. The search capability of Genetic Algorithms (GAs) in finding the required combination of several input factors for an optimisation problem motivated their use for test pattern generation, since generating a test pattern is similarly a matter of finding the required vector among many possible input transitions. Initially, the GA is applied to generate test patterns for stuck-at faults, and the results are compared with the PODEM algorithm.
As the fault coverage is comparable to that of the deterministic PODEM algorithm, the GA developed for stuck-at faults is extended to find test patterns for crosstalk faults between a single aggressor and a single victim. The elitist GA is also applied to several ISCAS 85 benchmark circuits, and the algorithm is later extended to generate test patterns for worst-case crosstalk faults. The elitist GA developed in this thesis proves very useful for generating crosstalk test patterns, especially for multiple-aggressor, single-victim crosstalk faults.
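    The two-vector crosstalk tests and the elitist GA described above are both searches over pairs of input vectors. The sketch below is not the thesis algorithm; it only illustrates the general shape of an elitist GA over (v_before, v_after) pairs. The input width, GA parameters, and the `fitness` function are invented placeholders, the latter standing in for the fault simulation that would score how many targeted crosstalk faults a vector pair detects.

```python
import random

# Minimal elitist-GA sketch for crosstalk test-pattern search (illustrative only).
# A candidate is a pair of primary-input vectors (v_before, v_after); applying
# them back to back launches the transitions on the aggressor/victim nets.
N_INPUTS = 8          # assumed number of primary inputs
POP_SIZE = 40
GENERATIONS = 50
ELITE = 2             # elitism: the best individuals survive unchanged

def fitness(v_before, v_after):
    """Placeholder score; a real flow would fault-simulate the vector pair and
    return the number of targeted crosstalk faults it detects."""
    return random.random()

def random_vector():
    return tuple(random.randint(0, 1) for _ in range(N_INPUTS))

def crossover(a, b):
    cut = random.randrange(1, N_INPUTS)
    return a[:cut] + b[cut:]

def mutate(v, rate=0.05):
    return tuple(bit ^ (random.random() < rate) for bit in v)

population = [(random_vector(), random_vector()) for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=lambda p: fitness(*p), reverse=True)
    next_gen = scored[:ELITE]                                # keep the elite
    while len(next_gen) < POP_SIZE:
        p1, p2 = random.sample(scored[:POP_SIZE // 2], 2)    # truncation selection
        next_gen.append((mutate(crossover(p1[0], p2[0])),
                         mutate(crossover(p1[1], p2[1]))))
    population = next_gen

best = max(population, key=lambda p: fitness(*p))
print("best <v_before, v_after> pair:", best)
```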

    A Survey of Fault-Injection Methodologies for Soft Error Rate Modeling in Systems-on-Chips

    The development of process technology has increased system performance, but the probability of system failure has also increased significantly. It is important to consider system reliability in addition to cost, performance, and power consumption. In this paper, we describe the types of faults that occur in a system and where these faults originate. Then, fault-injection techniques, which are used to characterize the fault rate of a system-on-chip (SoC), are investigated to provide a guideline to SoC designers for the realization of resilient SoCs.
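    As a rough illustration of the simulation-based fault-injection techniques such surveys cover, the sketch below (not taken from the paper) flips one bit of a register in a toy behavioural model at a random cycle and compares the result with a golden run. The toy accumulator, the bit-flip model, and the campaign size are all invented for the example.

```python
import random

# Toy simulation-based fault-injection sketch (illustrative only).
# The "design" is a trivial accumulator; a single-event upset (SEU) is modelled
# as one bit flip in its state register at a random clock cycle.
WIDTH = 16
CYCLES = 100

def run(inputs, flip_cycle=None, flip_bit=None):
    acc = 0
    for cycle, value in enumerate(inputs):
        acc = (acc + value) & ((1 << WIDTH) - 1)     # register update each cycle
        if cycle == flip_cycle:
            acc ^= (1 << flip_bit)                   # inject the transient fault
    return acc

def injection_campaign(n_experiments=1000):
    inputs = [random.randrange(1 << WIDTH) for _ in range(CYCLES)]
    golden = run(inputs)                             # fault-free reference run
    failures = 0
    for _ in range(n_experiments):
        cycle = random.randrange(CYCLES)
        bit = random.randrange(WIDTH)
        if run(inputs, cycle, bit) != golden:        # observable error at the output
            failures += 1
    return failures / n_experiments                  # crude failure-rate estimate

print("fraction of injections causing an observable error:", injection_campaign())
```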

    Design-for-delay-testability techniques for high-speed digital circuits

    The importance of delay faults grows with the ever increasing clock rates and shrinking geometries of today's circuits. This thesis focuses on the development of Design-for-Delay-Testability (DfDT) techniques for high-speed circuits and embedded cores. The rising costs of IC testing, and in particular the cost of Automatic Test Equipment, are major concerns for the semiconductor industry. To reverse the trend of rising test costs, DfDT is becoming more and more important.

    Power supply noise in delay testing

    As technology scales into the Deep Sub-Micron (DSM) regime, circuit designs have become more and more sensitive to power supply noise. Excessive noise can significantly affect the timing performance of DSM designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise, which eventually results in delay test overkill. To reduce this overkill, we propose a low-cost pattern-dependent approach to analyze noise-induced delay variation for each delay test pattern applied to the design. Two noise models are proposed to address array-bond and wire-bond power supply networks, and they are experimentally validated and compared. A delay model is then applied to calculate path delay under noise. This analysis approach can be integrated into static test compaction or test fill tools to control the supply noise level of delay tests. We also propose an algorithm to predict the transition count of a circuit, which can be applied to control switching activity during dynamic compaction. Experiments have been performed on ISCAS89 benchmark circuits. Results show that compacted delay test patterns generated by our compaction tool can meet a moderate noise or delay constraint with only a small increase in compacted test set size. Taking the benchmark circuit s38417 as an example, a 10% delay increase constraint results in only a 1.6% increase in compacted test set size in our experiments. In addition, different test fill techniques have a significant impact on path delay. In our work, a test fill tool with supply noise analysis has been developed to compare several test fill techniques, and the results show that the test fill strategy significantly affects switching activity, power supply noise, and delay. For instance, patterns with minimum transition fill produce less noise-induced delay than random fill. Silicon results also show that test patterns filled in different ways can cause as much as 14% delay variation on target paths. In conclusion, noise must be taken into consideration when delay test patterns are generated.
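    To make the test-fill comparison concrete, the sketch below (not the thesis tool) fills the don't-care bits of a launch/capture vector pair two ways and counts the resulting input transitions, the switching-activity proxy that drives a supply-noise estimate. The vectors and the simple fill rules are invented for illustration.

```python
import random

# Illustrative sketch: random fill vs. minimum-transition fill of don't-care
# bits ('X') in a two-vector delay test, compared by input transition count.
def random_fill(v):
    return [b if b != 'X' else random.choice('01') for b in v]

def min_transition_fill(v1, v2):
    """Fill X bits so that corresponding bits of the two vectors match where possible."""
    f1 = [b1 if b1 != 'X' else (b2 if b2 != 'X' else '0') for b1, b2 in zip(v1, v2)]
    f2 = [b2 if b2 != 'X' else f1[i] for i, b2 in enumerate(v2)]
    return f1, f2

def transitions(f1, f2):
    return sum(a != b for a, b in zip(f1, f2))   # bits toggling between the vectors

v1, v2 = "1X0XX10X", "X10XX0X1"

r1, r2 = random_fill(v1), random_fill(v2)
m1, m2 = min_transition_fill(v1, v2)
print("random fill        :", transitions(r1, r2), "input transitions")
print("min-transition fill:", transitions(m1, m2), "input transitions")
```

    Fewer input transitions generally mean less simultaneous switching, and hence less supply noise, which is why minimum-transition fill tends to produce less noise-induced delay than random fill.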

    A novel low-swing voltage driver design and the analysis of its robustness to the effects of process variation and external disturbances

    Market forces are continually demanding devices with increased functionality per unit area; these demands have been satisfied through aggressive technology scaling which, unfortunately, has adversely impacted the global interconnect delay, subsequently reducing system performance. Line drivers have been used to mitigate the delay problem; however, they have a large power consumption. A solution to reducing the power dissipation of the drivers is to use lower supply voltages. However, by adopting a lower supply voltage, the performance of the line drivers for global interconnects is impaired unless low-swing signalling techniques are implemented. Low-swing signalling techniques can provide high-speed signalling with low power consumption and hence can be used to drive global on-chip interconnect. Most of the proposed low-swing signalling schemes are immune to noise as they have a good SNR. However, they tend to carry a large penalty in area and complexity, as they require additional circuitry such as voltage generators and low-Vth devices. Most of the schemes also incorporate multiple Vdd and reference voltages, which increases the overall circuit complexity. A diode-connected driver circuit has the best attributes among low-swing signalling techniques in terms of low power, low delay, good SNR and low area overhead. By incorporating a diode-connected configuration at the output, it can provide high-speed signalling due to its high driving capability. However, this configuration also has its limitations, as it has issues with its adaptability to process variations, as well as with leakage currents. To address these limitations, two novel driver schemes have been designed, namely nLVSD and mLVSD, which additionally offer improvements in performance and power consumption. Comparisons between the proposed schemes and the existing diode-connected driver circuits (MJ and DDC) showed that the nLVSD and mLVSD drivers have approximately 46% and 50% less delay. The name MJ originates from the driver's designer, Juan A. Montiel-Nelson, while DDC stands for dynamic diode-connected. In terms of power consumption, the nLVSD and mLVSD drivers also produce 43% and 7% improvements. Additionally, the mLVSD driver scheme is the most robust, as its SNR is 14% to 44% higher than that of the other diode-connected driver circuits. On the other hand, the nLVSD driver has 6% lower SNR compared to the MJ driver, even though it is 19% more robust than the DDC driver. However, since its SNR is still above 1, its improved performance and reduced power consumption, as well as its other advantages over the other diode-connected driver circuits, compensate for this limitation. Regarding robustness to external disturbances, the proposed driver circuits are more robust to crosstalk effects, as the nLVSD and mLVSD drivers are approximately 35% and 7% more robust than the other diode-connected drivers. Furthermore, the mLVSD driver is 5%, 33% and 47% more tolerant to SEUs compared to the nLVSD, MJ and DDC driver circuits respectively, whilst the MJ and DDC drivers are 26% and 40% less tolerant to SEUs compared to the nLVSD circuit. A comparison between the four schemes was also undertaken in the presence of ±3σ process and voltage (PV) variations. The analysis indicated that both proposed driver schemes are more robust than the other diode-connected driver schemes, namely the MJ and DDC driver circuits.
The MJ driver scheme deviates approximately 18% and 35% more in delay and power consumption compared to the proposed schemes, while the DDC driver shows approximately 20% and 57% more variation in delay and power consumption. To further improve the robustness of the proposed driver circuits against process variation and environmental disturbances, they were analysed to identify which process variables had the most impact on circuit delay and power consumption, and several design techniques were identified to mitigate problems with environmental disturbances. The most significant process parameters affecting circuit delay and power consumption were identified as Vdd, tox, Vth, s, w and t. The impact of SEUs on the circuit can be reduced by increasing the bias currents, whilst design methods such as increasing the interconnect spacing can help improve robustness against crosstalk. Overall, the proposed nLVSD and mLVSD circuits are considered to advance the state of the art in driver design for on-chip signalling applications.
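    The ±3σ PV-variation study described above is, in outline, a Monte Carlo sweep over sampled process parameters. The sketch below is only a schematic of that flow: the parameter list, nominal values, sigma, and the delay proxy are invented placeholders, whereas a real study would run a circuit simulation for each sample of the actual driver.

```python
import random
import statistics

# Illustrative Monte Carlo sketch of a +/-3-sigma process/voltage variation study
# (not the thesis flow). Delay model and nominal values are placeholders.
NOMINAL = {"Vdd": 1.0, "Vth": 0.35, "tox": 1.2e-9}   # assumed nominal values
SIGMA_FRAC = 0.03                                    # assume 1 sigma = 3% of nominal

def sample_params():
    """Draw each parameter from a Gaussian truncated at +/-3 sigma."""
    drawn = {}
    for name, nom in NOMINAL.items():
        sigma = SIGMA_FRAC * nom
        value = random.gauss(nom, sigma)
        drawn[name] = min(max(value, nom - 3 * sigma), nom + 3 * sigma)
    return drawn

def driver_delay(p):
    """Hypothetical alpha-power-law-style delay proxy, NOT the nLVSD/mLVSD model."""
    return 1.0 / ((p["Vdd"] - p["Vth"]) ** 1.3) * (p["tox"] / 1.2e-9)

delays = [driver_delay(sample_params()) for _ in range(10_000)]
mean = statistics.mean(delays)
spread = statistics.pstdev(delays)
print(f"mean delay (a.u.): {mean:.3f}, relative 3-sigma spread: {3 * spread / mean:.1%}")
```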

    Maximizing Crosstalk-Induced Slowdown During Path Delay Test

    Capacitive crosstalk between adjacent signal wires in integrated circuits may lead to noise or a speedup or slowdown in signal transitions. These in turn may lead to circuit failure or reduced operating speed. This thesis focuses on generating test patterns to induce crosstalk-induced signal delays, in order to determine whether the circuit can still meet its timing specification. A timing-driven test generator is developed to sensitize multiple aligned aggressors coupled to a delay-sensitive victim path to detect the combination of a delay spot defect and crosstalk-induced slowdown. The framework uses parasitic capacitance information, timing windows and crosstalk-induced delay estimates to screen out unaligned or ineffective aggressors coupled to a victim path, speeding up crosstalk pattern generation. In order to induce maximum crosstalk slowdown along a path, aggressors are prioritized based on their potential delay increase and timing alignment. The test generation engine introduces the concept of alignment-driven path sensitization to generate paths from inputs to coupled aggressor nets that meet timing alignment and direction requirements. By using path delay information obtained from circuit preprocessing, preferred paths can be chosen during aggressor path propagation. As the test generator sensitizes aggressors in the presence of victim path necessary assignments, the search space is effectively reduced for aggressor path generation. This helps in reducing the test generation time for aligned aggressors. In addition, two new crosstalk-driven dynamic test compaction algorithms are developed to control the increase in test pattern count. The proposed test generation algorithm is applied to ISCAS85 and ISCAS89 benchmark circuits. SPICE simulation results demonstrate the ability of the alignment-driven test generator to increase crosstalk-induced delays along victim paths.
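    The aggressor screening and prioritisation step can be pictured as a simple filter over coupled nets: keep only aggressors whose switching windows overlap the victim transition and whose transition direction can slow it down, then rank them by estimated delay impact. The sketch below is not the thesis engine; the data structures, timing windows and the coupling-times-alignment score are invented for illustration.

```python
from dataclasses import dataclass

# Illustrative aggressor screening/prioritisation sketch (values are invented).
@dataclass
class Aggressor:
    name: str
    coupling_cap_ff: float       # coupling capacitance to the victim net (fF)
    window: tuple                # (earliest, latest) switching time in ps
    rising: bool                 # transition direction on the aggressor

def overlap(w1, w2):
    """Length of overlap between two timing windows, in ps (0 if disjoint)."""
    return max(0.0, min(w1[1], w2[1]) - max(w1[0], w2[0]))

def prioritize(aggressors, victim_window, victim_rising):
    """Keep aggressors that can slow the victim: opposite-direction transitions
    with overlapping windows, ranked by a crude coupling * alignment score."""
    scored = []
    for a in aggressors:
        if a.rising == victim_rising:            # same direction tends to speed up, not slow down
            continue
        align = overlap(a.window, victim_window)
        if align > 0:
            scored.append((a.coupling_cap_ff * align, a))
    return [a for _, a in sorted(scored, key=lambda t: t[0], reverse=True)]

victim_window, victim_rising = (100.0, 180.0), True
aggressors = [
    Aggressor("n1", 12.0, (90.0, 150.0), rising=False),
    Aggressor("n2", 30.0, (200.0, 260.0), rising=False),   # unaligned: screened out
    Aggressor("n3",  8.0, (120.0, 190.0), rising=True),    # same direction: screened out
]
for a in prioritize(aggressors, victim_window, victim_rising):
    print("target aggressor:", a.name)
```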

    Machine learning support for logic diagnosis


    On Fault Tolerance Methods for Networks-on-Chip

    Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The decrease in reliability is a consequence of physical limitations, a relative increase in variations, and shrinking noise margins, among other factors. A promising solution for bringing circuit reliability back to the desired level is the use of design methods that introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are then studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel; error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault tolerant network topologies and routing algorithms; both approaches are presented in the thesis together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
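    As a generic example of the kind of link-level error control coding discussed above, the sketch below implements a standard Hamming(7,4) single-error-correcting code, which can mask a transient fault on one wire of a link. It is not necessarily one of the codes evaluated in the thesis; it only shows the encode/correct mechanism.

```python
# Standard Hamming(7,4) single-error-correcting code (generic illustration of
# link-level error control coding; not necessarily the thesis's chosen code).
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single flipped bit (e.g. a transient fault on one link wire)."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + (s2 << 1) + (s3 << 2)     # 1-based position of the erroneous bit
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1                  # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]           # recovered data bits

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[5] ^= 1                              # inject a transient fault on one wire
assert hamming74_decode(codeword) == data
print("corrected data:", hamming74_decode(codeword))
```

    Such codes are cheap enough for on-chip links but, as the abstract notes, they target transient faults; intermittent and permanent wire faults call for complementary measures such as spare wires or split transmissions.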