
    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. With the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature sizes, power density grows geometrically with technology scaling. Moreover, the power dissipated inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during normal functional operation. Consequently, the currents that flow in the power grid during testing are much higher than what the grid is designed for (the functional phase of operation), so during at-speed testing the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during functional operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Because these variations alter signal delays, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, addressed previously for at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent for at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. For at-speed testing of stuck faults, however, while some techniques have been proposed for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are none for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to BTSP is novel and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don’t cares in the test set increases. As a result, test vector ordering with an arbitrary filling of don’t care bits is insufficient to produce an effective reduction in switching activity for large circuits. Since don’t cares dominate the test sets of larger circuits, don’t care filling plays a crucial role in reducing switching activity during testing.
    Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don’t care bits in the test vectors; the don’t cares are then filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this, we propose an algorithm that uses Dynamic Programming to calculate a lower bound for a given sequence of test vectors and then uses a greedy strategy to fill the don’t cares in that sequence so as to achieve the lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for this problem. A proof of the optimality of DP-fill is also provided in this thesis.
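    To make the BTSP connection concrete, the sketch below (an illustration only, not the thesis's implementation; the toy test set and function names are invented for this example) treats each fully specified test vector as a city and the Hamming distance between vectors as the edge weight, so that the peak input switching activity of an ordering is its bottleneck cost. A tiny instance can be solved by exhaustive search:

```python
# Illustrative sketch: for a fully specified test set, the peak input switching
# activity of an ordering is the maximum Hamming distance between consecutive
# vectors, so minimizing it over all orderings is a Bottleneck TSP instance.
from itertools import permutations

def hamming(a, b):
    """Number of input bits that toggle between two fully specified vectors."""
    return sum(x != y for x, y in zip(a, b))

def peak_toggles(order, vectors):
    """Peak input switching activity: max Hamming distance between consecutive vectors."""
    return max(hamming(vectors[i], vectors[j]) for i, j in zip(order, order[1:]))

def bottleneck_order(vectors):
    """Exact bottleneck ordering by exhaustive search; feasible only for tiny test sets."""
    best = min(permutations(range(len(vectors))),
               key=lambda order: peak_toggles(order, vectors))
    return list(best), peak_toggles(best, vectors)

if __name__ == "__main__":
    test_set = ["0000", "1111", "0011", "1100", "0110"]  # toy, fully specified test vectors
    order, peak = bottleneck_order(test_set)
    print("ordering:", [test_set[i] for i in order], "peak toggles:", peak)
```

    Exhaustive search is only feasible for toy instances; the abstract's point is that the ordering problem itself is a BTSP, for which established solution techniques exist, and that incompletely specified test sets additionally require careful don’t care filling.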

    A survey of scan-capture power reduction techniques

    With the advent of sub-nanometer geometries, integrated circuits (ICs) must be checked for newer defects. While scan-based architectures help detect these defects using newer fault models, test data inflates, increasing test time and test cost. An automatic test pattern generator (ATPG) exercises multiple fault sites simultaneously to reduce test data, which causes elevated switching activity during the capture cycle. This switching activity results in an IR-drop exceeding the device under test (DUT) specification. An increase in IR-drop leads to failure of the patterns and may cause good DUTs to fail the test. The problem is severe during at-speed scan testing, which uses a high-frequency, functional-rated clock for the capture operation. Researchers have proposed several techniques to reduce capture power, using various methods including the reduction of switching activity. This paper reviews the recently proposed techniques. The principle, algorithm, and architecture used in each are discussed, along with key advantages and limitations. In addition, the paper classifies the techniques based on the method used and its application. The goal is to survey the techniques and prepare a platform for future development in capture power reduction during scan testing.
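    As a rough illustration of the quantity these techniques target (an assumption for exposition, not a metric taken from any surveyed paper), capture power is often estimated from the number of flip-flops that change value between the scanned-in stimulus and the captured response:

```python
# Minimal sketch: count flip-flop toggles in the capture cycle for a toy scan state.
def capture_toggles(scan_in_state, captured_response):
    """Count flip-flops whose value changes during the capture cycle."""
    assert len(scan_in_state) == len(captured_response)
    return sum(a != b for a, b in zip(scan_in_state, captured_response))

if __name__ == "__main__":
    stimulus = "1100101011110000"  # toy scan-in state of the flip-flops before capture
    response = "1000111011010010"  # toy values captured from the combinational logic
    print("capture-cycle toggles:", capture_toggles(stimulus, response))
```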

    ROBDD based path delay fault testable combinational circuit synthesis


    Analysis of Hardware Descriptions

    The design process for integrated circuits requires a lot of analysis of circuit descriptions. An important class of analyses determines how easy it will be to check whether a physical component suffers from any manufacturing errors. As circuit complexities grow rapidly, the problem of testing circuits also becomes increasingly difficult. This thesis explores the potential for analysing a recent high-level hardware description language called Ruby. In particular, we are interested in performing testability analyses of Ruby circuit descriptions. Ruby is amenable to algebraic manipulation, so we have sought transformations that improve testability while preserving behaviour. The analysis of Ruby descriptions is performed by adapting a technique called abstract interpretation, which has been used successfully to analyse functional programs. This technique is most applicable where the analysis to be captured operates over structures isomorphic to the structure of the circuit. Many digital systems analysis tools require the circuit description to be given in some special form. This can lead to inconsistency between representations and involves additional work converting between them. We propose using the original description medium, in this case Ruby, for performing analyses. A related technique, called non-standard interpretation, is shown to be very useful for capturing many circuit analyses. An implementation of a system that performs non-standard interpretation forms the central part of the work. It allows Ruby descriptions to be analysed under alternative interpretations such as test pattern generation and circuit layout. The system follows a similar approach to Boute's system semantics work and O'Donnell's work on Hydra, but we allow a larger class of interpretations to be captured and offer a richer description language. The implementation presented here is constructed to allow a large degree of code sharing between different analyses. Several analyses have been implemented, including simulation, test pattern generation, and circuit layout. Non-standard interpretation provides a good framework for implementing these analyses. A general model for making non-standard interpretations is presented. Combining forms that combine two interpretations to produce a new interpretation are also introduced. This allows complex circuit analyses to be decomposed in a modular manner into smaller circuit analyses which can be built independently.
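    The following sketch illustrates the general idea of non-standard interpretation in Python rather than Ruby (the class and function names are invented for this illustration): one circuit description is parameterized by the interpretation of its primitives, so the same description can be simulated or subjected to a layout-style analysis such as gate counting.

```python
class SimInterp:
    """Standard interpretation: simulate the circuit with Boolean values."""
    def and_(self, a, b): return a and b
    def or_(self, a, b):  return a or b
    def not_(self, a):    return not a

class CountInterp:
    """Non-standard interpretation: count gates instead of computing values."""
    def __init__(self): self.gates = 0
    def and_(self, a, b): self.gates += 1
    def or_(self, a, b):  self.gates += 1
    def not_(self, a):    self.gates += 1

def mux(interp, sel, a, b):
    """One circuit description, parameterized by the interpretation of its primitives."""
    return interp.or_(interp.and_(sel, a), interp.and_(interp.not_(sel), b))

if __name__ == "__main__":
    print("simulated output:", mux(SimInterp(), True, False, True))  # behaves like a 2:1 multiplexer
    counter = CountInterp()
    mux(counter, None, None, None)                                   # same description, different analysis
    print("gate count:", counter.gates)
```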

    Study of Single Event Transient Error Mitigation

    Single Event Transient (SET) errors in ground-level electronic devices are a growing concern in the radiation hardening field. However, effective SET mitigation techniques that satisfy ground-level demands, namely being generic, flexible, efficient, and fast, are limited. The classic Triple Modular Redundancy (TMR) method is the most well-known and popular technique in space and nuclear environments, but it incurs more than 200% area and power overhead, which is too costly for ground-level applications. Meanwhile, coding techniques are extensively used to suppress upset errors in storage cells, but the irregularity of combinational logic limits their use for SET mitigation. Therefore, SET mitigation techniques suitable for ground-level applications need to be addressed. Aware of these demands, this thesis proposes two novel approaches based on redundant wires and approximate logic. The Redundant Wire approach is a SET mitigation technique that, by selectively adding redundant wire connections, prevents targeted transient faults from propagating on the fly. This thesis proposes a set of signature-based evaluation equations to efficiently estimate the protection provided by each redundant wire candidate. Based on the estimates, a greedy algorithm repeatedly inserts the best candidate. Simulation results substantiate that the evaluation equations achieve up to 98% accuracy on average. The technique masks 18.4% of the faults with 4.3% area, 4.4% power, and 5.4% delay overhead on average; overall, the protection obtained is 2.8 times better than previous work. The impact of synthesis constraints and signature length is also discussed. Approximate Logic is a partial TMR technique offering a trade-off between fault coverage and area overhead. It consists of an under-approximate logic and an over-approximate logic: the under-approximate logic implements a subset of the original min-terms and the over-approximate logic a subset of the original max-terms. This thesis proposes a new algorithm for generating the two approximate logics. During generation, the algorithm considers the intrinsic failure probability of each gate and uses a confidence-interval estimate to minimize the required computation. The technique is applied to two fault models, stuck-at and SET, and the results are compared and discussed. They show that the technique can reduce the error by 75% with an area penalty of 46% on some circuits, and its delay overhead is always two additional levels of logic. Both proposed SET mitigation techniques apply to generic combinational logic with high flexibility, and simulations show promising SET mitigation capability. They provide designers with more choices when developing reliable combinational logic for ground-level applications.
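    The approximate-logic idea can be illustrated with the standard approximate-TMR construction (assumed here for illustration; the thesis's probability-guided generation algorithm is not reproduced, and the toy function and its approximations are invented): given an under-approximation u with u ≤ f and an over-approximation o with f ≤ o, a majority vote of (u, f, o) masks a transient error on f exactly on those inputs where u and o agree.

```python
# Sketch of approximate TMR on a toy function; counts how many input combinations
# mask an SET that flips f's output.
from itertools import product

def majority(a, b, c):
    """TMR voter."""
    return (a and b) or (a and c) or (b and c)

# Toy target function f(x0, x1, x2) = x0*x1 + x2, with hand-picked approximations.
f     = lambda x0, x1, x2: (x0 and x1) or x2
under = lambda x0, x1, x2: x0 and x1 and x2   # implements a subset of f's min-terms (u <= f)
over  = lambda x0, x1, x2: x0 or x2           # drops some of f's max-terms (f <= o)

masked = total = 0
for x in product([False, True], repeat=3):
    assert (not under(*x)) or f(*x)           # sanity: u(x) = 1 implies f(x) = 1
    assert (not f(*x)) or over(*x)            # sanity: f(x) = 1 implies o(x) = 1
    faulty = not f(*x)                        # a transient flips f's output on this input
    if majority(under(*x), faulty, over(*x)) == f(*x):
        masked += 1                           # the voter still produces the correct value
    total += 1

print(f"SET on f masked on {masked}/{total} input combinations")
```

    Tightening the approximations increases the fraction of inputs on which u and o agree, and hence the fault coverage, at the cost of more added area; this is the coverage/area trade-off the abstract describes.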

    Pseudo-functional testing: bridging the gap between manufacturing test and functional operation.

    Yuan, Feng. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 60-65). Abstract also in Chinese. Contents: Chapter 1, Introduction (Manufacturing Test: Functional Testing vs. Structural Testing, Fault Model, Automatic Test Pattern Generation, Design for Testability; Pseudo-Functional Manufacturing Test; Thesis Motivation and Organization). Chapter 2, On Systematic Illegal State Identification (Preliminaries and Motivation; What is the Root Cause of Illegal States?; Illegal State Identification Flow; Justification Scheme Construction; Experimental Results; Conclusion). Chapter 3, Compression-Aware Pseudo-Functional Testing (Motivation; Proposed Methodology; Pattern Generation: Circuit Pre-Processing, Pseudo-Functional Random Pattern Generation with Multi-Launch Cycles, Compressible Test Pattern Generation for Pseudo-Functional Testing; Experimental Results; Conclusion). Chapter 4, Conclusion and Future Work. Bibliography.

    Power Droop Reduction In Logic BIST By Scan Chain Reordering

    Significant peak power (PP), and thus power droop (PD), during test is a serious concern for modern, complex ICs. The PD originating during the application of test vectors may delay signal transitions in the circuit under test, and this may be erroneously recognized as the presence of a delay fault, generating a spurious test fail and thus increasing yield loss. Several solutions have been proposed in the literature to reduce PD during the test of combinational ICs, while fewer approaches exist for sequential ICs. In this paper, we propose a novel approach to reduce peak power and power droop during the test of sequential circuits with scan-based Logic BIST. In particular, our approach reduces the switching activity of the scan chains between successive capture cycles, achieved through an original generation and arrangement of test vectors. The proposed approach has a very low impact on fault coverage and test time.

    Optimal Don’t Care Filling for Minimizing Peak Toggles During At-Speed Stuck-At Testing

    Due to the increase in manufacturing/environmental uncertainties in the nanometer regime, testing digital chips under different operating conditions has become mandatory. Traditionally, stuck-at tests were applied at slow speed to detect structural defects, while transition fault tests were applied at speed to detect delay defects. Recently, it was shown that certain cell-internal defects can only be detected using at-speed stuck-at testing. Stuck-at test patterns are power hungry, causing excessive voltage droop on the power grid, delaying the test response, and ultimately leading to false delay failures on the tester. This motivates the need for peak power minimization during at-speed stuck-at testing. In this article, we use input toggle minimization as a means to minimize a circuit's power dissipation during at-speed stuck-at testing under the Combinational State Preservation scan (CSP-scan) Design-For-Testability (DFT) scheme. For circuits whose test sets are dominated by don’t cares, this article maps the problem of optimal X-filling for peak input toggle minimization to a variant of the interval coloring problem and proposes a Dynamic Programming (DP) algorithm (DP-fill) for it, along with a theoretical proof of its optimality. For circuits whose test sets are not dominated by don’t cares, we propose a max-scatter Hamiltonian path algorithm, which ensures that the don’t cares are evenly distributed in the final ordering of test cubes, leading to better input toggle savings than DP-fill. The proposed algorithms, when experimented on ITC99 benchmarks, produced peak power savings of up to 48% over the best-known algorithms in the literature. We have also refined the solutions thus obtained using Greedy and Simulated Annealing strategies with an iterative 1-bit neighborhood, to validate our idea of optimal input toggle minimization as an effective technique for minimizing peak power dissipation during at-speed stuck-at testing.
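    A minimal X-filling sketch is shown below (a simple adjacent-fill heuristic, assumed here purely for illustration; it is not the DP-fill or max-scatter algorithm of the article, and the toy test cubes are invented): each don't care is filled with the corresponding bit of the previously applied vector, so that it contributes no input toggle, and the peak adjacent Hamming distance is the quantity being minimized.

```python
def adjacent_fill(test_cubes):
    """Fill 'X' bits so each vector matches the previously applied vector where possible."""
    filled, prev = [], None
    for cube in test_cubes:
        if prev is None:
            vec = cube.replace("X", "0")       # arbitrary fill for the first vector
        else:
            vec = "".join(p if c == "X" else c for c, p in zip(cube, prev))
        filled.append(vec)
        prev = vec
    return filled

def peak_input_toggles(vectors):
    """Maximum Hamming distance between consecutive applied vectors."""
    return max(sum(a != b for a, b in zip(u, v)) for u, v in zip(vectors, vectors[1:]))

if __name__ == "__main__":
    cubes = ["1XX0", "X10X", "XX11", "0X1X"]   # toy test cubes with don't cares
    vectors = adjacent_fill(cubes)
    print(vectors, "peak input toggles:", peak_input_toggles(vectors))
```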

    From Simulation to Runtime Verification and Back: Connecting Single-Run Verification Techniques

    Modern safety-critical systems, such as aircraft and spacecraft, crucially depend on rigorous verification, from design time to runtime. Simulation is a highly developed, time-honored design-time verification technique, whereas runtime verification is a much younger outgrowth of modern complex systems, which both enable embedding analysis on board and require mission-time verification, e.g., for flight certification. While the attributes of simulation are well defined, the vocabulary of runtime verification is still being formed; both are active research areas needed to ensure safety and security. This invited paper explores the connections and differences between simulation and runtime verification and poses open research questions regarding how each might be used to advance past bottlenecks in the other. We unify their vocabulary, list their commonalities and contrasts, and examine how their artifacts may be connected to push the state of the art of what we can (safely) fly.