
    Multi-Cycle at Speed Test

    In this research, we focus on the development of an algorithm that generates a minimal number of patterns for path delay test of integrated circuits using multi-cycle at-speed test. We test the circuits in functional mode, where multiple functional cycles follow the test pattern scan-in operation. This approach increases the delay correlation between the scan and functional tests, due to more functionally realistic power supply noise. We use multiple at-speed cycles to compact K-longest-paths-per-gate tests, which reduces the number of scan patterns. After a path is generated, we try to place it in the first pattern in the pattern pool. If the path does not fit due to conflicts, we attempt to place it in later functional cycles. This compaction approach retains the greedy nature of the original dynamic compaction algorithm, in that it stops as soon as the path fits into a pattern. If the path cannot be compacted into any of the functional cycles of the patterns in the pool, we generate a new pattern. In this method, each path delay test is compared to the at-speed patterns in the pool. The challenge is that an at-speed delay test in a given at-speed cycle must have its necessary value assignments set up in the previous (preamble) cycles, and have the captured results propagated to a scan cell in the later (coda) cycles. For instance, if we consider three at-speed (capture) cycles after the scan-in operation, and we need to place a fault in the first capture cycle, then we must generate it with two propagation cycles. In this case, we treat these propagation cycles as coda cycles, so the algorithm attempts to select the most observable path through them. Likewise, if we place the path test in the second capture cycle, we need one preamble cycle and one coda cycle, and if we place the path test in the third capture cycle, we require two preamble cycles and no coda cycles.
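    Read as pseudocode, the greedy pool-placement loop described above can be sketched as follows. The types and functions (`Path`, `Pattern`, `fits_in_cycle`, `place`) are hypothetical stand-ins for the real ATPG data structures, not the authors' code:

```cpp
#include <vector>

// Hypothetical stand-ins for the real ATPG data structures.
struct Path {
    // necessary value assignments, launch/capture cycle info, ...
};

struct Pattern {
    int num_capture_cycles = 3;  // at-speed cycles after scan-in

    // True if the path's necessary assignments can be merged into this
    // pattern with its test placed in capture cycle `cycle` (earlier
    // cycles act as preamble, later ones as coda). Stubbed here.
    bool fits_in_cycle(const Path&, int /*cycle*/) const { return false; }
    void place(const Path&, int /*cycle*/) {}
};

// Greedy dynamic compaction: try every pattern in the pool and, within a
// pattern, every capture cycle, stopping at the first fit; otherwise
// open a new pattern for the path.
void compact(const Path& p, std::vector<Pattern>& pool) {
    for (auto& pat : pool)
        for (int c = 0; c < pat.num_capture_cycles; ++c)
            if (pat.fits_in_cycle(p, c)) {
                pat.place(p, c);
                return;  // greedy: stop at the first fit
            }
    pool.emplace_back();            // no fit anywhere: new pattern
    pool.back().place(p, /*cycle=*/0);
}
```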

    A comprehensive comparison between design for testability techniques for total dose testing of flash-based FPGAs

    Radiation sources exist in many of the environments where electronic devices operate, and radiation usually affects correct device operation negatively. The resultant effect manifests in several forms depending on the operating environment of the device, such as the total ionizing dose (TID) effect, or single event effects (SEEs) such as single event upset (SEU), single event gate rupture (SEGR), and single event latch-up (SEL). CMOS circuits and floating-gate MOS circuits suffer from an increase in delay and leakage current due to the TID effect, which may disrupt the proper operation of the integrated circuit. Exhaustive testing is needed for devices operating in harsh conditions, such as space and military applications, to ensure correct operation in the worst circumstances. The use of worst case test vectors (WCTVs) for testing is strongly recommended by MIL-STD-883, method 1019, the standard describing the procedure for testing electronic devices under radiation. However, the difficulty of generating these test vectors hinders their use in radiation testing. Testing digital circuits in industry is nowadays usually done using design for testability (DFT) techniques, as they are very mature and can be relied on. DFT techniques include, but are not limited to, ad-hoc techniques, built-in self test (BIST), muxed-D scan, clocked scan, and enhanced scan. DFT is usually used with automatic test pattern generation (ATPG) software to generate test vectors for application specific integrated circuits (ASICs), especially sequential circuits, against faults such as stuck-at faults and path delay faults. Despite all these recommendations for DFT, radiation testing has not benefited from this reliable technology yet. Also, with the large variation in DFT techniques, choosing the right technique is the bottleneck in achieving the best results for TID testing. In this thesis, a comprehensive comparison between different DFT techniques for TID testing of flash-based FPGAs is made to help designers choose the DFT technique best suited to their application. The comparison covers the muxed-D scan, clocked scan, and enhanced scan techniques, and is carried out using ISCAS-89 benchmark circuits. Points of comparison include FPGA resource utilization, difficulty of design bring-up, delay added by the DFT logic, and robustly testable paths in each technique.

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. With the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, the power dissipation of a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. Consequently, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation). As a result, during at-speed testing, the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variation has become a major problem. Due to the variation in signalling delays caused by these variations, it is important to perform at-speed testing even for stuck-at faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, previously addressed in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck-at faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck-at faults is concerned, while some techniques have been proposed for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no techniques addressing the same problem for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck-at faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to BTSP is novel, and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering for an arbitrary filling of don't care bits is insufficient to produce an effective reduction in switching activity during testing of large circuits. Since don't cares dominate the test sets of larger circuits, don't care filling plays a crucial role in reducing switching activity during testing. Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don't care bits in the test vectors, after which the don't cares are filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly minimizes peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching-activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling the don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching-activity and is the best known in the literature for this problem. The proof of optimality of DP-fill is also provided in this thesis.
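    To make the role of don't-care filling concrete, here is a minimal sketch of the simplest fill strategy of this kind: each 'X' copies the value the same input held in the previous vector, so that bit contributes no input transition. This illustrates the general idea behind fill-based switching reduction only; it is not DP-fill, whose lower-bound computation is the thesis's contribution:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Greedy adjacent-matching fill over an already-ordered test sequence.
// 'X' bits in the first vector default to '0'; every later 'X' copies
// the (already filled) value of the same position in the previous vector.
std::vector<std::string> fill_dont_cares(std::vector<std::string> tests) {
    for (std::size_t v = 0; v < tests.size(); ++v)
        for (std::size_t i = 0; i < tests[v].size(); ++i)
            if (tests[v][i] == 'X')
                tests[v][i] = (v == 0) ? '0' : tests[v - 1][i];
    return tests;
}

// Peak input switching activity = maximum Hamming distance between
// consecutive vectors after filling.
int peak_input_switching(const std::vector<std::string>& tests) {
    int peak = 0;
    for (std::size_t v = 1; v < tests.size(); ++v) {
        int d = 0;
        for (std::size_t i = 0; i < tests[v].size(); ++i)
            d += (tests[v][i] != tests[v - 1][i]);
        peak = std::max(peak, d);
    }
    return peak;
}
```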

    Identifying worst case test vectors for FPGA exposed to total ionization dose using design for testability techniques

    Electronic devices often operate in harsh environments that contain a variety of radiation sources. Radiation may cause various kinds of damage to the proper operation of these devices. Radiation sources can be found in terrestrial environments, in extra-terrestrial environments such as space, or in man-made sources such as nuclear reactors, biomedical devices, and the equipment of high-energy particle physics experiments. Depending on the operating environment of the device, the resultant effect manifests in several forms, such as the total ionizing dose (TID) effect, or single event effects (SEEs) such as single event upset (SEU), single event gate rupture (SEGR), and single event latch-up (SEL). The TID effect causes an increase in the delay and leakage current of CMOS circuits, which may disrupt the proper operation of the integrated circuit. To ensure proper operation of these devices under radiation, thorough testing must be performed, especially in critical applications such as space and military applications. Although the standard describing the procedure for testing electronic devices under radiation emphasizes the use of worst case test vectors (WCTVs), these vectors are never used in radiation testing due to the difficulty of generating them for the circuits under test. For decades, design for testability (DFT) has been the best choice for test engineers testing digital circuits in industry, and it has become a very mature technology that can be relied on. DFT is usually used with automatic test pattern generation (ATPG) software to generate test vectors for application specific integrated circuits (ASICs), especially sequential circuits, against faults such as stuck-at faults and path delay faults. Surprisingly, however, radiation testing has not yet made use of this reliable technology. In this thesis, a novel methodology is proposed to extend the use of DFT to generate WCTVs for delay failures in flash-based field programmable gate arrays (FPGAs) exposed to total ionizing dose. The methodology is validated using a Microsemi ProASIC3 FPGA and a cobalt-60 facility.

    Observability Driven Path Generation for Delay Test

    This research describes an approach to path generation using an observability metric for delay test. K Longest Paths Per Gate (KLPG) tests are generated for sequential circuits. A transition launched from a scan flip-flop (SFF) is captured into another SFF during at-speed clock cycles, that is, clock cycles at the rated design speed. The generated path is a 'longest path' suitable for delay test. The path generation algorithm then uses the observability of the fan-out gates in the consecutive, lower-speed clock cycles, known as coda cycles, to generate paths ending at an SFF that captures the transition from the at-speed cycles. For a given clocking scheme defined by the number of coda cycles, if the final flip-flop is not scan-enabled, the path generation algorithm attempts to generate a different path that ends at an SFF located in a different branch of the circuit fan-out, as indicated by lower observability. The paths generated over multiple cycles are sequentially justified using Boolean satisfiability. The observability metric optimizes path generation in the coda cycles by always attempting to grow the path through the branch with the best observability and never generating a path that ends at a non-scan flip-flop. The algorithm has been developed in C++. The experiments have been performed on an Intel Core i7 machine with 64 GB RAM. Various ISCAS benchmark circuits have been used with various KLPG configurations for code evaluation. Combinations of the values of K [1, 2, 3, 4, 5] and the number of coda cycles [1, 2, 3] have been used to characterize the implementation. A sublinear rise in run time has been observed with increasing K values. The total number of tested paths rises with K and falls with the number of coda cycles, due to the increasing number of constraints on the path, particularly the fixed inputs.
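    The branch-selection rule in the coda cycles can be sketched as follows. The `Branch` record and its fields are hypothetical simplifications; the real algorithm also backtracks and justifies the chosen path with Boolean satisfiability:

```cpp
#include <limits>
#include <vector>

// Hypothetical fan-out branch record; a lower observability cost means
// the value is easier to propagate to a scan cell (SCOAP-style convention).
struct Branch {
    int observability_cost;  // lower = more observable
    bool reaches_scan_ff;    // can a scan flip-flop be reached this way?
};

// In each coda cycle, grow the path through the most observable fan-out
// branch that can still terminate at a scan flip-flop, so the path never
// ends at a non-scan flip-flop.
const Branch* pick_branch(const std::vector<Branch>& fanout) {
    const Branch* best = nullptr;
    int best_cost = std::numeric_limits<int>::max();
    for (const Branch& b : fanout) {
        if (b.reaches_scan_ff && b.observability_cost < best_cost) {
            best_cost = b.observability_cost;
            best = &b;
        }
    }
    return best;  // nullptr if no branch can reach a scan flip-flop
}
```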

    Improved Path Recovery in Pseudo Functional Path Delay Test Using Extended Value Algebra

    Scan-based delay test achieves high fault coverage due to its improved controllability and observability. This is particularly important for our K Longest Paths Per Gate (KLPG) test approach, which has additional necessary assignments on the paths. At the same time, some percentage of the flip-flops in the circuit are not scanned, increasing the difficulty of test generation. In particular, there is no direct control over the outputs of those non-scan cells. All the non-scan cells that cannot be initialized are considered 'uncontrollable' in the test generation process. They behave like 'black boxes' and, thus, may block a potential path propagation, resulting in path delay test coverage loss. It is common for the timing-critical paths in a circuit to pass through nodes influenced by the non-scan cells. In our work, we have extended traditional Boolean algebra by including the 'uncontrolled' state as a legal logic state, so that we can improve path coverage. Many path pruning decisions can be made much earlier, and many of the paths lost to uncontrollable non-scan cells can be recovered, increasing path coverage and potentially reducing the average CPU time per path. We have extended the traditional algebra to an 11-value algebra: Zero (stable), One (stable), Unknown, Uncontrollable, Rise, Fall, Zero/Uncontrollable, One/Uncontrollable, Unknown/Uncontrollable, Rise/Uncontrollable, and Fall/Uncontrollable. The logic descriptions for the NOT, AND, NAND, OR, NOR, XOR, XNOR, PI, Buff, Mux, TSL, TSH, TSLI, TSHI, TIE1 and TIE0 cells in the ISCAS89 benchmark circuits have been extended to the 11-value truth table. With 10% non-scan flip-flops, improved path delay fault coverage has been observed in comparison with the traditional algebra. The greater the number of long paths we want to test, the greater the path recovery advantage we achieve using our algebra. Along with improved path recovery, we have been able to test a greater number of transition fault sites. In most cases, the average CPU time per path is also lower when using the 11-value algebra. The number of tested paths increased by an average of 1.9x for robust tests and 2.2x for non-robust tests, for K=5 (the five longest rising and five longest falling transition paths through each line in the circuit), using the 11-value algebra in contrast to the traditional algebra. The transition fault coverage increased by an average of 70%. The improvement increased with higher K values. The CPU time using the extended algebra increased by an average of 20%, so the CPU time per path decreased by an average of 40%. In future work, the extended algebra could achieve better test coverage for memory-intensive circuits, circuits with logic black boxes, third-party IPs, and analog units.
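    To illustrate how such an algebra lets pruning decisions resolve early, here is a hedged sketch of a two-input AND over a simplified six-value subset. The five combined values are folded into Uncontrollable for brevity, and the entries follow generic controlling-value reasoning rather than the thesis's exact truth tables. The key gain: a controlling 0 resolves the output even when the other input comes from a non-scan cell:

```cpp
#include <cstdint>

// Illustrative encoding of the core values only; the full algebra keeps
// Zero/Uncontrollable, ..., Fall/Uncontrollable as distinct values.
enum class Val : std::uint8_t { Zero, One, Unknown, Uncontrollable, Rise, Fall };

Val and2(Val a, Val b) {
    if (a == Val::Zero || b == Val::Zero)
        return Val::Zero;              // controlling 0 wins, even vs. Uncontrollable
    if (a == Val::One) return b;       // 1 is the AND identity
    if (b == Val::One) return a;
    if (a == b) return a;              // Rise&Rise, Fall&Fall, X&X, U&U
    if ((a == Val::Rise && b == Val::Fall) || (a == Val::Fall && b == Val::Rise))
        return Val::Zero;              // 0->1 AND 1->0 = stable 0
    if (a == Val::Unknown || b == Val::Unknown)
        return Val::Unknown;
    return Val::Uncontrollable;        // mixes involving an uncontrollable input;
                                       // the full algebra refines these into
                                       // Rise/Uncontrollable etc.
}
```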

    Design of On-Chip Self-Testing Signature Register

    Over the last few years, scan test has turned out to be too expensive to implement for industry-standard designs due to increasing test data volume and test time. The test cost of a chip is mainly governed by the resource utilization of automatic test equipment (ATE). It also depends directly on the test time, which includes the time required to load the test program, apply the test vectors, and analyze the generated test response of the chip. The issues of test time and data volume increasingly push designers to use on-chip test data compactors, on the input side, the output side, or both. Such techniques significantly address the former issues, but have little hold over the growing number of inputs-outputs under test mode. Further, the number of test pins on the DUT is increasing over the generations, so the scan channels on the test floor are falling short in number for the placement of such ICs. To address the issues discussed above, we introduce an on-chip self-testing signature register. It comprises a response compactor and a comparator. The compactor compacts a large chunk of response data into a small test signature, while the comparator compares this test signature with the desired one. The overall test result for the design is generated on a single output pin. Since no storage of the test response is required, a considerable reduction in ATE memory can be observed. Also, with only a single pin to be monitored for the test result, the number of tester channels and compare edges on the ATE side is significantly reduced at the end of the test. This cuts down the maintenance and usage cost of the test floor and increases its lifetime. Furthermore, the reduction in test pins gives DFT engineers scope to increase the number of scan chains and thus further reduce test time.
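    A behavioral sketch of the compactor-plus-comparator idea follows, using a multiple-input signature register (MISR) as the response compactor. The 16-bit width and feedback polynomial are illustrative choices, not taken from the paper:

```cpp
#include <cstdint>

// A MISR folds the response stream into a signature; an on-chip
// comparator checks it against the expected one, so the ATE monitors a
// single pass/fail pin and stores no response data.
class SignatureRegister {
    std::uint16_t state_ = 0;
    static constexpr std::uint16_t kTaps = 0xB400u;  // example LFSR feedback taps
public:
    // Fold one response word into the signature each test cycle.
    void compact(std::uint16_t response) {
        std::uint16_t lsb = state_ & 1u;
        state_ >>= 1;
        if (lsb) state_ ^= kTaps;  // LFSR feedback
        state_ ^= response;        // multiple-input XOR
    }
    // Comparator stage: one pass/fail bit on a single output pin.
    bool pass(std::uint16_t expected) const { return state_ == expected; }
};
```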

    Heuristics Based Test Overhead Reduction Techniques in VLSI Circuits

    The electronics industry has evolved at a mind-boggling pace over the last five decades. Moore's Law [1] has enabled chip makers to push the limits of physics to shrink the feature sizes on silicon (Si) wafers over the years. A constant push for power-performance-area (PPA) optimization has driven the trend toward higher transistor density. The defect density in advanced process nodes has posed a challenge to achieving sustainable yield. Maintaining a low defects-per-million (DPM) target for a product to be viable under stringent time-to-market (TTM) constraints has become one of the most important aspects of the chip manufacturing process. Design-for-test (DFT) plays an instrumental role in enabling low DPM; however, DFT impacts the PPA of a chip. This research describes an approach to minimizing the scan test overhead in a chip based on circuit topology heuristics. These heuristics are applied to a full-scan design to convert a subset of the scan flip-flops (SFFs) into D flip-flops (DFFs). The K Longest Paths per Gate (KLPG) [2] automatic test pattern generation (ATPG) algorithm is used to generate tests for robust paths in the circuit. Observability-driven multi-cycle path generation and test [3][4] are used in this work to minimize the coverage loss caused by the SFF conversion process. The presence of memory arrays in a design exacerbates the coverage loss, due to the shadow cast by the array on its neighboring logic. Specialized behavioral modeling of the memory array is required to enable test coverage of the shadow logic, and this work develops a memory model integrated into the ATPG engine for that purpose. Multiple clock domains pose challenges in the path generation process. The inter-domain clocking relationships and the corresponding logic sensitization are modeled in our work to generate synchronous inter-domain paths over multiple clock cycles. Results are demonstrated on ISCAS89 and ITC99 benchmark circuits. The power saving benefit is quantified using an open-source standard-cell library.
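    The SFF-to-DFF conversion step might be organized as sketched below, assuming the topology heuristics reduce to a per-flip-flop score. Everything here is a hypothetical illustration; the scoring itself is the thesis's contribution and is only represented by a field:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-flip-flop record: higher descan_score means lower
// expected coverage loss if the scan capability is removed.
struct FlipFlop {
    int id;
    double descan_score;
    bool is_scan = true;
};

// Sort by score and convert the best-scoring fraction of SFFs into plain
// DFFs, trimming scan mux delay, routing, and shift power.
void convert_sffs_to_dffs(std::vector<FlipFlop>& ffs, double fraction) {
    std::sort(ffs.begin(), ffs.end(),
              [](const FlipFlop& a, const FlipFlop& b) {
                  return a.descan_score > b.descan_score;
              });
    const std::size_t n = static_cast<std::size_t>(fraction * ffs.size());
    for (std::size_t i = 0; i < n; ++i)
        ffs[i].is_scan = false;  // SFF -> DFF
}
```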

    A New Delay Test Technique More Robust to Power Delivery Network Impedance Variation

    Nowadays, scan-based at-speed testing (SBAST) is the dominant delay test approach. This type of test comes with certain drawbacks, such as the power supply noise (PSN) produced during test mode, which differs from that induced during functional mode. A few SBAST techniques have been developed to reduce this voltage drop, but one particular aspect has been neglected in the literature, namely the impact of power delivery network (PDN) impedance variation on delay tests. This master's project presents a new SBAST technique, named One Clock Alternated Shift (OCAS), to minimize the potential impact of PDN impedance variation. The strategy behind this new technique is to mimic the functional-mode clock signal as closely as possible. The goal of this imitation is to obtain power delivery conditions similar to those of the functional mode, protecting the circuit in test mode against Vdd variations caused by impedance variations. As a case study, we consider the PDN impedance variation that can occur in 3D integrated circuits as the number of dies in the circuit under test (CUT) varies. HSPICE simulation results show that the OCAS technique is less sensitive to such impedance variation and that it outperforms the main existing SBAST techniques. Moreover, the transition fault coverage results obtained for OCAS with automatic test pattern generation (ATPG) tools are quite acceptable. However, the number of test vectors required to achieve this coverage is higher, due to limitations of these tools.