
    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures, which is why testing such devices is needed to guarantee correct behavior at any time. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to improve test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system in ways that differ from the normal functional mode, passing through configurations that are otherwise unreachable. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) which is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have mainly focused on the industrial perspective of SBST. The problem of providing an effective development flow and guidelines for integrating SBST into the available operating systems has been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been a focus of my research. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, if so, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
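A minimal sketch of the SBST idea, in Python for illustration only: real SBST test programs run as assembly on the processor under test, and the operation set, signature scheme, and all names below are hypothetical.

```python
# Minimal illustration of the SBST principle: execute a sequence of
# operations that excites a unit under test, compact the observed results
# into a signature, and compare against a precomputed golden signature.
# Real SBST runs as assembly on the processor itself; this only models
# the flow. All names and values here are hypothetical.

def alu(op, a, b):
    """Stand-in for the unit under test (normally exercised in assembly)."""
    return {"add": a + b, "sub": a - b, "and": a & b, "xor": a ^ b}[op] & 0xFFFFFFFF

def run_test_program(ops, seed=0xDEADBEEF):
    sig = seed
    for op, a, b in ops:
        r = alu(op, a, b)
        # Compact each result into a running signature (rotate-left + XOR).
        sig = ((sig << 1) | (sig >> 31)) & 0xFFFFFFFF
        sig ^= r
    return sig

TEST_OPS = [("add", 0xAAAA5555, 0x5555AAAA),
            ("xor", 0xFFFF0000, 0x0F0F0F0F),
            ("sub", 0x12345678, 0x0000FFFF),
            ("and", 0xCAFEBABE, 0xFFFF0000)]

GOLDEN = run_test_program(TEST_OPS)   # captured once on a known-good device

def self_test():
    return "PASS" if run_test_program(TEST_OPS) == GOLDEN else "FAIL"

print(self_test())
```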

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely upon the continued functioning of many electronic devices for our everyday welfare, usually embedding integrated circuits that are becoming ever cheaper and smaller with improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, but need to be tested thoroughly to comply with reliability standards, in particular the ISO 26262 functional safety standard for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The approaches proposed in this thesis, in order of appearance, are described as follows:
    1. Embedded Memory Diagnosis: Memories are dense and complex circuits which are susceptible to design and manufacturing errors. Hence, it is important to understand how faults occur in the memory array. In practice, the logical and physical array representations differ because of design optimizations that add enhancements to the device, namely scrambling. This part proposes an accurate memory diagnosis by presenting a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, elaborate cumulative analyses, and formulate a final fault model hypothesis. Several SRAM failing syndromes were analyzed as case studies gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics. The tool displayed defects virtually, and the results were confirmed by real photos taken with a microscope.
    2. Functional Test Pattern Generation: The key to a successful test is the patterns applied to the device. They can be structural or functional; the former usually benefit from embedded test modules targeting manufacturing errors and are only effective before shipping the component to the client. The latter, on the other hand, can be applied in mission mode with minimal impact on performance, but are penalized by high generation time. However, functional test patterns can also serve different goals in functional mission mode. Part III of this PhD thesis proposes three different functional test pattern generation methods for CPU cores embedded in SoCs, targeting different test purposes, described as follows:
    a. Functional Stress Patterns: suitable for optimizing functional stress during operational-life tests and burn-in screening for an optimal device reliability characterization.
    b. Functional Power Hungry Patterns: suitable for determining functional peak power, for strictly limiting the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage.
    c. Software-Based Self-Test Patterns: combine the strengths of structural patterns with functional ones, allowing periodic execution during mission. In addition, external hardware communicating with a devised SBST was proposed; it helps increase fault coverage by 3% by testing critical Hardly Functionally Testable Faults not covered by conventional SBST patterns.
    An automatic functional test pattern generation approach exploiting an evolutionary algorithm maximizing metrics related to stress, power, and fault coverage was employed in the above-mentioned approaches to quickly generate the desired patterns.
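The evolutionary generation loop can be pictured roughly as follows. This is a generic Python sketch assuming a toy instruction set and a placeholder fitness function, not the thesis's actual tool or metrics.

```python
# Hedged sketch of an evolutionary test-program generator: evolve candidate
# instruction sequences toward a higher stress/power/coverage score.
# The instruction set and fitness function below are toy placeholders.
import random

INSTRUCTIONS = ["add", "sub", "mul", "xor", "load", "store"]

def random_program(length=20):
    return [random.choice(INSTRUCTIONS) for _ in range(length)]

def fitness(program):
    # Placeholder metric: reward alternation between instruction types, a
    # crude proxy for switching activity. A real flow would score each
    # program via simulation (toggle count, power, fault coverage).
    return sum(a != b for a, b in zip(program, program[1:]))

def mutate(program, rate=0.1):
    return [random.choice(INSTRUCTIONS) if random.random() < rate else i
            for i in program]

def evolve(pop_size=20, generations=50):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # truncation selection
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```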
The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% in comparison to older methodologies while significantly improving the desired metrics.
    3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for fast parallel computation used in high-performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not provide design description code due to content secrecy; therefore, commercial fault injectors relying on a GPGPU model are infeasible, leaving radiation tests as the only available resource, but these are costly. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into memory elements of a real GPGPU. It exploits a software debugger tool and the C-CUDA grammar to wisely determine fault spots and apply bit-flip operations on program variables. The goal is to validate robust parallel algorithms by studying fault propagation or activating redundancy mechanisms they may embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and floating-point Fast Fourier Transform.
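The core bit-flip operation such an injector applies can be illustrated as below. This Python sketch only models the transformation on integer values and IEEE-754 float encodings; the actual tool drives a debugger against live GPGPU memory, which is not reproduced here.

```python
# Hedged sketch of the core bit-flip operation used by software-implemented
# fault injection: flip one bit of an unsigned integer, or of the IEEE-754
# encoding of a single-precision float.
import struct

def flip_int_bit(value, bit, width=32):
    """Flip bit `bit` of an unsigned integer of `width` bits."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def flip_float_bit(value, bit):
    """Flip bit `bit` (0..31) of a single-precision float's encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (corrupted,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return corrupted

print(hex(flip_int_bit(0x000000FF, 8)))   # -> 0x1ff
print(flip_float_bit(1.0, 30))            # exponent bit flip: 1.0 -> inf
```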

    Automatic vector generation guided by a functional metric

    Verification is still the bottleneck of the complex digital system design process. Formal techniques have advanced in their capacity to handle more complex descriptions, but they still suffer from memory or time explosion. Simulation-based techniques handle descriptions of any size or complexity, but their efficiency drops as system complexity increases, because the number of simulation tests needed to maintain coverage grows exponentially. Semi-formal techniques combine the advantages of simulation and formal techniques, as they increase the efficiency of simulation-based verification. In this area, several research works have introduced techniques that automate the generation of vectors driven by traditional coverage metrics. However, these techniques do not ensure the detection of 100% of faults. This paper presents a novel technique for the generation of vectors that produces test-benches more efficiently than techniques based on structural metrics. The technique is more efficient because it relies on a novel coverage metric which is more directly correlated to functional faults than structural coverage metrics (line, branch, etc.). The proposed coverage metric is based on an abstraction of the system as a set of polynomials, where all system behaviors are described by a set of coefficients. By assuming a finite precision of coefficients and a maximum degree of the polynomials, all system behaviors, both correct and incorrect, can be modeled. The technique applies mathematical theories (computer algebra and number theory) to calculate the coverage and to generate vectors that maximize it. Moreover, a tool implementing the technique has been developed; it takes a C-based system description and provides the coverage and the generated vectors as output.
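As a toy illustration of the idea (not the paper's exact formulation), the sketch below models behaviors as low-degree polynomials with finite-precision coefficients and scores a vector set by the fraction of incorrect coefficient assignments it distinguishes from the correct one.

```python
# Toy polynomial-abstraction coverage metric: every coefficient assignment
# in a small finite range is a candidate (possibly faulty) behavior; a
# vector set is scored by the fraction of faulty polynomials it tells
# apart from the correct one. Degree, range, and behavior are invented.
from itertools import product

DEGREE = 2
COEFF_RANGE = range(-2, 3)          # finite coefficient precision
CORRECT = (1, 0, 2)                 # correct behavior: 1 + 0*x + 2*x^2

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

def coverage(vectors):
    faulty = [c for c in product(COEFF_RANGE, repeat=DEGREE + 1) if c != CORRECT]
    killed = sum(any(evaluate(f, x) != evaluate(CORRECT, x) for x in vectors)
                 for f in faulty)
    return killed / len(faulty)

print(coverage([0]))        # x=0 only distinguishes the constant term
print(coverage([0, 1, 2]))  # 1.0: three points pin down a degree-2 polynomial
```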

    Initial design and evaluation of automatic restructurable flight control system concepts

    Results of efforts to develop automatic control design procedures for restructurable aircraft control systems are presented. The restructurable aircraft control problem involves designing a fault-tolerant control system which can accommodate a wide variety of unanticipated aircraft failures. Under NASA sponsorship, many of the technologies which make such a system possible were developed and tested. Future work will focus on developing a methodology for integrating these technologies and on demonstrating a complete system.

    A Study on Field Test Application and Low-Power Logic BIST

    Advances in semiconductor process technology have resulted in various aging issues in the field operation of Very Large Scale Integration (VLSI) circuits. For example, hot carrier injection (HCI), bias temperature instability (BTI), and time-dependent dielectric breakdown (TDDB) are well-known aging phenomena, and they can increase circuit delay, resulting in serious reliability problems. In order to avoid system failures caused by aging, recent designs usually set a certain timing margin on the operational frequency of the circuit. However, it is difficult to determine the proper size of this timing margin, because the aging speed in actual use depends on the operational environment and is hard to predict. A pessimistic prediction improves the reliability of the system but sacrifices performance. BIST-based field test is a promising way to guarantee the reliability of the circuit by detecting aging-induced faults during circuit operation. However, field test has a limitation on test application time, which makes it difficult to achieve high test quality; therefore, an effective test application method in the field is required. In addition to the requirement of short test application time, BIST-based field test requires at-speed testing in order to detect timing-related defects. However, it is well known that power dissipation during testing is much higher than that in normal circuit operation. Because excessive power dissipation causes higher IR-drop and higher temperature, it increases delay during testing, in turn invalidating at-speed testing and causing yield loss. While many low-power test methods have been proposed to tackle the test power issue, inadequate test power reduction and lower fault coverage remain important issues. Moreover, low-power testing that just focuses on power reduction is insufficient: when the test power is reduced to a very low level, a timing-related defect may be missed by the test, and a defective circuit will appear to be a good part passing the test. Therefore, appropriate test power control is necessary, though it was not considered in existing methods. In this dissertation, we first propose a new test application method to satisfy the short test application time limitation of BIST-based field test, and then propose a new low-power BIST scheme that focuses on controlling the test power to a specified value for improving field test quality. In chapter 3, a new field test application method named "rotating test" is presented, in which a set of test patterns generated to detect aging-induced faults is partitioned into several subsets, and each subset is applied in one test session in the field (a rough partitioning sketch follows below). In order to maximize the test quality of rotating test, we propose partitioning methods with two aims: the first maximizes the fault coverage of each subset obtained by partitioning; the second minimizes the detection time interval of all faults across rotating test sessions to avoid system failures. Experimental results demonstrated the effectiveness of the proposed partitioning methods. In chapter 4, we propose a new low-power BIST scheme which can control the scan-in power, scan-out power, and capture power while keeping test coverage at a high level. In this scheme, a new circuit called a pseudo low-pass filter (PLPF) is developed for scan-in power control, and a multi-cycle capture test technique is employed to reduce the capture power.
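As referenced above, rotating-test partitioning can be sketched as a greedy assignment. The fault lists and the tie-breaking rule below are assumptions for illustration, not the dissertation's actual algorithm.

```python
# Hedged sketch of rotating-test partitioning: split a test pattern set into
# k subsets, greedily assigning each pattern to the subset whose covered
# fault set it extends most, so every field session retains high coverage.
# Fault lists are toy data; the dissertation's objectives (per-subset
# coverage, detection interval) are more elaborate.

def partition(patterns, k):
    """patterns: dict pattern_name -> set of detected faults."""
    subsets = [dict() for _ in range(k)]
    covered = [set() for _ in range(k)]
    # Assign patterns with the largest fault lists first.
    for name, faults in sorted(patterns.items(), key=lambda p: -len(p[1])):
        # Pick the subset gaining the most new faults (ties: smaller subset).
        best = max(range(k),
                   key=lambda i: (len(faults - covered[i]), -len(subsets[i])))
        subsets[best][name] = faults
        covered[best] |= faults
    return subsets, covered

patterns = {"p1": {1, 2, 3}, "p2": {3, 4}, "p3": {1, 5}, "p4": {2, 4, 5}}
subsets, covered = partition(patterns, 2)
for i, (s, c) in enumerate(zip(subsets, covered)):
    print(f"session {i}: patterns={sorted(s)} faults={sorted(c)}")
```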
In order to control the scan-out power dissipated by test responses, we propose a novel method that selects some flip-flops in the scan chains at logic design phase and fills the selected flip-flops with proper values before starting the scan-shift operation, so as to reduce the switching activity associated with scan-out. Experimental results for the ISCAS-89 and ITC-99 benchmark circuits show significant reductions in the scan-in toggle rate (from an original 50% down to 7-8%) and the capture toggle rate (from 20% down to 6-7%). With the scan-out controlling method, the scan-out toggle rate can be reduced from 17.2% to 8.4%, which could not be achieved by conventional methods. Moreover, to control the test power to a specified rate and accommodate various test power requirements, a scan-shift power controlling scheme was also discussed; it showed the capability of controlling the scan-shift toggle rate to any value between 6.7% and 50%.
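The scan-shift toggle rate being controlled here can be estimated from adjacent-bit transitions in the scan-in stimulus, since each differing neighbor pair toggles scan cells as it shifts through the chain. A minimal sketch under that common view, not the dissertation's exact power model:

```python
# Hedged sketch: estimate the scan-shift toggle rate of a scan-in vector
# from its adjacent-bit transitions.

def toggle_rate(scan_vector):
    """Fraction of adjacent bit pairs that differ in a scan-in vector."""
    transitions = sum(a != b for a, b in zip(scan_vector, scan_vector[1:]))
    return transitions / (len(scan_vector) - 1)

print(toggle_rate("01010101"))  # 1.0   -> worst-case shift power
print(toggle_rate("00001111"))  # ~0.14 -> low shift power
print(toggle_rate("00110011"))  # ~0.43 -> intermediate, tunable via filling
```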

    Test vector reduction for integrated circuit testing using horizontal Hamming distance

    In testing digital combinational logic for stuck-at faults, it is necessary to determine the most appropriate test sequence needed to detect the required number of possible faults. The exhaustive test pattern generation method is the simplest approach to implement, as it produces test patterns consisting of all possible input combinations of the circuit under test. However, a consequence of this approach is a large test set when the number of circuit inputs is large, which can take an unnecessarily long time to apply to the circuit under test: during the test process, only a small fraction of all possible test vectors is actually required to produce a high percentage of fault coverage. As an alternative, random test pattern generation applies a random set of test patterns, which reduces the number of test patterns compared to exhaustive test. However, both test pattern generation approaches apply unnecessary test vectors to the circuit, as multiple patterns typically detect the same fault. Antirandom testing, on the other hand, reduces the chance that the selected test vectors detect the same fault by introducing the concept of Hamming distance between test vectors and maximizing this distance. This results in a reduction in the number of required test vectors when compared to an exhaustive test. However, the algorithm for antirandom test vector generation is computation-intensive, and its definition is ambiguous when there is more than one possible next test vector. In this study, an efficient calculation of the Hamming distance is proposed; moreover, the choice of the next test vector is addressed using the proposed horizontal Hamming distance method, which has not yet been explored. The approach detects faults at a much faster rate and produces a much higher fault coverage than the existing antirandom method.
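For reference, classic antirandom selection can be sketched as below: each new vector maximizes the total Hamming distance to all previously chosen vectors. The ties visible in this baseline are exactly where a tie-breaking rule such as the proposed horizontal Hamming distance intervenes; that rule is the paper's contribution and is not reproduced here.

```python
# Sketch of classic antirandom test vector selection: each new vector is the
# candidate maximizing the total Hamming distance to all previously chosen
# vectors. Exhaustive candidate enumeration is only workable for small n.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def antirandom(n_bits, n_vectors):
    candidates = list(product((0, 1), repeat=n_bits))
    chosen = [candidates[0]]                      # start from all-zeros
    while len(chosen) < n_vectors:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: sum(hamming(c, v) for v in chosen))
        chosen.append(best)
    return chosen

for v in antirandom(4, 4):
    print("".join(map(str, v)))
# First pick after 0000 is 1111 (maximum distance); later picks are often
# tied in total distance, which is where tie-breaking rules matter.
```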

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Due to the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. Due to this, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation). As a result, during at-speed testing, the supply grid experiences unacceptable supply IR-drop, ultimately leading to delay failures during at-speed testing. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Due to the variation in signaling delays caused by these variations, it is important to perform at-speed testing even for stuck faults, to reduce the test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques were proposed in the past for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no techniques concerning the same for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to BTSP is novel, and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering for any arbitrary filling of don't care bits is insufficient for producing an effective reduction in switching activity during testing of large circuits. Since don't cares dominate the test sets of larger circuits, don't care filling plays a crucial role in reducing switching activity during testing.
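A common baseline that exploits this is adjacent fill, where each don't care copies the bit applied to the same input in the previous vector, so that bit contributes no input transition. A minimal sketch of that standard baseline (not the filling strategies proposed in this thesis):

```python
# Hedged sketch of adjacent fill, a standard baseline for don't-care
# filling: each 'X' bit copies the bit applied on the same input in the
# previous vector, contributing no input transition.

def adjacent_fill(vectors):
    filled = []
    prev = None
    for vec in vectors:
        if prev is None:
            row = [b if b != "X" else "0" for b in vec]   # arbitrary seed fill
        else:
            row = [p if b == "X" else b for b, p in zip(vec, prev)]
        filled.append("".join(row))
        prev = row
    return filled

tests = ["1X0X", "XX01", "0X1X"]
print(adjacent_fill(tests))   # ['1000', '1001', '0011']
```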
Taking this into consideration, we propose an algorithm, XStat, which is capable of performing test vector ordering while preserving don't care bits in the test vectors, following which the don't cares are filled in an intelligent fashion for minimizing input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly minimizes peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate the lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for minimizing peak input-switching activity during testing. The proof of optimality of DP-fill in minimizing peak input-switching activity is also provided in this thesis.
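The BTSP view of vector ordering introduced above can be illustrated with a small heuristic: vectors are nodes, the Hamming distance between consecutive vectors is the edge cost (a proxy for switching activity), and a good ordering keeps the maximum edge small. The sketch below is a greedy nearest-neighbor heuristic, not an exact BTSP solver and not the thesis's algorithm.

```python
# Hedged sketch of bottleneck-aware test vector ordering: order vectors so
# that the largest Hamming distance between consecutive vectors (the peak
# transition) stays small. Greedy nearest-neighbor, toy data.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_bottleneck_order(vectors):
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    peak = max(hamming(a, b) for a, b in zip(order, order[1:]))
    return order, peak

vectors = ["0000", "1111", "0011", "1100", "0111"]
order, peak = greedy_bottleneck_order(vectors)
print(order, "peak transition =", peak)   # peak 2 vs 4 for the naive order
```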

    A comprehensive comparison between design for testability techniques for total dose testing of flash-based FPGAs

    Radiation sources exist in many kinds of environments where electronic devices operate, and correct device operation is usually affected negatively by radiation. The resulting effects manifest in several forms depending on the operating environment of the device, such as the total ionizing dose (TID) effect, or single event effects (SEEs) such as single event upset (SEU), single event gate rupture (SEGR), and single event latch-up (SEL). CMOS circuits and floating-gate MOS circuits suffer from increased delay and leakage current due to the TID effect, which may disrupt the proper operation of the integrated circuit. Exhaustive testing is needed for devices operating in harsh conditions, like space and military applications, to ensure correct operation in the worst circumstances. The use of worst case test vectors (WCTVs) for testing is strongly recommended by MIL-STD-883, method 1019, the standard describing the procedure for testing electronic devices under radiation. However, the difficulty of generating these test vectors hinders their use in radiation testing. Testing digital circuits in industry is nowadays usually done using design for testability (DFT) techniques, as they are very mature and can be relied on. DFT techniques include, but are not limited to, ad hoc techniques, built-in self-test (BIST), muxed-D scan, clocked scan, and enhanced scan. DFT is usually used with automatic test pattern generation (ATPG) software to generate test vectors for application-specific integrated circuits (ASICs), especially sequential circuits, against faults like stuck-at faults and path delay faults. Despite all these recommendations for DFT, radiation testing has not yet benefited from this reliable technology. Also, given the wide variation among DFT techniques, choosing the right one is the bottleneck to achieving the best results for TID testing. In this thesis, a comprehensive comparison between different DFT techniques for TID testing of flash-based FPGAs is made to help designers choose the most suitable DFT technique for their application. The comparison covers the muxed-D scan, clocked scan, and enhanced scan techniques, and is carried out using the ISCAS-89 benchmark circuits. Points of comparison include FPGA resource utilization, difficulty of design bring-up, delay added by the DFT logic, and robust testable paths in each technique.
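Of the compared techniques, muxed-D scan is the most widely used: a multiplexer in front of each flip-flop selects between functional data and the previous scan cell's output, forming a shift register for loading and unloading test state. A minimal behavioral model follows; it is illustrative only, with an arbitrary chain length and interface.

```python
# Minimal behavioral model of a muxed-D scan chain: with scan_enable=1 each
# cell captures its predecessor's output (shift mode); with scan_enable=0
# each flip-flop loads its functional D input (capture mode).

class MuxedDScanChain:
    def __init__(self, length):
        self.state = [0] * length

    def clock(self, scan_enable, scan_in=0, functional_d=None):
        if scan_enable:
            # Shift: cell 0 takes scan_in; the last cell drives scan_out.
            scan_out = self.state[-1]
            self.state = [scan_in] + self.state[:-1]
            return scan_out
        # Functional capture: load the combinational D inputs.
        self.state = list(functional_d)
        return self.state[-1]

chain = MuxedDScanChain(4)
for bit in [1, 0, 1, 1]:                 # scan in a test stimulus
    chain.clock(scan_enable=1, scan_in=bit)
print(chain.state)                       # [1, 1, 0, 1] loaded into the chain
```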

    Application of Max-SAT-based ATPG to optimal cancer therapy design

    BACKGROUND: Cancer and other gene-related diseases are usually caused by a failure in the signaling pathway between genes and cells. These failures can occur in different areas of the gene regulatory network, but can be abstracted as faults in the regulatory function. For effective cancer treatment, it is imperative to identify the faults and select appropriate drugs to treat them. In this paper, we present an extensible Max-SAT-based automatic test pattern generation (ATPG) algorithm for cancer therapy. The algorithm is based on Boolean satisfiability (SAT) and utilizes the stuck-at fault model for representing signaling faults. A weighted partial Max-SAT formulation is used to enable efficient selection of the most effective drug. RESULTS: Several usage cases are presented for fault identification and drug selection. These include the identification of testable faults, optimal drug selection for single/multiple known faults, and optimal drug selection for overall fault coverage. Experimental results on growth factor (GF) signaling pathways demonstrate that our algorithm is flexible and can yield an exact solution for each feature in well under 1 second.
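A brute-force stand-in can illustrate the shape of the weighted partial Max-SAT formulation: hard constraints encode pathway consistency (suppress the fault-induced spurious growth, preserve normal growth), and the soft objective minimizes total drug cost. The pathway, drugs, and costs below are invented for illustration; the paper's method uses an actual Max-SAT solver rather than enumeration.

```python
# Hedged, brute-force stand-in for optimal drug selection under a stuck-at
# fault: search drug combinations satisfying the hard constraints and
# minimize total cost (the soft objective). All names/values are toys.
from itertools import combinations

COST = {"drugA": 2, "drugB": 3}       # hypothetical per-drug cost/toxicity

def pathway(gf, drugs, fault=None):
    """Toy two-branch pathway: growth = branch_a OR branch_b."""
    a = gf
    if fault == "a_stuck_1":
        a = True                      # stuck-at-1 fault upstream of drugA's target
    a = a and "drugA" not in drugs    # drugA blocks branch a downstream of fault
    b = gf and "drugB" not in drugs
    return a or b

def cheapest_therapy(fault):
    # Hard constraints: no growth without GF (rectify the fault), but keep
    # normal growth with GF. Soft objective: minimize total drug cost.
    valid = [c for r in range(len(COST) + 1) for c in combinations(COST, r)
             if not pathway(False, c, fault) and pathway(True, c, fault)]
    return min(valid, key=lambda c: sum(COST[d] for d in c))

print(cheapest_therapy("a_stuck_1"))  # ('drugA',): cheapest rectifying therapy
```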