
    Scalable diversified antirandom test pattern generation with improved fault coverage for black-box circuit testing

    Pseudorandom testing cannot exploit the success of preceding test patterns when generating subsequent ones, so many redundant test patterns are generated that increase test length without any significant increase in fault coverage. Antirandom testing extends pseudorandom testing by inducing divergent patterns: each subsequent test pattern maximizes the Total Hamming Distance (THD) and Total Cartesian Distance (TCD) to the patterns already applied. However, the Antirandom test sequence generation algorithm is prone to unsystematic selection when more than one pattern possesses the maximum THD and TCD; as a result, diversity among test sequences is compromised, lowering the fault coverage. Therefore, this thesis analyses the effect of Hamming distance in both the vertical and the horizontal dimension to enhance diversity among test patterns. The first contribution of this thesis is a Diverse Antirandom (DAR) test pattern generation algorithm, which employs Horizontal Total Hamming Distance (HTHD) alongside THD and TCD to enhance diversity in maximum-distance test pattern generation. Using HTHD and TCD as distance metrics, however, increases the computational complexity of divergent test sequence generation. The second contribution of this thesis is therefore a tree-traversal search method that maximizes diversity among test patterns by mutating the bits of a temporary test pattern along a path leading towards maximization of TCD. Fault simulations on benchmark circuits show that DAR improves fault coverage by up to 18.3% compared to Antirandom, while the computational complexity of Antirandom is reduced from exponential O(2^n) to linear O(n). Next, the DAR algorithm is modified to ease hardware implementation for on-chip test generation. The third contribution of this thesis is therefore the design of a hardware-oriented DAR (HODA) test pattern generator architecture as an alternative to the linear feedback shift register (LFSR), which requires a large number of memory elements. Parallel concatenation of the HODA architecture reduces the number of memory elements by implementing a bit-slicing architecture. Simulations show that the proposed architecture increases fault coverage by up to 66% and reduces gate count by 46.59% compared to the LFSR. Consequently, this thesis presents a uniform and scalable test pattern generator architecture for built-in self-test (BIST) applications and a solution to maximum-distance test pattern generation for high fault coverage in a black-box environment.
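    As a rough illustration of the distance-based selection rule described above (a minimal Python sketch, not the thesis's DAR or tree-traversal code; all names are illustrative), the following computes THD and TCD for every n-bit candidate and picks a maximizer. For binary vectors, each Cartesian distance reduces to the square root of the Hamming distance, and the exhaustive search makes the O(2^n) cost of naive antirandom generation concrete.

        from itertools import product
        from math import sqrt

        def hamming(a, b):
            # Number of bit positions in which patterns a and b differ.
            return sum(x != y for x, y in zip(a, b))

        def thd(candidate, applied):
            # Total Hamming Distance to all previously applied patterns.
            return sum(hamming(candidate, p) for p in applied)

        def tcd(candidate, applied):
            # Total Cartesian Distance; for 0/1 vectors each term is sqrt(HD).
            return sum(sqrt(hamming(candidate, p)) for p in applied)

        def next_antirandom(applied, n):
            # Exhaustive O(2^n) search for a (THD, TCD)-maximizing candidate;
            # ties are broken arbitrarily here, the weakness DAR addresses.
            best, best_key = None, (-1, -1.0)
            for bits in product((0, 1), repeat=n):
                key = (thd(bits, applied), tcd(bits, applied))
                if key > best_key:
                    best, best_key = bits, key
            return best

        applied = [(0, 0, 0, 0)]
        for _ in range(3):
            applied.append(next_antirandom(applied, 4))
        print(applied)  # first choice after all-zeros is all-ones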

    Design of a 1.9 GHz low-power LFSR circuit using the Reed-Solomon algorithm for Pseudo-Random Test Pattern Generation

    A linear feedback shift register (LFSR) is frequently used in Built-in Self-Test (BIST) designs for pseudo-random test pattern generation. Test pattern volume and test power dissipation are key concerns in large, complex designs. The objective of this paper is to propose a new LFSR circuit based on a proposed Reed-Solomon (RS) algorithm. The RS algorithm is constructed by considering the factors of the maximum-length test pattern with a minimum distance over time, and it achieves effective generation of test patterns with a complexity of order O(m log2 m), where m denotes the total number of message bits. We analyzed the RS LFSR mathematically using its feedback polynomial function for an area-sensitive design, and the bit-wise stages of the proposed RS LFSR were simulated using the TSMC 130 nm IC design tool in the Mentor Graphics platform. Experimental results show that the proposed LFSR generates effective pseudo-random test patterns with low test power dissipation (25.13 µW). Ultimately, the circuit operates at a maximum operating frequency of 1.9 GHz.
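    The paper's RS-derived feedback polynomial is not given in the abstract, but the underlying generator can be sketched. Below is a minimal Fibonacci LFSR in Python with an 8-bit primitive polynomial (x^8 + x^6 + x^5 + x^4 + 1) chosen purely for illustration; any characteristic polynomial, including an RS-derived one, would slot into the taps.

        def lfsr_patterns(seed, taps, width, count):
            # Yield `count` pseudo-random patterns from a `width`-bit Fibonacci
            # LFSR whose feedback bit is the XOR of the tapped bit positions.
            state = seed
            for _ in range(count):
                yield state
                fb = 0
                for t in taps:
                    fb ^= (state >> t) & 1
                state = ((state << 1) | fb) & ((1 << width) - 1)

        # Taps (7, 5, 4, 3) encode x^8 + x^6 + x^5 + x^4 + 1 (maximal length).
        for p in lfsr_patterns(seed=0x01, taps=(7, 5, 4, 3), width=8, count=5):
            print(f"{p:08b}")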

    An approach to Measure Transition Density of Binary Sequences for X-filling based Test Pattern Generator in Scan based Design

    Computing switching activity and transition density is an essential stage in dynamic power estimation and testing-time reduction. The study of switching activity, transition density and weighted switching activity of pseudo-random binary sequences generated by linear feedback shift registers and feed-forward shift registers plays a crucial role in the design of built-in self-test, cryptosystems, secure scan designs and other applications. This paper proposes an approach to finding transition densities, which play an important role in choosing a test pattern generator. We analyse conventional and proposed designs using this approach, and also report the testing time of benchmark circuits. The outcome of this paper is presented in the form of an algorithm, theorems with proofs, and analysis tables that support the results. The proposed algorithm reduces switching activity and testing time by up to 51.56% and 84.61%, respectively.
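    The transition-density metric itself is simple to state; a minimal sketch is shown below (not the paper's proposed algorithm, which also covers weighted switching activity and X-filling): the density of a binary sequence is the fraction of adjacent bit pairs that differ.

        def transition_density(bits):
            # Fraction of adjacent bit pairs that differ: a per-cycle
            # measure of switching activity in the sequence.
            if len(bits) < 2:
                return 0.0
            transitions = sum(a != b for a, b in zip(bits, bits[1:]))
            return transitions / (len(bits) - 1)

        print(transition_density([0, 1, 1, 0, 1, 0, 0, 1]))  # 5/7 ≈ 0.714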

    Power Minimisation Techniques for Testing Low Power VLSI Circuits (PhD Dissertation)

    Testing low power very large scale integrated (VLSI) circuits has recently become an area of concern due to yield and reliability problems. This dissertation focuses on minimising power dissipation during test application at the logic level and register-transfer level (RTL) of abstraction of the VLSI design flow. The first part of this dissertation addresses power minimisation techniques in scan sequential circuits at the logic level of abstraction. A new best primary input change (BPIC) technique based on a novel test application strategy is proposed. The technique increases the correlation between successive states during shifting in test vectors and shifting out test responses by changing the primary inputs such that the smallest number of transitions is achieved. The new technique is test set dependent and is applicable to small to medium sized full and partial scan sequential circuits. Since the proposed test application strategy depends only on controlling the primary input change time, power is minimised with no penalty in test area, performance, test efficiency, test application time or volume of test data. Furthermore, it is shown that partial scan provides not only the commonly known benefits, such as lower test area overhead and test application time, but also lower power dissipation during test application when compared to full scan. To achieve power savings in large scan sequential circuits, a new test set independent technique based on multiple scan chains is presented, employing a new design for test (DFT) architecture and a novel test application strategy. The technique has been validated using benchmark examples, and it has been shown that power is minimised with low computational time, low overhead in test area and volume of test data, and with no penalty in test application time, test efficiency, or performance. The second part of this dissertation addresses power minimisation techniques for testing low power VLSI circuits using built-in self-test (BIST) at RTL. First, it is important to overcome the shortcomings associated with traditional BIST methodologies. It is shown how a new BIST methodology for RTL data paths using a novel concept called test compatibility classes (TCC) overcomes high test application time, BIST area overhead, performance degradation, volume of test data, fault-escape probability, and the complexity of testable design space exploration. Second, power minimisation in BIST RTL data paths is achieved by analysing the effect of test synthesis and test scheduling on power dissipation during test application and by employing new power-conscious test synthesis and test scheduling algorithms. Third, the new BIST methodology has been validated using benchmark examples. Further, it is shown that when the proposed power-conscious test synthesis and test scheduling are combined with the novel test compatibility classes, a simultaneous reduction in test application time and power dissipation is achieved with low overhead in computational time.
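    The BPIC idea of timing input changes to minimise transitions can be made concrete with a toy model. The Python sketch below is an assumption-laden simplification that ignores the combinational logic the dissertation's technique accounts for: it merely counts scan flip-flop toggles while a vector is shifted in, the kind of quantity a power-conscious strategy would compare across candidate primary-input change times.

        def shift_transitions(chain_state, vector):
            # Count scan-cell toggles while `vector` is shifted serially into
            # a chain whose current contents are `chain_state` (a bit list).
            state = list(chain_state)
            toggles = 0
            for bit in vector:
                new_state = [bit] + state[:-1]  # one serial shift
                toggles += sum(a != b for a, b in zip(state, new_state))
                state = new_state
            return toggles

        print(shift_transitions([0, 0, 0, 0], [1, 0, 1, 0]))  # 10 toggles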

    On Flip-Flop Selection for Multi-cycle Scan Test with Partial Observation in Logic BIST

    Multi-cycle testing with partial observation is known as an effective method for improving the fault coverage of scan-based logic BIST without increasing test time. In this method, the selection of flip-flops for partial observation is critical to achieving high fault coverage with small area overhead. This paper proposes a selection method under a limit on the number of flip-flops. The method consists of structural analysis of the circuit under test (CUT) and logic simulation of test vectors, so it is easy to implement and scales well. Experimental results on benchmark circuits show that the method obtains higher fault coverage with less area overhead than the original method. The relation between the number of selected flip-flops and fault coverage is also investigated.
    27th IEEE Asian Test Symposium (ATS'18), 15-18 October 2018, Hefei, China.
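    As an illustration of the selection step (a hypothetical greedy sketch; the paper's method combines structural analysis of the CUT with logic simulation, and its exact procedure is not given in the abstract), suppose logic simulation has produced, for each candidate flip-flop, the set of faults whose effects reach it. A budget-limited greedy cover then picks observation flip-flops:

        def select_flip_flops(observable, budget):
            # observable: {flip_flop: set of fault ids reaching it}.
            # Greedily pick up to `budget` flip-flops maximizing new coverage.
            chosen, covered = [], set()
            for _ in range(budget):
                best = max(observable, key=lambda ff: len(observable[ff] - covered))
                gain = observable[best] - covered
                if not gain:
                    break  # no remaining flip-flop adds coverage
                chosen.append(best)
                covered |= gain
            return chosen

        obs = {"ff1": {1, 2, 3}, "ff2": {3, 4}, "ff3": {5}, "ff4": {1, 4, 5}}
        print(select_flip_flops(obs, budget=2))  # ['ff1', 'ff4']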

    Embedding deterministic patterns in partial pseudo-exhaustive test

    This thesis is concerned with the testing of very large scale integration circuits. It presents an optimization of the mixed-mode built-in self-test (BIST) scheme. Mixed-mode BIST consists of two phases. The first phase is pseudo-random testing or partial pseudo-exhaustive testing (P-PET). For the faults not detected by the first phase, deterministic test patterns are generated and applied in the second phase. Hence, the defect coverage of the first phase determines the number of patterns that must be generated and stored. The advantages of P-PET over ordinary pseudo-random testing are higher fault coverage and a reduced number of deterministic patterns in the second phase of mixed-mode BIST. Test pattern generation for P-PET is achieved by selecting characteristic polynomials of a multiple-polynomial linear feedback shift register (MP-LFSR). In this thesis, the mixed-mode BIST scheme with P-PET in the first phase is further improved in terms of first-phase fault coverage by optimizing the polynomial selection of P-PET. In ordinary mixed-mode BIST, the faults left undetected by the first phase are handled in the second phase by generating deterministic test patterns for them. The method in this thesis considers these patterns already during polynomial selection; in other words, deterministic test patterns are embedded in P-PET. To solve this problem, an algorithm is developed for selecting characteristic polynomials that cover the pre-generated patterns. The advantages of the proposed approach in terms of defect coverage and the number of faults left after the first phase are demonstrated on contemporary industrial circuits. A comparison with ordinary pseudo-random testing is also performed. The results confirm the fault-coverage benefits of P-PET with embedded test patterns, while maintaining comparable test length and time.
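    The coverage question at the heart of embedding, namely whether the sequence produced by a candidate characteristic polynomial contains a given deterministic cube, can be sketched as follows. This is illustrative Python under assumed toy values (the polynomial, seed, and cubes are made up); the thesis's MP-LFSR selection algorithm itself is not reproduced.

        def lfsr_run(seed, taps, width, cycles):
            # States produced by a Fibonacci LFSR for one candidate polynomial.
            state, states = seed, []
            for _ in range(cycles):
                states.append(state)
                fb = 0
                for t in taps:
                    fb ^= (state >> t) & 1
                state = ((state << 1) | fb) & ((1 << width) - 1)
            return states

        def covers(state, cube):
            # True if `state` matches `cube`, a string over {'0', '1', 'X'}.
            bits = format(state, f"0{len(cube)}b")
            return all(c in ("X", b) for b, c in zip(bits, cube))

        states = lfsr_run(seed=0x05, taps=(7, 5, 4, 3), width=8, cycles=255)
        for cube in ["1X0XXXX1", "XX11XX00"]:  # made-up deterministic cubes
            hit = any(covers(s, cube) for s in states)
            print(cube, "covered in phase one" if hit else "left for phase two")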

    REDUCING POWER DURING MANUFACTURING TEST USING DIFFERENT ARCHITECTURES

    Power during manufacturing test can be several times higher than power consumption in functional mode. Excessive power during test can cause IR drop, over-heating, and early aging of the chips. In this dissertation, three different architectures are introduced to reduce test power in general cases as well as in certain scenarios, including field test. In the first architecture, scan chains are divided into several segments. Each segment has a control bit that enables capture in that segment only when the current pattern can detect new faults there; otherwise, the segment is disabled to reduce capture power. We group the control bits together into one or more control chains. To address the extra pin(s) required to shift data into the control chain(s) and the significant post-processing required by the first architecture, we explored a second architecture, which stitches the control bits into the chains they control as embedded enable capture bits (EECBs) between the segments. This allows an ATPG software tool to automatically generate the appropriate EECB values for each pattern to maintain the fault coverage, and it also works in the presence of an on-chip decompressor. The last architecture focuses primarily on the self-test of a device in a 3D stacked IC when an existing FPGA in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
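    A toy version of the per-segment enable computation (a Python sketch under assumed fault-simulation data, not the dissertation's ATPG flow) makes the control-bit semantics concrete: a segment's capture is enabled for a pattern only if that pattern detects faults in the segment that no earlier pattern has detected.

        def segment_enables(pattern_detects, detected):
            # pattern_detects: per-segment sets of faults this pattern detects.
            # Enable capture only where not-yet-detected faults appear.
            enables = []
            for seg_faults in pattern_detects:
                new = seg_faults - detected
                enables.append(1 if new else 0)
                detected |= new
            return enables

        detected = set()
        patterns = [
            [{1, 2}, set(), {7}],  # fake fault-simulation data per segment
            [{2}, {5}, set()],
        ]
        for i, p in enumerate(patterns):
            print("pattern", i, "enables =", segment_enables(p, detected))
        # pattern 0 enables = [1, 0, 1]; pattern 1 enables = [0, 1, 0]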

    Understanding and Enriching Randomness Within Resource-Constrained Devices

    Random Number Generators (RNGs) find use throughout all applications of computing, from high-level statistical modeling all the way down to essential security primitives. A significant amount of prior work has investigated this space, as a poorly performing generator can have significant impacts on the algorithms that rely on it. However, the recent explosive growth of the Internet of Things (IoT) has brought forth a class of devices for which common RNG algorithms may not provide an optimal solution. Furthermore, new hardware creates opportunities that have not yet been explored with these devices. In this dissertation, we present research fostering a deeper understanding and enrichment of the state of randomness within the context of resource-constrained devices. First, we present an exploratory study into methods of generating random numbers on devices with sensors. We perform a data collection study across 37 Android devices to determine how much random data is consumed and which sensors are capable of producing sufficiently entropic data. We use the results of our analysis to create an experimental framework called SensoRNG, which serves as a prototype to test the efficacy of a sensor-based RNG. SensoRNG employs opportunistic collection of data from on-board sensors and applies a light-weight mixing algorithm to produce random numbers. We evaluate SensoRNG with the National Institute of Standards and Technology (NIST) statistical testing suite and demonstrate that a sensor-based RNG can provide high-quality random numbers with only little additional overhead. Second, we explore the design, implementation, and efficacy of a Collaborative and Distributed Entropy Transfer protocol (CADET), which moves random number generation from an individual task to a collaborative one. Through the sharing of excess random data, devices that are unable to meet their own needs can be aided by contributions from other devices. We implement and test a proof-of-concept version of CADET on a testbed of 49 Raspberry Pi 3B single-board computers, which have been underclocked to emulate resource-constrained devices. Through this, we evaluate and demonstrate the efficacy and baseline performance of remote entropy protocols of this type, as well as highlight remaining research questions and challenges. Finally, we design and implement a system called RightNoise, which automatically profiles the RNG activity of a device using techniques adapted from language modeling. First, by performing offline analysis, RightNoise is able to mine and reconstruct, in the context of a resource-constrained device, the structure of different activities from raw RNG access logs. After recovering these patterns, the device is able to profile its own behavior in real time. We give a thorough evaluation of the algorithms used in RightNoise and show that, with only five instances of each activity type per log, RightNoise is able to reconstruct the full set of activities with over 90% accuracy. Furthermore, classification is very quick, with an average speed of 0.1 seconds per block. We finish this work by discussing real-world application scenarios for RightNoise.
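    The collection-and-mixing idea behind SensoRNG can be sketched in a few lines of Python. The real system's sensor selection and mixing function are not described in the abstract; the "sensor" below is simulated and the hash-based mixer is an assumption standing in for a light-weight mixing step.

        import hashlib
        import random

        def read_sensor_sample():
            # Stand-in for a noisy reading (accelerometer, microphone, ...).
            return random.getrandbits(16)  # simulation only

        def mixed_random_bytes(n_samples=64):
            # Pool the noisy low-order byte of many samples, then hash the
            # pool so every output bit depends on all collected samples.
            pool = bytearray()
            for _ in range(n_samples):
                pool.append(read_sensor_sample() & 0xFF)
            return hashlib.sha256(bytes(pool)).digest()

        print(mixed_random_bytes().hex())  # 32 mixed output bytes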