
    Transition Faults and Transition Path Delay Faults: Test Generation, Path Selection, and Built-In Generation of Functional Broadside Tests

    As the clock frequency and complexity of digital integrated circuits increase rapidly, delay testing is indispensable to guarantee the correct timing behavior of the circuits. In this dissertation, we describe methods developed for three aspects of delay testing in scan-based circuits: test generation, path selection, and built-in test generation. We first describe a deterministic broadside test generation procedure for a path delay fault model named the transition path delay fault model, which captures both large and small delay defects. Under this fault model, a path delay fault is detected only if all the individual transition faults along the path are detected by the same test. To reduce the complexity of test generation, sub-procedures with low complexity are applied before a complete branch-and-bound procedure. Next, we describe a method based on static timing analysis to select critical paths for test generation. Logic conditions that are necessary for detecting a path delay fault are considered to refine the accuracy of static timing analysis, using input necessary assignments. Input necessary assignments are input values that must be assigned to detect a fault. The method calculates more accurate path delays, selects paths that are critical during test application, and identifies undetectable path delay faults. These two methods are applicable to off-line test generation. For large circuits with high complexity and frequency, built-in test generation is a cost-effective method for delay testing. For a circuit that is embedded in a larger design, we developed a method for built-in generation of functional broadside tests to avoid excessive power dissipation during test application and the overtesting of delay faults, taking the functional constraints on the primary input sequences of the circuit into consideration. Functional broadside tests are scan-based two-pattern tests for delay faults that create functional operation conditions during test application. To avoid the potential fault coverage loss due to the exclusive use of functional broadside tests, we also developed an optional DFT method based on state holding to improve fault coverage. The developed methods achieve high delay fault coverage for benchmark circuits using simple hardware.
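
    As a hedged illustration of the two-pattern structure described above, the sketch below forms a functional broadside test from a scan-in state and a launch input: the second pattern is produced by the circuit's own next-state function, so the test stays within functional operation conditions. The three-bit next-state function and the set of reachable states are invented stand-ins for a real sequential circuit.

```python
# A sketch of forming a functional broadside (launch-on-capture) two-pattern
# test. next_state() and the reachable-state set are hypothetical stand-ins
# for a real sequential circuit and its functional-mode state space.

def next_state(state, pi):
    """Toy next-state function: 3-bit state, 2-bit primary input."""
    s, p = int(state, 2), int(pi, 2)
    return format((s ^ (p << 1)) & 0b111, "03b")

# States reachable during functional operation (assumed known). Restricting
# scan-in to these states is what makes a broadside test "functional".
REACHABLE = {"000", "010", "101"}

def functional_broadside_test(s1, p1, p2):
    assert s1 in REACHABLE, "scan-in state must be functionally reachable"
    s2 = next_state(s1, p1)        # launch cycle creates the transitions
    return (s1 + p1, s2 + p2)      # the two patterns applied at speed

v1, v2 = functional_broadside_test("010", "11", "01")
print(v1, "->", v2)
```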

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. With the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature sizes, power density grows geometrically with technology scaling. Additionally, the power dissipated inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. As a result, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation), so during at-speed testing the supply grid experiences unacceptable supply IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variation has become a major problem. Due to the variation in signal delays caused by these parameter variations, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, which was addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques were proposed in the past for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no such techniques for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to the BTSP is novel, and is proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering with an arbitrary filling of don't care bits is insufficient to produce an effective reduction in switching activity during testing of large circuits. Since don't cares dominate the test sets for larger circuits, don't care filling plays a crucial role in reducing switching activity during testing.
    Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don't care bits in the test vectors; the don't cares are then filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching-activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching-activity and is the best known in the literature for this problem. A proof of the optimality of DP-fill in minimizing peak input-switching-activity is also provided in this thesis.
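
    As a rough illustration of the BTSP mapping described above (not the thesis algorithm itself), the sketch below models the peak input switching activity between consecutive fully specified test vectors as their Hamming distance and searches for the ordering that minimizes the largest adjacent distance; brute force over permutations is only viable for toy test sets.

```python
# Sketch of the ordering-as-Bottleneck-TSP view: peak input switching
# activity between consecutive vectors is modeled as Hamming distance, and
# we search for the order minimizing the largest adjacent distance.
from itertools import permutations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def best_bottleneck_order(tests):
    """Brute-force BTSP path; only feasible for tiny test sets."""
    best, best_cost = None, float("inf")
    for order in permutations(tests):
        cost = max(hamming(u, v) for u, v in zip(order, order[1:]))
        if cost < best_cost:
            best, best_cost = list(order), cost
    return best, best_cost

tests = ["0011", "0111", "1100", "1110"]
order, peak = best_bottleneck_order(tests)
print(order, "peak adjacent Hamming distance:", peak)
```

    For test cubes that still contain don't cares, DP-fill as described above would additionally choose the X-bit values so that a dynamic-programming lower bound on this bottleneck value is actually achieved.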

    Achieve complete robust path delay fault testability

    Recently, Pomeranz and Reddy [7] presented a test point insertion method to improve path delay fault testability in large combinational circuits. A test application scheme was developed that allows test points to be utilized as primary inputs and primary outputs during testing. The placement of test points was guided by the number of paths and was aimed at reducing this number. Indirectly, this approach achieved complete robust path delay fault testability in very low computation times. In this paper, we use their test application scheme, but we use more exact measures for guiding test point insertion, such as test generation and RD fault identification. Thus, we reduce the number of test points needed to achieve complete testability by ensuring that test points are inserted only on paths associated with path delay faults that need to be tested and that are not robustly testable. Experimental results show that an average reduction of about 70% in the number of test points over the approach of [7] can be obtained.
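
    The paper's exact measures are not reproduced here, but the selection step it motivates can be pictured as a covering problem: insert test points only where they fix paths whose delay faults must be tested and are not robustly testable. The sketch below is a generic greedy set-cover heuristic over hypothetical candidate test points, not the authors' procedure.

```python
# Generic greedy set cover over hypothetical test points; each candidate
# test point is mapped to the not-robustly-testable paths it would fix.
def select_test_points(untestable_paths, fixes):
    remaining = set(untestable_paths)
    chosen = []
    while remaining:
        tp = max(fixes, key=lambda t: len(fixes[t] & remaining))
        if not fixes[tp] & remaining:
            break                  # no candidate helps the remaining paths
        chosen.append(tp)
        remaining -= fixes[tp]
    return chosen

fixes = {"tp1": {"p1", "p2"}, "tp2": {"p2", "p3"}, "tp3": {"p4"}}
print(select_test_points({"p1", "p2", "p3", "p4"}, fixes))
```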

    A bi-objective genetic algorithm approach to risk mitigation in project scheduling

    A problem of risk mitigation in project scheduling is formulated as a bi-objective optimization problem in which the expected makespan and the expected total cost are both to be minimized. The expected total cost is the sum of four cost components: overhead cost, activity execution cost, cost of reducing risks, and penalty cost for tardiness. Risks for activities are predefined. For each risk at an activity, various levels are defined, corresponding to the results of different preventive measures. Only those risks with a probable impact on the duration of the related activity are considered here. Impacts of risks are not only accounted for through the expected makespan but are also translated into cost, and thus have an impact on the expected total cost. An MIP model and a heuristic solution approach based on genetic algorithms (GAs) are proposed. The experiments conducted indicate that GAs provide a fast and effective solution approach to the problem. For smaller problems, the results obtained by the GA are very good; for larger problems, there is room for improvement.
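
    As a minimal sketch of the bi-objective selection step, assuming each chromosome has already been decoded into a schedule and simulated to obtain its two objective values, the code below keeps the schedules that no other schedule dominates on (expected makespan, expected total cost). The fitness numbers are hypothetical.

```python
# Pareto-dominance filter for (expected makespan, expected total cost),
# both minimized. Objective values below are hypothetical GA fitnesses.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(population):
    return [p for p in population
            if not any(dominates(q, p) for q in population)]

population = [(42, 1300), (45, 1100), (42, 1250), (50, 1050), (48, 1400)]
print(pareto_front(population))   # the non-dominated schedules survive
```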

    Techniques for Improving Security and Trustworthiness of Integrated Circuits

    The integrated circuit (IC) development process is becoming increasingly vulnerable to malicious activities because untrusted parties can be involved in the IC development flow. There are four typical problems that impact the security and trustworthiness of ICs used in military, financial, transportation, or other critical systems: (i) Malicious inclusions and alterations, known as hardware Trojans, can be inserted into a design by modifying it during GDSII development and fabrication. Hardware Trojans in ICs may cause malfunctions, lower the reliability of ICs, leak confidential information to adversaries, or even destroy the system under specifically designed conditions. (ii) The number of circuit-related counterfeiting incidents reported by component manufacturers has increased significantly over the past few years, with recycled ICs contributing the largest percentage of the total reported counterfeiting incidents. Since these recycled ICs have been used in the field before, their performance and reliability have been degraded by aging effects and the harsh recycling process. (iii) Reverse engineering (RE) is the process of extracting a circuit's gate-level netlist and/or inferring its functionality. RE threatens a design because attackers can steal and pirate it (IP piracy), identify the device technology, or facilitate other hardware attacks. (iv) Traditional tools for uniquely identifying devices are vulnerable to non-invasive or invasive physical attacks. Securing the ID/key is of utmost importance, since leakage of even a single device ID/key could be exploited by an adversary to hack other devices or produce pirated devices. In this work, we have developed a series of design and test methodologies to deal with these four challenging issues and thus enhance the security, trustworthiness, and reliability of ICs. The techniques proposed in this thesis include: a path delay fingerprinting technique for detection of hardware Trojans, recycled ICs, and other types of counterfeit ICs, including remarked, overproduced, and cloned ICs with their unique identifiers; a Built-In Self-Authentication (BISA) technique to prevent hardware Trojan insertion by untrusted fabrication facilities; an efficient and secure split manufacturing technique, Obfuscated Built-In Self-Authentication (OBISA), to prevent reverse engineering by untrusted fabrication facilities; and a novel bit selection approach for obtaining the most reliable bits for SRAM-based physical unclonable functions (PUFs) across environmental conditions and silicon aging effects.
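
    A hedged sketch of the path delay fingerprinting idea mentioned above: a chip's measured delays on selected paths are compared against the statistical fingerprint of known-good chips, and large deviations suggest a Trojan payload or an aged, recycled part. The paths, delay values, and the 3-sigma threshold are illustrative assumptions, not the thesis' parameters.

```python
# Flag paths whose measured delay deviates from the golden statistics by
# more than k standard deviations. All numbers here are illustrative.
from statistics import mean, stdev

def fingerprint_outliers(golden_delays, measured, k=3.0):
    """golden_delays: path -> list of delays (ns) from trusted chips."""
    flags = {}
    for path, samples in golden_delays.items():
        mu, sigma = mean(samples), stdev(samples)
        flags[path] = abs(measured[path] - mu) > k * sigma
    return flags

golden = {"pathA": [1.00, 1.02, 0.98, 1.01], "pathB": [2.10, 2.05, 2.12, 2.08]}
measured = {"pathA": 1.01, "pathB": 2.45}   # pathB looks suspicious
print(fingerprint_outliers(golden, measured))
```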

    Project network models with discounted cash flows. A guided tour through recent developments.

    The vast majority of project scheduling methodologies presented in the literature have been developed with the objective of minimizing project duration subject to precedence and other constraints. In doing so, the financial aspects of project management are largely ignored. Recent efforts have taken discounted cash flows into account and have focused on the maximization of the net present value (npv) of the project as the more appropriate objective. In this paper we offer a guided tour through the important recent developments in the expanding field of research on deterministic and stochastic project network models with discounted cash flows. Subsequent to a close examination of the rationale behind the npv objective, we offer a taxonomy of the problems studied in the literature and critically review the major contributions. Proper attention is given to npv maximization models for the unconstrained scheduling problem with known cash flows, optimal and suboptimal scheduling procedures with various types of resource constraints, and the problem of determining both the timing and amount of payments.
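
    As a small worked example of the npv objective, assuming payments c_i occur at activity completion times t_i and are discounted at a per-period rate r, the objective is npv = sum_i c_i (1+r)^(-t_i); shifting the schedule changes the objective even when the makespan does not. The cash flows below are invented.

```python
# npv of a schedule's cash flows at per-period discount rate r; the cash
# flows and completion times are invented for illustration.
def npv(cash_flows, r=0.10):
    """cash_flows: list of (amount, completion time in periods)."""
    return sum(c / (1 + r) ** t for c, t in cash_flows)

early = [(-100, 0), (60, 2), (80, 4)]   # outlay now, receipts early
late = [(-100, 1), (60, 3), (80, 5)]    # same flows, shifted one period
print(round(npv(early), 2), round(npv(late), 2))
```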

    Automating defects simulation and fault modeling for SRAMs

    The continuous improvement in manufacturing process density for very deep-submicron technologies constantly leads to new classes of defects in memory devices. Exploring the effect of fabrication defects in future technologies, and identifying new classes of realistic functional fault models with their corresponding test sequences, is a time-consuming task that up to now has mainly been performed by hand. This paper proposes a new approach to automate this procedure. The proposed method exploits the capabilities of evolutionary algorithms to automatically identify faulty behaviors in defective memories and to define the corresponding fault models and relevant test sequences. Target defects are modeled at the electrical level in order to tailor the results to the specific technology and memory architecture.
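
    To make the evolutionary loop concrete, the sketch below evolves a short sequence of memory operations against a toy defect model (a cell stuck at zero), keeping mutants that expose more read mismatches between the good and defective models. The operation alphabet and the (1+1) evolution strategy are simplifying assumptions; the paper works with electrical-level defect simulation.

```python
# (1+1) evolution of a memory test sequence against a toy stuck-at-zero
# cell; fitness counts reads where good and defective cells disagree.
import random

OPS = ["w0", "w1", "r"]                  # write 0, write 1, read

def run(seq, stuck_at_zero):
    cell, reads = 0, []
    for op in seq:
        if op == "w0":
            cell = 0
        elif op == "w1":
            cell = 0 if stuck_at_zero else 1
        else:
            reads.append(cell)
    return reads

def fitness(seq):
    good, bad = run(seq, False), run(seq, True)
    return sum(g != b for g, b in zip(good, bad))

seq = [random.choice(OPS) for _ in range(6)]
for _ in range(200):                     # keep mutants that detect more
    mutant = list(seq)
    mutant[random.randrange(len(mutant))] = random.choice(OPS)
    if fitness(mutant) >= fitness(seq):
        seq = mutant
print(seq, "detecting reads:", fitness(seq))
```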

    A complete design path for the layout of flexible macros

    XIV + 172 pages; 24 cm.