    Delay test for diagnosis of power switches

    Power switches are used as part of the power-gating technique to reduce the leakage power of a design. To the best of our knowledge, this is the first work in the open literature to present a systematic method for accurately diagnosing power switches. The proposed diagnosis method utilizes a recently proposed DFT solution for efficient testing of power switches in the presence of PVT variation. It divides the power switches into segments such that any faulty power switch is detectable, thereby achieving high diagnosis accuracy. The method has been validated through SPICE simulation using a number of ISCAS benchmarks synthesized with a 90-nm gate library. Simulation results show that the worst-case loss of accuracy is less than 4.5% under process variation, and less than 12% when voltage and temperature (VT) variations are considered.
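
    The segment-based idea can be pictured with a short sketch: enable one segment of power switches at a time, measure the virtual-VDD charging delay, and flag segments whose delay exceeds the fault-free value by a margin. This is a minimal Python sketch; the measure_charge_delay hook, the nominal delay, and the margin are illustrative assumptions, not the paper's DFT interface.

    def diagnose_segments(switches, segment_size, measure_charge_delay,
                          nominal_delay, margin=0.10):
        """Return indices of segments whose charging delay points to a faulty switch."""
        segments = [switches[i:i + segment_size]
                    for i in range(0, len(switches), segment_size)]
        faulty = []
        for idx, segment in enumerate(segments):
            delay = measure_charge_delay(segment)        # enable only this segment
            if delay > nominal_delay * (1.0 + margin):   # slower charging: weak or open switch
                faulty.append(idx)
        return faulty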

    Diagnosis of power switches with power-distribution-network consideration

    This paper examines the diagnosis of power switches when the power-distribution network (PDN) is considered as a high-resolution distributed electrical model. The analysis shows that for a diagnosis method to achieve high diagnosis accuracy and resolution, the distributed nature of the PDN should not be simplified into a lumped model. For this reason, a PDN-aware diagnosis method for power-switch fault grading is proposed. The proposed method utilizes a novel signature-generation design-for-testability (DFT) unit, whose signatures are processed by a novel diagnosis algorithm that grades the magnitude of faults. Through simulations of physical-layout SPICE models, we explore the trade-offs of the proposed method between diagnosis accuracy and resolution on the one hand and area overhead on the other, and we show that 100% diagnosis accuracy and up to 98% diagnosis resolution can be achieved at negligible cost.
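
    As a rough illustration of signature-based fault grading, the sketch below maps a measured DFT signature (for example, a charging time) onto a discrete fault grade through a calibration table precomputed from PDN-aware SPICE simulation. The table, the signature value, and the grade labels are hypothetical; the paper's actual algorithm is not reproduced here.

    import bisect

    def grade_fault(signature, calibration):
        """calibration: list of (signature_threshold, grade), sorted ascending."""
        thresholds = [t for t, _ in calibration]
        i = min(bisect.bisect_left(thresholds, signature), len(calibration) - 1)
        return calibration[i][1]

    # Example with made-up charging-time thresholds in seconds:
    grade = grade_fault(3.2e-9, [(1e-9, "minor"), (2e-9, "moderate"), (4e-9, "severe")])
    # -> "severe"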

    Reducing Library Characterization Time for Cell-aware Test while Maintaining Test Quality

    Cell-aware test (CAT) explicitly targets faults caused by defects inside library cells to improve test quality, compared with conventional automatic test pattern generation (ATPG) approaches, which target faults only at the boundaries of library cells. The CAT methodology consists of two stages. In Stage 1, based on dedicated analog simulation, per-cell library characterization identifies which cell-level test pattern detects which cell-internal defect; this detection information is encoded in a defect detection matrix (DDM). In Stage 2, with the DDMs as inputs, cell-aware ATPG generates chip-level test patterns for each circuit design built up of interconnected instances of library cells. This paper focuses on Stage 1, library characterization, as both test quality and cost are determined by the set of cell-internal defects identified and simulated in the CAT tool flow. Aiming at the best test quality, we first propose an approach to identify a comprehensive set, referred to as the full set, of potential open- and short-defect locations based on the cell layout. However, the full set of defects can be large even for a single cell, making the time cost of the defect simulation in Stage 1 unaffordable. Subsequently, to reduce the simulation time, we collapse the full set into a compact set of defects, which serves as the input of the defect simulation; the full set is stored for diagnosis and failure analysis. By inspecting the simulation results, we propose a method to verify the test quality based on the compact set of defects and, if necessary, to compensate the test quality to the same level as that based on the full set. For the 351 combinational library cells in Cadence’s GPDK045 45 nm library, we simulate only 5.4% of the defects from the full set while achieving the same test quality as with the full set. In total, the simulation time, via linear extrapolation per cell, would be reduced by 96.4% compared with simulating the full set of defects.
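
    One common layout-based collapsing rule, shown here only as a hedged sketch of the kind of Stage 1 reduction described above, treats open defects on the same net and short defects bridging the same net pair as one equivalence class, and simulates a single representative per class. The defect encoding is an assumption for illustration, not the paper's algorithm.

    from collections import defaultdict

    def collapse_defects(defects):
        """defects: dicts such as {'kind': 'open', 'nets': ('A',)} or
        {'kind': 'short', 'nets': ('A', 'B')}. Returns one representative per
        equivalence class plus the full class map, kept for diagnosis."""
        classes = defaultdict(list)
        for d in defects:
            classes[(d['kind'], tuple(sorted(d['nets'])))].append(d)
        representatives = [group[0] for group in classes.values()]
        return representatives, dict(classes)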

    Time-efficient fault detection and diagnosis system for analog circuits

    Time-efficient fault analysis and diagnosis of analog circuits are the most important prerequisites for online health monitoring of electronic equipment, which faces the continuing challenges of ultra-large-scale integration, component tolerance, and limited test points combined with multiple faults. This work reports an FPGA (field-programmable gate array)-based analog fault diagnostic system that applies two-dimensional information fusion, two-port network analysis, and interval-math theory. The proposed system has three advantages over traditional ones. First, it offers high processing speed and a compact circuit size, as the embedded algorithms execute in parallel on the FPGA. Second, the hardware structure has good compatibility with other diagnostic algorithms. Third, the equipped Ethernet interface enhances its flexibility for remote monitoring and control. Experimental results obtained from two realistic example circuits indicate that the proposed methodology yields competitive performance in both diagnosis accuracy and time-effectiveness, reaching about 96% accuracy within 60 ms of computation time.
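
    The interval-math step can be illustrated with a small sketch: each candidate fault class carries a response interval precomputed under component tolerances, and a measurement is matched against every interval containing it. The function, the fault labels, and the numbers are illustrative assumptions, not the FPGA implementation.

    def match_fault_classes(measurement, fault_intervals):
        """fault_intervals: {fault name: (low, high)} in consistent units."""
        return [name for name, (lo, hi) in fault_intervals.items()
                if lo <= measurement <= hi]

    # Example: a measured gain of 2.7 at a test frequency singles out one class.
    candidates = match_fault_classes(
        2.7, {"nominal": (2.9, 3.1), "R1 +50%": (2.5, 2.8), "C2 open": (1.0, 1.4)})
    # -> ['R1 +50%']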

    Optimization of Cell-Aware Test

    DFT Architecture with Power-Distribution-Network Consideration for Delay-based Power Gating Test

    This paper shows that existing delay-based testing techniques for power gating suffer both fault-coverage loss and yield loss due to deviations in the charging delay introduced by the distributed nature of power-distribution networks (PDNs). To restore this test-quality loss, which can reach up to 67.7% false passes and 25% false fails for stuck-open faults, we propose design-for-testability (DFT) logic that accounts for a distributed PDN. The proposed logic is optimized by an algorithm that also handles uncertainty due to process variations and offers trade-off flexibility between test-application time and area cost. A calibration process is proposed to bridge model-to-hardware discrepancies and increase test quality in the presence of systematic variations. Through SPICE simulations, we show complete recovery of the test quality lost due to PDNs. The proposed method is robust, sustaining 80.3% to 98.6% of the achieved test quality under high random and systematic process variations. To the best of our knowledge, this paper presents the first analysis of the PDN impact on test quality and offers a unified test solution for both ring and grid power-gating styles.
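
    The calibration idea can be sketched briefly: pass/fail delay thresholds derived from the PDN model are rescaled by the ratio between delays measured on known-good reference dies and the model's prediction, which is one simple way to absorb a systematic model-to-hardware discrepancy. The linear rescaling and the function names are assumptions for illustration only.

    def calibrate_thresholds(model_thresholds, model_delay, reference_delays):
        """reference_delays: charging delays measured on known-good dies."""
        scale = (sum(reference_delays) / len(reference_delays)) / model_delay
        return [threshold * scale for threshold in model_thresholds]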

    Development of new fault detection methods for rotating machines (roller bearings)

    Early fault diagnosis of roller bearings is extremely important for rotating machines, especially for high-speed, automatic, and precise machines. Many research efforts have focused on fault diagnosis and detection of roller bearings, since they constitute one of the most important elements of rotating machinery. In this study, a combination method is proposed for early damage detection of roller bearings. The wavelet packet transform (WPT) is applied to the collected data for denoising, and the resulting clean data are decomposed into elementary components, called intrinsic mode functions (IMFs), using the ensemble empirical mode decomposition (EEMD) method. The normalized energies of the first three IMFs are used as input to a support vector machine (SVM) to recognize whether signals originate from healthy or faulty bearings. Then, since there is no robust guideline for determining the amplitude of the added noise in the EEMD technique, a new performance-improved EEMD (PIEEMD) is proposed to determine the appropriate value of the added noise. A novel feature-extraction method is also proposed for detecting small-size defects using the Teager-Kaiser energy operator (TKEO). The TKEO is applied to the obtained IMFs to create new feature vectors as input data for a one-class SVM. The results of applying the method to acceleration signals collected from an experimental bearing test rig demonstrate that it can be used successfully for early damage detection of roller bearings.

    Most of the diagnostic methods developed to date can be applied only to stationary working conditions (constant speed and load). However, bearings often work under time-varying conditions, as in wind-turbine support bearings, mining-excavator bearings, vehicles, robots, and all processes with run-up and run-down transients. Damage identification for bearings working under non-stationary operating conditions, especially for early/small defects, requires appropriate techniques, generally different from those used for stationary conditions, in order to extract features that are sensitive to faults yet insensitive to variations in the operating conditions. Some methods have been proposed for damage detection of bearings working under time-varying speed conditions, but their application may increase the instrumentation cost because they require a phase reference signal; furthermore, methods such as order tracking are applicable only when the speed variation is limited. In this study, a novel combined method based on cointegration is proposed to develop fault features that are sensitive to the presence of defects while at the same time insensitive to changes in the operating conditions; it requires no additional measurements and can identify defects even under considerable speed variations. The signals acquired during the run-up condition are decomposed into IMFs using the performance-improved EEMD method. The cointegration method is then applied to the intrinsic mode functions to extract stationary residuals, and the feature vectors are created by applying the Teager-Kaiser energy operator to these stationary residuals. Finally, the feature vectors of the healthy-bearing signals are used to construct a separating hyperplane using a one-class support vector machine. The proposed method was applied to vibration signals measured on an experimental bearing test rig, and the results verified that it can successfully distinguish between healthy and faulty bearings even when the shaft speed changes dramatically.
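
    A minimal end-to-end sketch of the stationary-condition pipeline described above: EEMD decomposes the vibration signal into IMFs, the Teager-Kaiser energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1] sharpens impulsive content, and the normalized energies of the first three IMFs feed a one-class SVM trained on healthy records only. It assumes the PyEMD package (pip install EMD-signal) for EEMD; the synthetic signals and SVM parameters are toy values, not the study's data or settings.

    import numpy as np
    from PyEMD import EEMD                    # assumed EEMD implementation
    from sklearn.svm import OneClassSVM

    def tkeo(x):
        """Discrete Teager-Kaiser energy operator."""
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    def imf_energy_features(signal, n_imfs=3):
        imfs = EEMD(trials=50).eemd(signal)[:n_imfs]
        energies = np.array([np.sum(tkeo(imf) ** 2) for imf in imfs])
        return energies / energies.sum()      # normalized TKEO energy per IMF

    # Toy data: healthy tones plus noise; a "faulty" signal with sparse impacts.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 2048)
    healthy = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
               for _ in range(8)]
    faulty = healthy[0] + (rng.random(t.size) < 0.01) * 2.0

    model = OneClassSVM(nu=0.05, gamma="scale").fit(
        [imf_energy_features(s) for s in healthy])
    print(model.predict([imf_energy_features(faulty)]))   # -1 flags an outlier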

    Experimental analysis of computer system dependability

    This paper reviews an area that has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for the three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data-processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
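
    Importance sampling, mentioned above as a Monte Carlo accelerator, can be illustrated in a few lines: to estimate a rare failure probability P(X > t) for an exponential lifetime with rate lam, draw samples from a heavier-tailed exponential and reweight each hit by the likelihood ratio. The rates and horizon below are toy values for the sketch.

    import math, random

    def failure_prob_is(t, lam=1.0, lam_bias=0.1, n=100_000, seed=1):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            x = rng.expovariate(lam_bias)              # sample the biased density g
            if x > t:                                  # rare under f, common under g
                # likelihood ratio f(x)/g(x) for exponentials:
                total += (lam / lam_bias) * math.exp(-(lam - lam_bias) * x)
        return total / n

    print(failure_prob_is(10.0))   # close to the exact value exp(-10) ~ 4.54e-5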