Testability enhancement of a basic set of CMOS cells
Testing should be evaluated as the ability of the test patterns to cover realistic faults, and high-quality IC products demand high-quality testing. We use a test strategy based on physical design for testability to discover both open and short faults, which are otherwise difficult or even impossible to detect. Consequently, layout-level design-for-testability (LLDFT) rules have been developed that prevent these faults, or at least reduce the chance of their appearing. The main purpose of this work is to apply a practical set of LLDFT rules to the library cells designed by the Centre Nacional de Microelectrònica (CNM) and obtain a highly testable cell library. The main results of applying the LLDFT rules (area overheads and performance degradation) are summarized. The results are significant because IC design is highly repetitive: a small effort to improve cell layout can bring about great improvement in design.
Building fault detection data to aid diagnostic algorithm creation and performance testing.
It is estimated that approximately 4-5% of national energy consumption can be saved through corrections to existing commercial building controls infrastructure and resulting improvements to efficiency. Correspondingly, automated fault detection and diagnostics (FDD) algorithms are designed to identify the presence of operational faults and their root causes. A diversity of techniques is used for FDD, spanning physical-model, black-box, and rule-based approaches. A persistent challenge has been the lack of common datasets and test methods to benchmark their performance accuracy. This article presents a first-of-its-kind public dataset with ground-truth data on the presence and absence of building faults. The dataset spans a range of seasons and operational conditions and encompasses multiple building system types. It contains information on fault severity, as well as data points reflective of the measurements in building control systems that FDD algorithms typically have access to. The data were created using simulation models as well as experimental test facilities, and will be expanded over time.
Memory read faults: taxonomy and automatic test generation
This paper presents an innovative algorithm for the automatic generation of March tests. The proposed approach is able to generate an optimal March test for an unconstrained set of memory faults in very low computation time. Moreover, we propose a new, complete taxonomy for memory read faults, a class of faults never carefully addressed in the past.
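To make the idea of a March test concrete, here is a minimal sketch (not the paper's algorithm) that applies the classic MATS+ March test {⇑(w0); ⇑(r0,w1); ⇓(r1,w0)} to a toy bit-addressable memory model with an optionally injected stuck-at fault. All names and the memory model are illustrative assumptions.

```python
# Illustrative sketch: MATS+ March test on a toy memory model.
# The fault-injection scheme and function names are invented for this example.

def run_march(memory_size, fault_addr=None, stuck_value=None):
    """Simulate MATS+ {up(w0); up(r0,w1); down(r1,w0)} and report detection."""
    mem = [0] * memory_size

    def write(addr, val):
        # A stuck-at fault forces the cell to a fixed value regardless of writes.
        mem[addr] = stuck_value if addr == fault_addr else val

    def read(addr):
        return mem[addr]

    detected = False
    for a in range(memory_size):            # M0: ascending, write 0
        write(a, 0)
    for a in range(memory_size):            # M1: ascending, read 0 then write 1
        if read(a) != 0:
            detected = True
        write(a, 1)
    for a in reversed(range(memory_size)):  # M2: descending, read 1 then write 0
        if read(a) != 1:
            detected = True
        write(a, 0)
    return detected

print(run_march(8, fault_addr=3, stuck_value=0))  # → True (stuck-at-0 detected)
```

The stuck-at-0 cell silently drops the write of 1 in element M1, so the read in M2 observes the mismatch; a fault-free memory passes all reads.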
Building fault detection and diagnostics: Achieved savings, and methods to evaluate algorithm performance
Fault detection and diagnosis (FDD) represents one of the most active areas of research and commercial product development in the buildings industry. This paper addresses two questions concerning FDD implementation and advancement: 1) What are today's users of FDD saving and spending on the technology? 2) What methods and datasets can be used to evaluate and benchmark FDD algorithm performance? Relevant to the first question, 26 organizations that use FDD across a total of 550 buildings and 97M sf achieved median savings of 8%. Twenty-seven FDD users reported that the median base cost for FDD software, annual recurring software cost, and annual labor cost were $2.7 and $8 per monitoring point, with a median implementation size of approximately 1,300 points. To address the second question, this paper describes a systematic methodology for evaluating the performance of FDD algorithms, curates an initial test dataset of air handling unit (AHU) system faults, and completes a trial to demonstrate the evaluation process on three sample FDD algorithms. The work provides a first step toward a standard evaluation of different FDD technologies. It showed that the test methodology is scalable and repeatable, provided an understanding of the types of insights that can be gained from algorithm performance testing, and highlighted the priorities for further expanding the test dataset.
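The core of such an evaluation is scoring an algorithm's fault/no-fault calls against ground-truth labels. The sketch below is an illustrative assumption of that scoring step, not the paper's actual code; the function name and the sample data are invented.

```python
# Illustrative sketch: scoring binary FDD predictions against ground truth.
# 1 = fault present, 0 = no fault. Data below is made up.

def fdd_scores(ground_truth, predictions):
    """Return (true-positive rate, false-positive rate) for binary labels."""
    tp = sum(1 for g, p in zip(ground_truth, predictions) if g and p)
    fp = sum(1 for g, p in zip(ground_truth, predictions) if not g and p)
    fn = sum(1 for g, p in zip(ground_truth, predictions) if g and not p)
    tn = sum(1 for g, p in zip(ground_truth, predictions) if not g and not p)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

truth = [1, 1, 1, 0, 0, 0, 0, 1]
preds = [1, 1, 0, 0, 1, 0, 0, 1]
print(fdd_scores(truth, preds))  # → (0.75, 0.25)
```

Reporting both rates matters because an FDD tool that flags everything achieves a perfect true-positive rate while burying operators in false alarms.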
The development of an interim generalized gate logic software simulator
A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of IGGLOSS is its high-speed simulation: 9.5 × 10^6 gates/CPU-sec for nonfaulted circuits and 4.4 × 10^6 gates/CPU-sec for faulted circuits on a VAX 11/780 host computer.
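Stochastic estimation of fault-detection latency can be pictured with a toy Monte Carlo model: if each self-test cycle detects a given fault with some probability, the latency is geometrically distributed and its mean can be estimated by sampling. This is a hedged sketch of the general idea only; the per-cycle detection probability and all names are invented, and IGGLOSS itself works at the gate level.

```python
import random

# Toy Monte Carlo sketch: mean fault-detection latency when each self-test
# cycle independently detects the fault with probability `detect_prob`.
# Parameters are illustrative assumptions, not IGGLOSS internals.

def estimate_latency(detect_prob, trials=10_000, seed=0):
    """Estimate mean cycles until detection (geometric distribution)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        cycles = 1
        while rng.random() >= detect_prob:  # fault escapes this cycle
            cycles += 1
        total += cycles
    return total / trials

# Expected latency is 1 / detect_prob, so this prints a value near 4.0.
print(estimate_latency(0.25))
```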
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10^-9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pins of, and internal to, a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior that results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults that cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
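The characterization step described above (time between errors, error duration) can be sketched as extracting burst statistics from a time-indexed error trace. This is an illustrative assumption of that step; the trace encoding (one boolean per cycle) and all names are invented.

```python
# Illustrative sketch: extract the two statistics compared by the technique
# from an error trace (1 = erroneous output at that cycle). Trace is made up.

def error_stats(trace):
    """Return (gaps between error bursts, durations of error bursts)."""
    durations, gaps = [], []
    run, gap = 0, 0
    for erroneous in trace:
        if erroneous:
            if gap and durations:   # a gap just closed between two bursts
                gaps.append(gap)
            gap = 0
            run += 1
        else:
            if run:                 # an error burst just ended
                durations.append(run)
            run = 0
            gap += 1
    if run:
        durations.append(run)
    return gaps, durations

trace = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
print(error_stats(trace))  # → ([3, 1], [2, 1, 3])
```

Distributions of these two quantities, gathered separately for pin-level and internal faults, are what the statistical comparison would operate on.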
Detection of hard-to-detect stuck-at faults and generation of their tests based on testability functions
An efficient method is proposed for detecting hard-to-detect stuck-at faults of combinational circuits and generating all tests, or one test vector, for them. The method is based on the previously proposed efficient methods of constructing the ODNF and ROBDD representations of the observability and stuck-at fault detection Boolean functions corresponding to a line of the combinational circuit.
Product assurance technology for custom LSI/VLSI electronics
The technology for obtaining custom integrated circuits from CMOS-bulk silicon foundries using a universal set of layout rules is presented. The technical efforts were guided by the requirement to develop a 3-micron CMOS test chip for the Combined Release and Radiation Effects Satellite (CRRES). This chip contains both analog and digital circuits. The development employed all the elements required to obtain custom circuits from silicon foundries, including circuit design, foundry interfacing, circuit test, and circuit qualification.
The Design of Fail-Safe Logic
This paper examines the behavior of digital logic families, specifically identifying the properties and characteristics of digital fail-safe logic. Fail-safe digital design is examined using classical logic and semiconductor theory. The effects of failures internal to the structure of digital integrated circuits are analyzed, and a discussion of pertinent logic design is presented. Techniques to detect all types of multiple failure modes are examined. With these results, a method of design for fail-safe logic is presented and analyzed.
Investigation of the Prevalence of Faults in the Heating, Ventilation, and Air-Conditioning Systems of Commercial Buildings
This dissertation describes a large-scale investigation of heating, ventilation, and air-conditioning (HVAC) fault prevalence in commercial buildings in the United States. A multi-year dataset with 36,556 pieces of HVAC equipment including air handling units (AHUs), air terminal units (ATUs), and packaged rooftop units (RTUs) was analyzed to determine values for several HVAC fault prevalence metrics. The primary source of data for this study comes from three commercial fault detection and diagnostics (FDD) providers. Since each FDD provider uses different terms to refer to the same fault in an HVAC system, a mapping function was created for each FDD provider’s dataset, to convert the fault reports to a single standardized fault identifier. The fault identifier is taken from a standard taxonomy that was created for this purpose.
Since the commercial FDD software outputs are inherently subject to some level of error, i.e., they can produce false negatives and false positives, a field study was conducted to gain greater insight into the commercial FDD software results. Two buildings served by one of the FDD providers were selected. The RTUs serving these two buildings were monitored for about two weeks using our installed data loggers. The actual faults in these buildings were identified using methods that we developed or selected from the literature. The results of the field study were compared with the FDD provider fault reports.
This study also proposes a data-driven FDD strategy for RTUs, using machine learning classification methods. The FDD task is formulated as a multi-class classification problem: seven typical RTU faults are discriminated from one another and from the normal condition. Nine classification methods were applied to a dataset of simulation data, which was split into a training set and a test set. The performance of the classifiers for individual faults was characterized using the true-positive-rate and false-positive-rate statistical measures. The relative importance of input variables was analyzed and is also discussed in the dissertation.
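The per-fault characterization described above amounts to computing one-vs-rest true-positive and false-positive rates from multi-class predictions. The sketch below is an illustrative assumption of that computation, not the dissertation's code; the class labels and sample data are invented.

```python
# Illustrative sketch: per-class TPR/FPR for a multi-class FDD classifier,
# computed one-vs-rest. Labels and data below are invented.

def per_class_rates(y_true, y_pred, classes):
    """Return {class: (TPR, FPR)}, each class scored one-vs-rest."""
    rates = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t != c and p != c)
        rates[c] = (tp / (tp + fn) if tp + fn else 0.0,
                    fp / (fp + tn) if fp + tn else 0.0)
    return rates

# "normal" plus two hypothetical fault classes.
y_true = ["normal", "fan_fault", "damper_fault", "normal", "fan_fault"]
y_pred = ["normal", "fan_fault", "normal", "normal", "damper_fault"]
print(per_class_rates(y_true, y_pred, ["normal", "fan_fault", "damper_fault"]))
```

Scoring each fault class separately exposes failure modes that a single overall accuracy number would hide, e.g. a classifier that confuses one fault with the normal condition.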
Advisor: David Yuill