
    Depth estimation of inner wall defects by means of infrared thermography

    There are two common approaches to interpreting infrared thermography data: qualitative and quantitative. Under certain conditions the qualitative approach is sufficient, but accurate interpretation requires the quantitative one. This report proposes a method for quantitatively estimating the depth of a defect at the inner wall of a petrochemical furnace. The finite element method (FEM) is used to model the multilayer wall and to simulate the temperature distribution caused by the defect. Five informative parameters are proposed for depth estimation: the maximum temperature over the defect area (Tmax-def), the average temperatures at the right, left, and top edges of the defect (Tavg-right, Tavg-left, Tavg-top), and the average temperature over the sound area (Tavg-so). Artificial Neural Networks (ANNs) were trained on these parameters to estimate the defect depth. Two architectures, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, were trained for various defect depths and used to estimate both controlled and testing data. The results show 100% depth-estimation accuracy on the controlled data; on the testing data, accuracy was above 90% for the MLP network and above 80% for the RBF network. These results indicate that the proposed informative parameters are useful for estimating defect depth and that ANNs can be used for quantitative interpretation of thermography data.
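
    As a rough illustration of the workflow this abstract describes, the sketch below trains an MLP regressor on the five named temperature parameters. The feature ranges, the synthetic depth labels, and the network size are invented placeholders rather than the paper's FEM data.

    ```python
    # Illustrative sketch of the ANN depth-estimation setup described above.
    # Feature values, depth labels, and network size are made-up placeholders;
    # the paper's FEM-simulated temperatures would take their place.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Each row: [Tmax_def, Tavg_right, Tavg_left, Tavg_top, Tavg_so] in deg C.
    X = rng.uniform(low=[450, 420, 420, 430, 400],
                    high=[520, 470, 470, 480, 430], size=(200, 5))
    # Synthetic defect depths (mm), loosely tied to the temperature contrast.
    y = 0.05 * (X[:, 0] - X[:, 4]) + rng.normal(0, 0.2, 200)

    scaler = StandardScaler().fit(X)
    mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    mlp.fit(scaler.transform(X), y)

    print("Predicted depth (mm):", mlp.predict(scaler.transform(X[:1]))[0])
    ```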

    Non-contact Microelectronic Device Inspection Systems And Methods

    Non-contact microelectronic device inspection systems and methods are discussed and provided. Some embodiments include a method of generating a virtual reference device (or chip). This approach uses statistics to find the devices in a sample set that are most similar to one another and then averages their time-domain signals to generate the virtual reference. Signals associated with the virtual reference can then be correlated with time-domain signals obtained from the packages under inspection to obtain a quality signature. Defective and non-defective devices are separated by fitting a beta distribution to the quality-signature histogram of the inspected packages and determining a cutoff threshold for an acceptable quality signature. Other aspects, features, and embodiments are also claimed and described.
    Georgia Tech Research Corporation
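
    The pipeline lends itself to a compact sketch: build a virtual reference by averaging the most mutually similar signals, correlate every device against it, fit a beta distribution to the resulting quality signatures, and threshold. Everything below (the signal model, the similarity rule, the 1% cutoff quantile) is an illustrative assumption, not the patented method's actual parameters.

    ```python
    # Hedged sketch of the virtual-reference idea described above.
    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(1)
    signals = rng.normal(0, 1, (50, 256)) + np.sin(np.linspace(0, 8 * np.pi, 256))

    # Pick the mutually most-similar devices (highest mean pairwise correlation).
    corr = np.corrcoef(signals)
    most_similar = np.argsort(corr.mean(axis=1))[-10:]
    virtual_ref = signals[most_similar].mean(axis=0)

    def quality(sig):
        """Quality signature: peak normalized correlation with the reference."""
        s, r = sig - sig.mean(), virtual_ref - virtual_ref.mean()
        return np.correlate(s, r, "full").max() / (np.linalg.norm(s) * np.linalg.norm(r))

    q = np.clip([quality(s) for s in signals], 1e-6, 1 - 1e-6)  # keep inside (0, 1)

    # Fit a beta distribution to the quality signatures and pick a cutoff.
    a, b, loc, scale = beta.fit(q, floc=0, fscale=1)
    cutoff = beta.ppf(0.01, a, b)               # flag the lowest 1% as suspect
    print(f"beta(a={a:.2f}, b={b:.2f}), cutoff={cutoff:.3f}")
    print("suspect devices:", np.where(q < cutoff)[0])
    ```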

    Project Cost Contingency Estimation Modeling Using Risk Analysis and Fuzzy Expert System

    Determining an appropriate project cost contingency, especially at the tendering stage, is very important for a successful bid. Setting the contingency too high makes the tender uncompetitive, while setting it too low leaves the project exposed to risks that may cause cost overruns during construction. Traditionally, contractors estimate cost contingency by subjective judgment, for example as 5-10% of the base cost estimate by analogy with past similar projects. Such estimates are typically derived from intuition, past experience, and historical data, but they lack a sound basis and are difficult to justify or defend. More objective methods for estimating project cost contingency have been proposed; however, most still rely on formal modeling techniques that require the user to be familiar with statistical methods. This research proposes a flexible and rational approach to contingency estimation based on risk analysis and a fuzzy expert system, which accommodates contractors' subjective judgment while incorporating risk analysis and management concepts into the analysis process. The proposed method involved developing a cost contingency model for building and infrastructure projects in Malaysia. To develop the model, a number of common risk factors were identified from the literature, and data and information from the literature were used to specify the fuzzy expert system's properties, such as its membership functions, rule base, and fuzzy inference mechanism. The fuzzy expert system was developed using scenarios to predict the percentage cost contingency allocation. The scenarios were then validated on three case projects through face-to-face interviews with the project managers. The validation found that the system's predictions were within 20% of the actual cost contingencies. A computer program was also developed in MATLAB to demonstrate the model's application in estimating tender price during the bidding stage.
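
    A minimal Mamdani-style fuzzy inference sketch conveys the flavor of such a system. The two risk inputs, the membership functions, and the three rules below are invented for illustration; the thesis derives its own factors, membership functions, and rule base from the literature and interviews.

    ```python
    # Minimal Mamdani-style fuzzy inference sketch for contingency estimation.
    # Risk factors, membership functions, and rules are invented placeholders.
    import numpy as np

    def trimf(x, a, b, c):
        """Triangular membership function with corners a <= b <= c."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                     (c - x) / (c - b + 1e-12)), 0.0)

    # Output universe: contingency as a percentage of base cost.
    pct = np.linspace(0, 20, 201)
    low_out = trimf(pct, 0, 3, 7)
    med_out = trimf(pct, 5, 9, 13)
    high_out = trimf(pct, 11, 15, 20)

    def contingency(design_risk, site_risk):   # inputs scored on [0, 10]
        lo = lambda x: trimf(x, -1, 0, 5)      # left shoulder at 0
        hi = lambda x: trimf(x, 5, 10, 11)     # right shoulder at 10
        # Three toy rules: min for AND, max for OR and aggregation,
        # centroid defuzzification at the end.
        agg = np.maximum.reduce([
            np.minimum(min(lo(design_risk), lo(site_risk)), low_out),
            np.minimum(max(lo(design_risk), hi(site_risk)), med_out),
            np.minimum(min(hi(design_risk), hi(site_risk)), high_out),
        ])
        return np.sum(agg * pct) / np.sum(agg)

    print(f"Suggested contingency: {contingency(7.5, 8.0):.1f}% of base cost")
    ```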

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features of the kind routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, together with the prospect of single-chip systems, will mandate the use of fault tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault tolerance; it is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test-sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that of pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry leads to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage is to design the test circuitry to be as distributed as possible and to be tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that faulty units are discarded. This raises the question of the optimum unit size. A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
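
    As a hint of what a yield model with redundancy can look like, the sketch below uses a simple Poisson defect model in which spare units only help if the test-and-reconfiguration overhead is itself defect-free. The areas and defect density are arbitrary assumptions; the thesis develops its own, more detailed model.

    ```python
    # One simple form a yield-with-redundancy model can take (illustrative only).
    # Units fail independently under a Poisson defect model, and the
    # test/reconfiguration overhead must be defect-free for spares to be usable.
    from math import comb, exp

    def chip_yield(n_needed, n_spares, unit_area, overhead_area, defect_density):
        y_unit = exp(-unit_area * defect_density)        # yield of one unit
        y_overhead = exp(-overhead_area * defect_density)
        total = n_needed + n_spares
        # At least n_needed of the units must be good.
        p_units = sum(comb(total, k) * y_unit**k * (1 - y_unit)**(total - k)
                      for k in range(n_needed, total + 1))
        return y_overhead * p_units

    base = chip_yield(16, 0, unit_area=0.05, overhead_area=0.0, defect_density=1.0)
    spared = chip_yield(16, 2, unit_area=0.05, overhead_area=0.02, defect_density=1.0)
    print(f"yield without spares: {base:.3f}, with 2 spares + overhead: {spared:.3f}")
    ```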

    An Integrated Test Plan for an Advanced Very Large Scale Integrated Circuit Design Group

    VLSI testing poses a number of problems, including the selection of test techniques, the determination of acceptable fault coverage levels, and test vector generation. Available device test techniques are examined and compared, and the selection criteria for test techniques are identified. Design rules should be employed to assure that the design is testable; a table of proposed design rules is included. Logic simulation systems and available test utilities are compared, and the various methods of test vector generation are examined. Testability measurement utilities can be used to statistically predict the test generation effort. Field reject rates and fault coverage are statistically related, and acceptable field reject rates can be achieved with less than full test vector fault coverage. The methods and techniques examined form the basis of the recommended integrated test plan. The methods of automatic test vector generation are relatively primitive but improving.
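
    The statistical relation between fault coverage and field reject rate is often expressed with the classical Williams-Brown defect-level model, DL = 1 - Y^(1 - T), where Y is process yield and T is fault coverage. Whether the plan uses exactly this model is not stated here, so treat the sketch below as illustrative.

    ```python
    # Williams-Brown defect-level model: the fraction of shipped parts that are
    # defective, given process yield Y and fault coverage T. The yield value
    # and coverage points below are illustrative, not taken from the text.
    def defect_level(yield_, coverage):
        return 1.0 - yield_ ** (1.0 - coverage)

    for t in (0.90, 0.95, 0.99):
        print(f"coverage {t:.0%}: field reject rate ~ {defect_level(0.5, t):.2%}")
    ```

    Even at 50% process yield, 99% coverage already brings the predicted reject rate under 1%, which is the quantitative sense in which less-than-full coverage can be acceptable.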

    Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, namely the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. It is shown to be an unavoidable consequence of: (1) a reliability requirement so demanding that system evaluation by use testing is infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose high-precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described: its input is a simple description of the logical structure of a system, and its output is the consequent behavior. The use of emulation techniques for pseudo-testing systems, to evaluate bounds on the parameter values needed by the analytical techniques, is discussed.
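
    A toy version of the idea: the emulator's input is a logical description of a circuit, its output the consequent behavior, and injected faults reveal their consequences. The netlist and stuck-at fault model below are invented placeholders, far simpler than anything an avionics study would use.

    ```python
    # Toy emulation with fault injection: describe a circuit's logical
    # structure, emulate its behavior, and check which faults are observable.
    import itertools

    # Netlist in topological order: gate name -> (op, inputs). Inputs: a, b, c.
    netlist = {"g1": ("AND", ("a", "b")),
               "g2": ("OR", ("g1", "c")),
               "out": ("NOT", ("g2",))}

    OPS = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y, "NOT": lambda x: 1 - x}

    def emulate(inputs, stuck=None):
        """Evaluate the netlist; 'stuck' optionally forces one gate net to 0 or 1."""
        values = dict(inputs)
        for net, (op, ins) in netlist.items():
            values[net] = OPS[op](*(values[i] for i in ins))
            if stuck and stuck[0] == net:
                values[net] = stuck[1]
        return values["out"]

    # For each stuck-at fault, check whether any input vector exposes it.
    for net, val in itertools.product(netlist, (0, 1)):
        detected = any(
            emulate(dict(zip("abc", v))) != emulate(dict(zip("abc", v)), (net, val))
            for v in itertools.product((0, 1), repeat=3))
        print(f"{net} stuck-at-{val}: {'detected' if detected else 'undetected'}")
    ```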

    Sensors Fault Diagnosis Trends and Applications

    Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires acquiring information from sensors and processing it to extract the features needed to classify or identify faults. Fault diagnosis of the sensors themselves is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes ever more important to diagnose faulty behavior before it leads to total failure. In light of these issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.

    One-Class Classification: Taxonomy of Study and Review of Techniques

    One-class classification (OCC) algorithms aim to build classification models when the negative class is absent, poorly sampled, or not well defined. This unique situation constrains the learning of efficient classifiers, since the class boundary must be defined using knowledge of the positive class alone. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper we present a unified view of the general OCC problem through a taxonomy of study based on the availability of training data, the algorithms used, and the application domains. We delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques, and methodologies, with a focus on their significance, limitations, and applications. We conclude by discussing some open research problems in the field of OCC and presenting our vision for future research.
    Comment: 24 pages + 11 pages of references, 8 figures
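
    A common OCC baseline along these lines is the one-class SVM, which learns a boundary from positive examples only. The sketch below uses scikit-learn's implementation on synthetic data; the kernel and nu value are arbitrary choices, not recommendations from the survey.

    ```python
    # One-class SVM: fit on positive examples only, flag everything outside
    # the learned boundary as an outlier/novelty. Data here is synthetic.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(2)
    X_pos = rng.normal(loc=0.0, scale=1.0, size=(300, 2))   # positive class only

    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_pos)

    X_new = np.array([[0.1, -0.2],    # looks like the training distribution
                      [6.0, 6.0]])    # far outside it
    print(clf.predict(X_new))         # +1 = inlier, -1 = outlier/novelty
    ```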

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented, and an approach to reducing the probability of AFTA failure due to common-mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware, and a plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
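
    For a taste of the kind of analytical reliability modeling mentioned, the textbook N-modular-redundancy formula below gives the reliability of a majority-voted channel group with independent, constant-rate failures. AFTA's actual models are far more detailed; the failure rate and mission time here are arbitrary.

    ```python
    # Textbook N-modular-redundancy reliability: the system survives while a
    # majority of channels survive. Constant per-channel failure rate assumed.
    from math import comb, exp

    def r_nmr(n, lam, t):
        """Reliability of an n-channel majority voter (n odd), lam per hour."""
        r = exp(-lam * t)                      # single-channel reliability
        need = n // 2 + 1                      # a majority must survive
        return sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(need, n + 1))

    lam, t = 1e-4, 10.0                        # 1e-4 failures/hour, 10-hour mission
    print(f"simplex:            {exp(-lam * t):.6f}")
    print(f"triplex (2-of-3):   {r_nmr(3, lam, t):.6f}")
    print(f"5-channel (3-of-5): {r_nmr(5, lam, t):.6f}")
    ```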

    Improvements to fault localization methods for electrical power distribution networks

    This thesis proposes improvements to fault localization methods for electrical power distribution networks. Transmission networks were quickly equipped with protection and fault localization equipment, since a fault on the transmission network must be dealt with quickly to avoid serious consequences. Distribution networks, by contrast, have only a minimal protection scheme. Smart grid developments, however, bring new possibilities, as newly installed measurement equipment gives access to many new variables. The work presented in this thesis develops two fault localization methods. The first aims to use equipment already installed (fault passage indicators) to isolate quickly and reliably the zone affected by the fault. The second performs a precise localization (in distance) of the different possible fault locations, based on electrical measurements made on the network.
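
    One classical single-ended approach to the distance-estimation problem addressed here is the reactance method, which divides the measured fault-loop reactance by the line's per-kilometre reactance. The phasor values and line constant below are illustrative assumptions; the thesis develops a more elaborate method that also handles the multiple candidate locations arising on branched feeders.

    ```python
    # Reactance-method sketch: estimate fault distance from substation
    # voltage/current phasors and the line's per-km reactance (illustrative).
    import cmath

    def fault_distance_km(v_phasor, i_phasor, x_per_km):
        """Estimate fault distance from measured fault-loop impedance."""
        z_apparent = v_phasor / i_phasor
        return z_apparent.imag / x_per_km

    v = cmath.rect(11_000 / 3**0.5, 0.0)       # phase voltage, 11 kV system
    i = cmath.rect(850.0, -1.05)               # fault current lagging ~60 deg
    print(f"estimated distance: {fault_distance_km(v, i, x_per_km=0.35):.1f} km")
    ```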