84 research outputs found

    Custom Integrated Circuits

Contains reports on nine research projects. Sponsors: Analog Devices, Inc.; International Business Machines Corporation; Joint Services Electronics Program Contract DAAL03-89-C-0001; U.S. Air Force - Office of Scientific Research Contract AFOSR 86-0164B; DuPont Corporation; National Science Foundation Grant MIP 88-14612; U.S. Navy - Office of Naval Research Contract N00014-87-K-0825; American Telephone and Telegraph; Digital Equipment Corporation; National Science Foundation Grant MIP 88-5876.

    Methods and Systems for Fault Diagnosis in Nuclear Power Plants

This research deals with fault diagnosis in nuclear power plants (NPPs), based on a framework that integrates contributions from fault scope identification, optimal sensor placement, sensor validation, equipment condition monitoring, and diagnostic reasoning based on pattern analysis. The research focuses in particular on applications where the data collected from the existing SCADA (supervisory control and data acquisition) system are not sufficient for the fault diagnosis system. Specifically, the following methods and systems are developed.

A sensor placement model is developed to guide optimal placement of sensors in NPPs. The model includes 1) a method to extract a quantitative fault-sensor incidence matrix for a system; 2) a fault diagnosability criterion based on the degree of singularity of the incidence matrix; and 3) procedures to place additional sensors to meet the diagnosability criterion. The usefulness of the proposed method is demonstrated on a nuclear power plant process control test facility (NPCTF). Experimental results show that three pairs of undiagnosable faults can be effectively distinguished with three additional sensors selected by the proposed model.

A wireless sensor network (WSN) is designed and a prototype is implemented on the NPCTF. A WSN is an effective tool for collecting data for fault diagnosis, especially for systems where additional measurements are needed. The WSN performs distributed data processing and information fusion for fault diagnosis. Experimental results on the NPCTF show that the WSN system can be used to diagnose all six fault scenarios considered for the system.

A fault diagnosis method based on semi-supervised pattern classification is developed which requires significantly less training data than existing fault diagnosis models typically need. It is a promising tool for applications in NPPs, where it is usually difficult to obtain training data under fault conditions for a conventional fault diagnosis model. The proposed method has successfully diagnosed nine types of faults physically simulated on the NPCTF.

For equipment condition monitoring, a modified S-transform (MST) algorithm is developed by using shaping functions, particularly sigmoid functions, to modify the window width of the standard S-transform. The MST can achieve superior time-frequency resolution for applications that involve non-stationary multi-modal signals, where classical methods may fail. The effectiveness of the proposed algorithm is demonstrated using a vibration test system as well as in applications to detect a collapsed pipe support in the NPCTF. The experimental results show that by observing changes in the time-frequency characteristics of vibration signals, one can effectively detect faults occurring in the components of an industrial system.

To ensure that a fault diagnosis system does not suffer from erroneous data, a fault detection and isolation (FDI) method based on kernel principal component analysis (KPCA) is extended for sensor validation, where sensor faults are detected and isolated from the reconstruction errors of a KPCA model. The method is validated using measurement data from a physical NPP.

The NPCTF is designed and constructed in this research for experimental validation of fault diagnosis methods and systems. Faults can be physically simulated on the NPCTF. In addition, the NPCTF is designed to support systems based on different instrumentation and control technologies such as WSN and distributed control systems.
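As a rough illustration of the KPCA-based sensor validation described above, the following Python sketch flags faulty samples from the reconstruction error of a kernel PCA model. The data, kernel parameters, and threshold rule are assumptions for illustration, not the settings used in the thesis.

```python
# Minimal sketch of KPCA-based sensor fault detection via reconstruction error.
# Training data, kernel parameters, and the threshold rule are assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 8))          # healthy-sensor training data

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True)  # enables reconstruction
kpca.fit(X_normal)

def reconstruction_error(X: np.ndarray) -> np.ndarray:
    """Squared error between each sample and its KPCA reconstruction."""
    X_rec = kpca.inverse_transform(kpca.transform(X))
    return np.sum((X - X_rec) ** 2, axis=1)

# Detection threshold taken from the healthy data (99th percentile here).
threshold = np.percentile(reconstruction_error(X_normal), 99)

X_test = rng.normal(size=(10, 8))
X_test[3, 2] += 5.0                           # inject a bias fault on sensor 2
errors = reconstruction_error(X_test)
print("faulty samples:", np.flatnonzero(errors > threshold))
# Isolation (which sensor failed) can proceed by comparing the per-sensor
# contributions (X - X_rec) ** 2 of the flagged samples.
```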
The NPCTF has been successfully utilized to validate the algorithms and the WSN system developed in this research. In a real-world application, it is seldom the case that a single fault diagnostic scheme can meet all the requirements of a fault diagnostic system in a nuclear power plant. In fact, the value and performance of the diagnosis system can potentially be enhanced if some of the methods developed in this thesis are integrated into a suite of diagnostic tools. In such an integrated system, WSN nodes can be used to collect additional data deemed necessary by sensor placement models. These data can be integrated with those from existing SCADA systems for more comprehensive fault diagnosis. An online performance monitoring system monitors the condition of the equipment and provides key information for the tasks of condition-based maintenance. When a fault is detected, the measured data are acquired and analyzed by pattern classification models to identify the nature of the fault. By analyzing the symptoms of the fault, its root causes can eventually be identified.
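To make the sensor placement model introduced earlier in this abstract concrete, here is a simplified Python sketch. Two faults are treated as undiagnosable when they produce identical sensor signatures, a simple stand-in for the singularity-based diagnosability criterion, and candidate sensors are added greedily; all matrices are toy assumptions.

```python
# Simplified sketch of sensor placement for fault diagnosability. Two faults
# count as undiagnosable when their sensor signatures are identical (a simple
# stand-in for the thesis's singularity-based criterion); all data are toy.
import numpy as np
from itertools import combinations

# Rows = faults, columns = existing sensors; 1 means the fault affects the sensor.
incidence = np.array([
    [1, 0, 1],
    [1, 0, 1],   # identical to fault 0 -> undiagnosable pair (0, 1)
    [0, 1, 0],
])
# Columns of candidate additional sensors (each fault's response to them).
candidates = np.array([
    [1, 0],
    [0, 1],
    [0, 0],
])

def undiagnosable_pairs(M: np.ndarray):
    """Fault pairs whose rows (sensor signatures) are identical."""
    return [(i, j) for i, j in combinations(range(M.shape[0]), 2)
            if np.array_equal(M[i], M[j])]

# Greedily add the candidate sensor that resolves the most remaining pairs.
M, chosen = incidence, []
while undiagnosable_pairs(M) and len(chosen) < candidates.shape[1]:
    best = min((c for c in range(candidates.shape[1]) if c not in chosen),
               key=lambda c: len(undiagnosable_pairs(
                   np.hstack([M, candidates[:, [c]]]))))
    M = np.hstack([M, candidates[:, [best]]])
    chosen.append(best)

print("added sensors:", chosen)
print("remaining undiagnosable pairs:", undiagnosable_pairs(M))
```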

    Fault diagnosis of hybrid systems with applications to gas turbine engines

Stringent reliability and maintainability requirements for modern complex systems demand the development of systematic methods for fault detection and isolation. Many such complex systems can be modeled as hybrid automata. In this thesis, a novel framework for fault diagnosis of hybrid automata is presented. Generally, in a hybrid system, two types of sensors may be available: continuous sensors supplying continuous-time readings (i.e., real numbers) and threshold-sensitive (discrete) sensors supplying discrete outputs (e.g., "level high" or "pressure low"). It is assumed that a bank of residual generators (detection filters) designed based on the continuous model of the plant is available. In the proposed framework, each residual generator is modeled by a Discrete-Event System (DES). These DES models are then integrated with the DES model of the hybrid system to build an extended DES model. A "hybrid" diagnoser is then constructed based on the extended DES model. The "hybrid" diagnoser effectively combines the readings of discrete sensors and the information supplied by the residual generators (which is based on continuous sensors) to determine the health status of the hybrid system. The problem of diagnosability of failure modes in hybrid automata is also studied. A notion of failure diagnosability in hybrid automata is introduced, and it is shown that for a failure mode in a hybrid automaton to be diagnosable, it is sufficient that the failure mode be diagnosable in the extended DES model developed to represent the hybrid automaton and the residual generators. The diagnosability of failure modes in the case where some residual generators produce unreliable outputs, in the form of false-alarm or false-silence signals, is also investigated. Moreover, the problem of isolator (residual generator) selection is examined, and approaches are developed for computing a minimal set of isolators that ensures the diagnosability of failure modes. The proposed hybrid diagnosis approach is employed to investigate faults in the fuel supply system and the nozzle actuator of a single-spool turbojet engine with an afterburner. A hybrid automaton model is obtained for the engine, a bank of residual generators is designed, and an extended DES is constructed for the engine. Based on the extended DES model, a hybrid diagnoser is constructed. The faults diagnosable by a purely DES diagnoser, or by methods based on residual generators alone, are also diagnosable by the hybrid diagnoser. Moreover, we have shown that there are faults (or groups of faults) in the fuel supply system and the nozzle actuator that can be isolated neither by a purely DES diagnoser nor by methods based on residual generators alone; these faults, however, can be isolated if the hybrid diagnoser is used.
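As a toy illustration of the diagnoser idea, the following Python sketch tracks the set of extended-DES states consistent with a sequence of observed events, fusing discrete-sensor events with residual-generator firings. The states, events, and fault label are invented for illustration and are not the thesis's engine model.

```python
# Toy sketch of a diagnoser over an extended DES: the event set mixes discrete
# sensor events with residual-generator firings. All names are hypothetical.

# Transition structure: (state, observable event) -> next state.
TRANS = {
    ("idle", "start"): "running",
    ("running", "pressure_low"): "degraded",   # can occur in normal operation
    ("running", "r1_fired"): "faulty",         # residual generator 1 fires
    ("degraded", "r1_fired"): "faulty",        # only under failure mode F1
}
# Fault label attached to each state: "N" (normal) or "F1".
LABEL = {"idle": "N", "running": "N", "degraded": "N", "faulty": "F1"}

def diagnose(events, start="idle"):
    """Track the states consistent with the observations; report the labels."""
    states = {start}
    for e in events:
        states = {TRANS[(s, e)] for s in states if (s, e) in TRANS}
    labels = {LABEL[s] for s in states}
    if labels == {"F1"}:
        return "F1 diagnosed"
    return "normal" if labels == {"N"} else "ambiguous"

print(diagnose(["start", "pressure_low"]))              # -> normal
print(diagnose(["start", "pressure_low", "r1_fired"]))  # -> F1 diagnosed
```

The discrete sensor event alone leaves the verdict "normal"; only the combination with the residual-generator event pins down the failure mode, which is the essence of the hybrid diagnoser.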

    NASA Tech Briefs, January 2009

    Tech Briefs are short announcements of innovations originating from research and development activities of the National Aeronautics and Space Administration. They emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. Topics covered include: The Radio Frequency Health Node Wireless Sensor System; Effects of Temperature on Polymer/Carbon Chemical Sensors; Small CO2 Sensors Operate at Lower Temperature; Tele-Supervised Adaptive Ocean Sensor Fleet; Synthesis of Submillimeter Radiation for Spectroscopy; 100-GHz Phase Switch/Mixer Containing a Slot-Line Transition; Generating Ka-Band Signals Using an X-Band Vector Modulator; SiC Optically Modulated Field-Effect Transistor; Submillimeter-Wave Amplifier Module with Integrated Waveguide Transitions; Metrology System for a Large, Somewhat Flexible Telescope; Economical Implementation of a Filter Engine in an FPGA; Improved Joining of Metal Components to Composite Structures; Machined Titanium Heat-Pipe Wick Structure; Gadolinia-Doped Ceria Cathodes for Electrolysis of CO2; Utilizing Ocean Thermal Energy in a Submarine Robot; Fuel-Cell Power Systems Incorporating Mg-Based H2 Generators; Alternative OTEC Scheme for a Submarine Robot; Sensitive, Rapid Detection of Bacterial Spores; Adenosine Monophosphate-Based Detection of Bacterial Spores; Silicon Microleaks for Inlets of Mass Spectrometers; CGH Figure Testing of Aspherical Mirrors in Cold Vacuums; Series-Coupled Pairs of Silica Microresonators; Precise Stabilization of the Optical Frequency of WGMRs; Formation Flying of Components of a Large Space Telescope; Laser Metrology Heterodyne Phase-Locked Loop; Spatial Modulation Improves Performance in CTIS; High-Performance Algorithm for Solving the Diagnosis Problem; Truncation Depth Rule-of-Thumb for Convolutional Codes; Efficient Method for Optimizing Placement of Sensors

    Active Fault Tolerant Control of Livestock Stable Ventilation System


    Scalable fault management architecture for dynamic optical networks : an information-theoretic approach

Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (leaves 255-262).

All-optical switching, in place of electronic switching, of high-data-rate lightpaths at intermediate nodes is one of the key enabling technologies for economically scalable future data networks. This replacement of electronic switching with optical switching at intermediate nodes, however, presents new challenges for fault detection and localization in reconfigurable all-optical networks. Presently, fault detection and localization techniques, as implemented in SONET/G.709 networks, rely on electronic processing of parity checks at intermediate nodes. If similar techniques were adapted to all-optical reconfigurable networks, optical signals would need to be tapped out at intermediate nodes for parity checks. This additional electronic processing would break the all-optical transparency paradigm and thus significantly diminish the cost advantages of all-optical networks.

In this thesis, we propose new fault-diagnosis approaches specifically tailored to all-optical networks, with the objective of keeping both the diagnostic capital expenditure and the diagnostic operational effort low. Instead of the aforementioned passive monitoring paradigm based on parity checks, we propose a proactive lightpath probing paradigm: optical probing signals are sent along a set of lightpaths in the network, and the network state (i.e., the failure pattern) is then inferred from the results of this set of end-to-end lightpath measurements. Moreover, we assume that a subset of network nodes (up to all the nodes) is equipped with diagnostic agents, including both transmitters/receivers for probe transmission/detection and software processes for probe management, to perform fault detection and localization. The design objectives of this proposed proactive probing paradigm are twofold: i) to minimize the number of lightpath probes, keeping the diagnostic operational effort low, and ii) to minimize the amount of diagnostic hardware, keeping the diagnostic capital expenditure low.

The network fault-diagnosis problem can be mathematically modeled within a group-testing-over-graphs framework. In particular, the network is abstracted as a graph in which the failure status of each node/link is modeled by a random variable (e.g., a Bernoulli distribution). A probe over any path in the graph results in a value, defined as the probe syndrome, which is a function of all the random variables associated with that path. A network failure pattern is inferred from the set of probe syndromes resulting from a set of optimally chosen probes. This framework enriches the traditional group-testing problem by introducing a topological structure, and it can be extended to model many other network-monitoring problems (e.g., packet delay, packet drop ratio, noise, etc.) by choosing appropriate state variables.

Under the group-testing-over-graphs framework with a probabilistic failure model, we initiate an information-theoretic approach to minimizing the average number of lightpath probes needed to identify all possible network failure patterns. Specifically, we establish an isomorphic mapping between the fault-diagnosis problem in network management and the source-coding problem in information theory. This mapping implies that the minimum average number of lightpath probes required is lower-bounded by the information entropy of the network state, and that efficient source-coding algorithms (e.g., the run-length code) can be translated into scalable fault-diagnosis schemes under some additional probe-feasibility constraints. Our analytical and numerical investigations yield a guideline for designing scalable fault-diagnosis algorithms: each probe should provide approximately 1 bit of state information, so that the total number of probes required is approximately equal to the entropy of the network state.

To address the hardware cost of diagnosis, we also develop a probabilistic analysis framework to characterize the trade-off between hardware cost (i.e., the number of nodes equipped with Tx/Rx pairs) and diagnosis capability (i.e., the probability of successful failure detection and localization). Our results suggest that, for practical situations, the hardware cost can be reduced significantly by accepting a small amount of uncertainty about the failure status.

by Yonggang Wen. Ph.D.
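The source-coding analogy can be illustrated numerically. Under an independent-Bernoulli failure model (with parameters assumed here for illustration), the entropy of the network state lower-bounds the average number of probes, and the expected length of a Huffman code shows how close an idealized 1-bit-per-probe scheme can come; the thesis's probe-feasibility (path) constraints are ignored in this sketch.

```python
# Numerical sketch of the entropy bound on probing, assuming independent
# Bernoulli link failures; the failure probabilities below are illustrative.
import heapq
import itertools
import math

p_fail = [0.05, 0.10, 0.02, 0.08]        # hypothetical per-link failure rates

def h2(p: float) -> float:
    """Binary entropy of a Bernoulli(p) variable, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# With independent links, the entropy of the network state is the sum of the
# per-link binary entropies.
H = sum(h2(p) for p in p_fail)

# Joint distribution over all 2^n failure patterns.
probs = [
    math.prod(p if bit else 1 - p for bit, p in zip(pattern, p_fail))
    for pattern in itertools.product([0, 1], repeat=len(p_fail))
]

# Huffman coding over the joint states: the expected codeword length models the
# average number of ideal 1-bit probes needed to pin down the failure pattern.
# (Average Huffman length equals the sum of all merged-node probabilities.)
heap = probs[:]
heapq.heapify(heap)
expected_probes = 0.0
while len(heap) > 1:
    a, b = heapq.heappop(heap), heapq.heappop(heap)
    expected_probes += a + b
    heapq.heappush(heap, a + b)

print(f"entropy lower bound    : {H:.3f} bits")
print(f"Huffman expected length: {expected_probes:.3f} probes")
```

Source-coding theory guarantees the Huffman length lies within 1 bit of the entropy, which is exactly the "each probe should provide approximately 1 bit" guideline.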

    Advances in Robotics, Automation and Control

The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, the book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important theme is the use of rational agents and heuristic techniques to cope with the computational complexity of controlling complex systems. The book also covers navigation and vision algorithms, automatic handwriting comprehension, and speech recognition systems that will be included in the next generation of production systems.

    Big Data Analytics for Complex Systems

The evolution of technology in all fields has led to the generation of vast amounts of data by modern systems. Using data to extract information, make predictions, and make decisions is the current trend in artificial intelligence. The advancement of big data analytics tools has made accessing and storing data easier and faster than ever, and machine learning algorithms help to identify patterns in, and extract information from, data. Current tools and machines in health care, computer technologies, and manufacturing can generate massive raw data about their products or samples. The author of this work proposes a modern integrative system that can utilize big data analytics, machine learning, supercomputer resources, and measurements from industrial health-monitoring machines to build a smart system that can mimic the human intelligence skills of observation, detection, prediction, and decision-making. Applications of the proposed smart systems are included as case studies to highlight the contributions of each system.

The first contribution is the ability to utilize revolutionary big data and deep learning technologies on production lines to diagnose incidents and take proper action. In the current digital-transformation industrial era, Industry 4.0 has been receiving attention from researchers because it can be used to automate production-line decisions. Reconfigurable manufacturing systems (RMS) have been widely used to reduce the setup cost of restructuring production lines. However, current RMS modules are not linked to the cloud for online decision-making; to make proper decisions, these modules must connect to an online server (supercomputer) with big data analytics and machine learning capabilities. Online here means that data is centralized in the cloud (on the supercomputer) and accessible in real time. In this study, deep neural networks are utilized to detect the decisive features of a product and build a prediction model with which the iFactory can make the necessary decisions about defective products. The Spark ecosystem is used to manage the access, processing, and storage of the streaming big data. This contribution is implemented as a closed cycle; to the best of our knowledge, no prior work in the literature has applied big data analysis using deep learning to real-time applications in manufacturing systems. The system achieves a high accuracy of 97% in classifying normal versus defective items.

The second contribution, in bioinformatics, is the ability to build supervised machine learning approaches based on patients' gene expression to predict the proper treatment for breast cancer. In the trial, to personalize treatment, the machine learns the genes that are active in the patient cohort with a five-year survival period. The initial condition here is that each group must have undergone only one specific treatment. After learning about each group (or class), the machine can personalize the treatment of a new patient by diagnosing the patient's gene expression. The proposed model will help in the diagnosis and treatment of the patient. Future work in this area involves building a protein-protein interaction network with the selected genes for each treatment, first to analyze the motifs of the genes and then to target them with the proper drug molecules. In the learning phase, several feature-selection techniques and standard supervised classifiers are used to build the prediction model. Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure around 100%.

The third contribution is the ability to build semi-supervised learning for breast cancer survival treatment, advancing the second contribution. By understanding the relations between the classes, we can design the machine learning phase based on the similarities between classes. In the proposed research, the Euclidean distance matrix among the survival-treatment classes is used to build the hierarchical learning model. The distance information, learned through an unsupervised approach, helps the prediction model select classes that are far from each other, maximizing the distance between classes and yielding wider class groups. The performance of this approach shows a slight improvement over the second model; moreover, this model reduces the number of discriminative genes from 47 to 37. The model in the second contribution studies each class individually, while this model focuses on the relationships between the classes and uses this information in the learning phase. Hierarchical clustering is performed to draw the borders between groups of classes before building the classification models, and several distance measurements are tested to identify the best linkages between classes. Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure ranging from 90% to 100%.

All the case-study models showed high performance in the prediction phase. These modern models can be replicated for different problems within different domains. The comprehensive models built on the newer technologies are reconfigurable and modular; a new learning phase can be plugged in at either end of the learning phase. Therefore, the output of the system can be an input for another learning system, and new features can be added to the input to be considered for the learning phase.
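A minimal sketch of the hierarchical learning scheme from the third contribution, assuming synthetic gene-expression data: class centroids are clustered by Euclidean distance, a top-level classifier separates the distant super-groups, and per-group classifiers refine the prediction. The data shapes and the random-forest choice are assumptions, not the thesis's exact pipeline.

```python
# Sketch of hierarchical learning over treatment classes with synthetic data;
# class counts, gene counts, and classifier choice are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_classes, n_genes = 5, 40
X = rng.normal(size=(200, n_genes))          # gene expression profiles
y = rng.integers(0, n_classes, size=200)     # survival-treatment class labels

# 1. Euclidean distances between class centroids drive the hierarchy, so the
#    first split separates the classes that are farthest apart.
centroids = np.vstack([X[y == c].mean(axis=0) for c in range(n_classes)])
Z = linkage(centroids, method="average", metric="euclidean")
group = fcluster(Z, t=2, criterion="maxclust") - 1   # super-group per class

# 2. A top-level model separates the two distant super-groups; per-group models
#    then discriminate among the classes inside each group.
y_group = group[y]
top = RandomForestClassifier(random_state=0).fit(X, y_group)
sub = {g: RandomForestClassifier(random_state=0).fit(X[y_group == g],
                                                     y[y_group == g])
       for g in np.unique(y_group)}

def predict(x: np.ndarray) -> int:
    """Route a sample through the hierarchy: super-group first, then class."""
    g = top.predict(x.reshape(1, -1))[0]
    return sub[g].predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```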