    Methods and Systems for Fault Diagnosis in Nuclear Power Plants

    This research mainly deals with fault diagnosis in nuclear power plants (NPPs), based on a framework that integrates contributions from fault scope identification, optimal sensor placement, sensor validation, equipment condition monitoring, and diagnostic reasoning based on pattern analysis. The research has a particular focus on applications where data collected from the existing SCADA (supervisory control and data acquisition) system are not sufficient for the fault diagnosis system. Specifically, the following methods and systems are developed. A sensor placement model is developed to guide the optimal placement of sensors in NPPs. The model includes 1) a method to extract a quantitative fault-sensor incidence matrix for a system; 2) a fault diagnosability criterion based on the degree of singularity of the incidence matrix; and 3) procedures to place additional sensors to meet the diagnosability criterion. The usefulness of the proposed method is demonstrated on a nuclear power plant process control test facility (NPCTF). Experimental results show that three pairs of undiagnosable faults can be effectively distinguished with three additional sensors selected by the proposed model. A wireless sensor network (WSN) is designed and a prototype is implemented on the NPCTF. A WSN is an effective tool for collecting data for fault diagnosis, especially for systems where additional measurements are needed. The WSN performs distributed data processing and information fusion for fault diagnosis. Experimental results on the NPCTF show that the WSN system can be used to diagnose all six fault scenarios considered for the system. A fault diagnosis method based on semi-supervised pattern classification is developed which requires significantly less training data than existing fault diagnosis models typically do. It is a promising tool for applications in NPPs, where it is usually difficult to obtain training data under fault conditions for a conventional fault diagnosis model. The proposed method has successfully diagnosed nine types of faults physically simulated on the NPCTF. For equipment condition monitoring, a modified S-transform (MST) algorithm is developed by using shaping functions, particularly sigmoid functions, to modify the window width of the existing standard S-transform. The MST can achieve superior time-frequency resolution for applications that involve non-stationary multi-modal signals, where classical methods may fail. The effectiveness of the proposed algorithm is demonstrated using a vibration test system as well as in applications to detect a collapsed pipe support in the NPCTF. The experimental results show that, by observing changes in the time-frequency characteristics of vibration signals, one can effectively detect faults occurring in components of an industrial system. To ensure that a fault diagnosis system does not suffer from erroneous data, a fault detection and isolation (FDI) method based on kernel principal component analysis (KPCA) is extended for sensor validation, where sensor faults are detected and isolated from the reconstruction errors of a KPCA model. The method is validated using measurement data from a physical NPP. The NPCTF is designed and constructed in this research for experimental validation of fault diagnosis methods and systems. Faults can be physically simulated on the NPCTF. In addition, the NPCTF is designed to support systems based on different instrumentation and control technologies, such as WSN and distributed control systems.
The NPCTF has been successfully utilized to validate the algorithms and the WSN system developed in this research. In a real-world application, it is seldom the case that a single fault diagnostic scheme can meet all the requirements of a fault diagnostic system in a nuclear power plant. In fact, the value and performance of the diagnosis system can potentially be enhanced if some of the methods developed in this thesis are integrated into a suite of diagnostic tools. In such an integrated system, WSN nodes can be used to collect additional data deemed necessary by sensor placement models. These data can be integrated with those from existing SCADA systems for more comprehensive fault diagnosis. An online performance monitoring system monitors the condition of the equipment and provides key information for condition-based maintenance. When a fault is detected, the measured data are acquired and analyzed by pattern classification models to identify the nature of the fault. By analyzing the symptoms of the fault, its root causes can eventually be identified.
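
    The abstract describes the incidence-matrix idea only at a high level. The following is a minimal sketch of that idea, not the thesis's actual formulation: it assumes a quantitative fault-sensor incidence matrix whose columns are fault signatures, treats two faults as undiagnosable when their columns (nearly) coincide, and adds candidate sensors greedily until all pairs separate. All function names are hypothetical.

```python
import numpy as np

def undiagnosable_pairs(incidence):
    """Return fault pairs whose sensor signatures are identical.

    incidence: (n_sensors, n_faults) array; entry [i, j] quantifies
    how strongly fault j affects the reading of sensor i.
    """
    n_faults = incidence.shape[1]
    return [(a, b)
            for a in range(n_faults)
            for b in range(a + 1, n_faults)
            # Faults with (near-)identical columns excite the current
            # sensor set identically and cannot be told apart.
            if np.allclose(incidence[:, a], incidence[:, b])]

def add_sensors_until_diagnosable(incidence, candidates):
    """Greedily append candidate sensor rows until every fault pair
    is separable. candidates: list of (name, row) pairs for sensors
    not yet installed. Returns the names of the chosen sensors."""
    chosen, current = [], incidence.copy()
    for name, row in candidates:
        if not undiagnosable_pairs(current):
            break
        trial = np.vstack([current, row])
        # Keep a candidate only if it resolves at least one pair.
        if len(undiagnosable_pairs(trial)) < len(undiagnosable_pairs(current)):
            current, chosen = trial, chosen + [name]
    return chosen
```

    Similarly, the modified S-transform is described only in words. Below is a rough sketch of a discrete S-transform whose Gaussian window width is scaled by a sigmoid shaping function; the particular sigmoid and its parameters k and f0 are illustrative assumptions, not the thesis's choices.

```python
import numpy as np

def modified_s_transform(x, fs, k=0.2, f0=None):
    """Discrete S-transform with a sigmoid-shaped window width.

    The standard S-transform uses a Gaussian window with standard
    deviation 1/f; here that width is scaled by a sigmoid in f, so
    resolution varies more gently across frequency.
    """
    n = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    f0 = fs / 8 if f0 is None else f0        # sigmoid midpoint (assumed)
    out = np.zeros((n // 2, n), dtype=complex)
    for m in range(1, n // 2):               # skip f = 0 (window undefined)
        f = m * fs / n
        scale = 1.0 / (1.0 + np.exp(-k * (f - f0)))   # sigmoid shaping
        sigma = scale / f                              # modified width
        # Frequency-domain Gaussian window, spectrum shifted by m bins.
        g = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * freqs ** 2)
        out[m] = np.fft.ifft(np.roll(X, -m) * g)
    return out
```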

    Selection of sensors by a new methodology coupling a classification technique and entropy criteria

    Complex industrial plants invest heavily in sensors and automation devices to monitor and supervise the process, in order to guarantee product quality and the safety of the plant and its operators. Fault detection is one of the many tasks of process monitoring, and it critically depends on the sensors that measure the significant process variables. Nevertheless, most of the work on fault detection and diagnosis found in the literature emphasizes developing procedures to perform diagnosis given a set of sensors, rather than determining the actual location of sensors for efficient identification of faults. A methodology based on learning and classification techniques, and on the quantity of information as measured by entropy, is proposed to address the problem of sensor location for fault identification. The proposed methodology has been applied to a continuous intensified reactor, the "Open Plate Reactor (OPR)", developed by Alfa Laval and studied at the Laboratory of Chemical Engineering of Toulouse. The different steps of the methodology are explained through its application to an exothermic reaction.
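
    As a minimal sketch of the entropy criterion, one can rank each candidate sensor by the information gain (entropy reduction) its discretized readings provide about the fault class; the equal-width binning and function names below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(fault_labels, readings, n_bins=4):
    """Entropy reduction about the fault class from one sensor.

    The sensor's continuous readings are discretized into equal-width
    bins, then the conditional entropy H(fault | bin) is averaged
    over bins and subtracted from the prior entropy H(fault).
    """
    labels = np.asarray(fault_labels)
    edges = np.histogram_bin_edges(readings, bins=n_bins)
    bins = np.digitize(readings, edges[1:-1])
    h_cond = sum((bins == b).mean() * entropy(labels[bins == b])
                 for b in np.unique(bins))
    return entropy(labels) - h_cond

# Candidate sensors are then ranked by this gain and selected until
# the faults of interest can be distinguished.
```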

    Optimal coordinate sensor placements for estimating mean and variance components of variation sources

    In-process optical coordinate measuring machines (OCMMs) offer the potential of diagnosing, in a timely manner, the variation sources that are responsible for product quality defects. Such a sensor system can help manufacturers improve product quality and reduce process downtime. Effective use of sensory data in diagnosing variation sources depends on the optimal design of the sensor system, a problem often known as sensor placement. This thesis addresses coordinate sensor placement for diagnosing dimensional variation sources in assembly processes. Sensitivity indices for detecting process mean and variance components are defined as the design criteria and are derived in terms of process layout and sensor deployment information. Exchange algorithms, originally developed in the research on optimal experiment design, are employed and revised to maximize the detection sensitivity. A sort-and-cut procedure is used, which remarkably improves the efficiency of the current exchange routine. The resulting optimal sensor layouts and their implications are illustrated in the specific context of a panel assembly process.
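
    A minimal sketch of the exchange idea borrowed from optimal experiment design follows; it uses a log-determinant information criterion as a stand-in for the sensitivity indices defined in the thesis, omits the sort-and-cut acceleration, and all names are hypothetical.

```python
import numpy as np

def criterion(rows, layout):
    """Stand-in detection-sensitivity criterion for a sensor layout:
    log-determinant of the information matrix G'G assembled from the
    sensitivity rows of the selected measurement locations."""
    g = rows[sorted(layout)]
    sign, logdet = np.linalg.slogdet(g.T @ g)
    return logdet if sign > 0 else -np.inf

def exchange(rows, n_sensors, max_passes=50, seed=0):
    """Basic exchange algorithm: start from a random layout and
    repeatedly swap one selected location for one unselected location
    whenever the swap improves the criterion."""
    rng = np.random.default_rng(seed)
    n_sites = rows.shape[0]
    layout = set(rng.choice(n_sites, size=n_sensors, replace=False).tolist())
    for _ in range(max_passes):
        improved = False
        for out in sorted(layout):
            for inn in sorted(set(range(n_sites)) - layout):
                trial = (layout - {out}) | {inn}
                if criterion(rows, trial) > criterion(rows, layout):
                    layout, improved = trial, True
                    break            # rescan from the new layout
            if improved:
                break
        if not improved:
            break                    # no profitable swap left
    return sorted(layout)
```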

    Efficient Detection on Stochastic Faults in PLC Based Automated Assembly Systems With Novel Sensor Deployment and Diagnoser Design

    In this dissertation, we propose novel sensor deployment and diagnoser designs to efficiently detect stochastic faults in PLC-based automated systems. First, a fuzzy quantitative graph-based sensor deployment approach is used to model the cause-effect relationships between faults and sensors. The analytic hierarchy process (AHP) is used to aggregate the heterogeneous properties of sensors and faults into single edge values in the fuzzy graph, thus quantitatively determining fault detectability. A multi-objective model is set up to minimize fault unobservability and cost while achieving the required detectability performance. Lexicographical mixed-integer linear programming and greedy search are used, respectively, to optimize the model and assign sensors to faults. Second, a diagnoser based on a real-time fuzzy Petri net (RTFPN) is proposed to detect faults in discrete manufacturing systems. It uses a real-time Petri net to model the manufacturing plant and a fuzzy Petri net to isolate faults, and it can handle uncertainties and incorporate industry knowledge when diagnosing faults. The proposed approach is implemented in Visual Basic and tested and validated on a dual robot arm. Finally, the proposed sensor deployment approach and diagnoser are comprehensively evaluated using design-of-experiment techniques. A two-stage statistical analysis, comprising analysis of variance (ANOVA) and least significant difference (LSD) tests, is conducted to evaluate diagnosis performance in terms of positive detection rate, false-alarm rate, accuracy, and detection delay; it shows that the proposed approaches perform better on these evaluation metrics. The major contributions of this research are the following: (1) a novel fuzzy quantitative graph-based sensor deployment approach that handles sensor heterogeneity and optimizes multiple objectives using lexicographical integer linear programming and a greedy algorithm, respectively. A case study on a five-tank system showed that system detectability improved from 0.62 with the signed directed graph approach to 0.70 with the proposed approach; another case study, on a dual robot arm, also showed detectability improving from 0.61 to 0.65. (2) A novel real-time fuzzy Petri net diagnoser that remedies nonsynchronization and integrates useful but incomplete knowledge for diagnosis purposes. A third case study on a dual robot arm shows that the diagnoser can achieve a high detection accuracy of 93% with a maximum detection delay of eight steps. (3) A comprehensive evaluation approach that can serve as a reference for the design, optimization, and evaluation of other diagnosis systems.
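
    A minimal sketch of the greedy-search half of the sensor-assignment step is given below, assuming the AHP-aggregated fuzzy edge values are already available as plain detectability scores in [0, 1]; the data layout, budget handling, and names are hypothetical.

```python
def greedy_deploy(detectability, costs, budget):
    """Greedily pick sensors that best improve fault coverage per
    unit cost, subject to a total budget.

    detectability: dict sensor -> {fault: edge value in [0, 1]}
        (standing in for the AHP-aggregated fuzzy graph edges).
    costs: dict sensor -> positive deployment cost.
    """
    faults = {f for edges in detectability.values() for f in edges}
    covered = dict.fromkeys(faults, 0.0)   # best edge value seen per fault
    chosen, spent = [], 0.0
    remaining = set(detectability)

    def gain(s):
        # Total increase in per-fault coverage if sensor s is added.
        return sum(max(0.0, v - covered[f])
                   for f, v in detectability[s].items())

    while remaining:
        best = max(remaining, key=lambda s: gain(s) / costs[s])
        if gain(best) <= 0.0 or spent + costs[best] > budget:
            break
        for f, v in detectability[best].items():
            covered[f] = max(covered[f], v)
        chosen.append(best)
        spent += costs[best]
        remaining.discard(best)
    return chosen, covered
```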

    Data fusion for system modeling, performance assessment and improvement

    Due to rapid advancements in sensing and computation technology, multiple types of sensors have been embedded in various applications, automatically collecting massive amounts of production information online. Although this data-rich environment provides a great opportunity for more effective process control, it also raises new research challenges in data analysis and decision making due to complex data structures, such as heterogeneous data dependencies and large-volume, high-dimensional characteristics. This thesis contributes to the area of System Informatics and Control (SIAC) by developing systematic data fusion methodologies for effective quality control and performance improvement in complex systems. These advanced methodologies enable (1) better handling of the rich data environment communicated by complex engineering systems, (2) closer monitoring of the system status, and (3) more accurate forecasting of future trends and behaviors. The research bridges the gaps in methodologies among advanced statistics, engineering domain knowledge, and operations research. It also forms close linkages to various application areas such as manufacturing, health care, energy, and service systems. This thesis starts by investigating optimal sensor system design and conducting multi-sensor data fusion analysis for process monitoring and diagnosis in different applications. In Chapter 2, we first study the couplings, or interactions, between the optimal design of a sensor system in a Bayesian network and the quality management of a manufacturing system, which can improve cost-effectiveness and production yield by considering sensor cost, process change detection speed, and fault diagnosis accuracy in an integrated manner. An algorithm named "Best Allocation Subsets by Intelligent Search" (BASIS), with an optimality proof, is developed to obtain the optimal sensor allocation design at minimum cost under different user-specified detection requirements. Chapter 3 extends this line of research by proposing a novel adaptive sensor allocation framework, which greatly improves the monitoring and diagnosis capabilities of the previous method. A max-min criterion is developed to manage sensor reallocation and process change detection in an integrated manner. The methodology is tested and validated on a hot forming process and a cap alignment process. Next, in Chapter 4, we propose a Scalable-Robust-Efficient Adaptive (SERA) sensor allocation strategy for online high-dimensional process monitoring in a general network. A monitoring scheme using the sum of the top-r local detection statistics is developed, which is scalable, effective, and robust in detecting a wide range of possible shifts in all directions. This research provides a generic guideline for practitioners on determining (1) the appropriate sensor layout; (2) the "ON" and "OFF" states of the different sensors; and (3) which part of the acquired data should be transmitted to, and analyzed at, the fusion center when only limited resources are available. To improve the accuracy of remaining-lifetime prediction, Chapter 5 proposes a data-level fusion methodology for degradation modeling and prognostics. When multiple sensors are available to measure the degradation mechanism of the same system, determining which sensors to use and how to combine them for better data analysis becomes a high-dimensional and challenging problem.
To address this issue, we first define two essential properties that, if present in a degradation signal, can enhance the effectiveness of prognostics. We then propose a generic data-level fusion algorithm to construct a composite health index that achieves those two properties. The methodology is tested using degradation signals from an aircraft gas turbine engine, demonstrating much better prognostic results than relying solely on data from an individual sensor. In summary, this thesis draws attention to the area of data fusion as a means of effectively employing the underlying data-gathering capabilities for system modeling, performance assessment, and improvement. The fundamental data fusion methodologies are developed and applied to various applications, facilitating resource planning, real-time monitoring, diagnosis, and prognostics.
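
    The sum-of-top-r scheme from Chapter 4 lends itself to a very short sketch; here each sensor is assumed to report a standardized observation, and the squared deviation used below is a simple stand-in for the actual local detection statistics.

```python
import numpy as np

def top_r_statistic(local_stats, r):
    """Global monitoring statistic: the sum of the r largest local
    detection statistics across the sensor network."""
    s = np.sort(np.asarray(local_stats, dtype=float))[::-1]
    return float(s[:r].sum())

def first_alarm(stream, r, threshold):
    """Return the index of the first sample whose top-r statistic
    exceeds the control limit, or None if none does.

    stream: iterable of observation vectors, assumed standardized so
    that each squared entry serves as a crude local statistic.
    """
    for t, x in enumerate(stream):
        local = np.asarray(x, dtype=float) ** 2
        if top_r_statistic(local, r) > threshold:
            return t
    return None
```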

    Dynamic Modeling, Sensor Placement Design, and Fault Diagnosis of Nuclear Desalination Systems

    Fault diagnosis of sensors, devices, and equipment is an important topic in the nuclear industry for the effective and continuous operation of nuclear power plants. All fault diagnostic approaches depend critically on the sensors that measure important process variables. Whenever a process encounters a fault, the effect of the fault propagates to some or all of the process variables. The ability of the sensor network to detect and isolate failure modes and anomalous conditions is crucial for the effectiveness of a fault detection and isolation (FDI) system. However, the emphasis of most fault diagnostic approaches found in the literature is primarily on the procedures for performing FDI using a given set of sensors. Little attention has been given to the actual sensor allocation needed to achieve efficient FDI performance. This dissertation presents a graph-based approach for optimizing sensor placement to ensure the observability of faults, as well as fault resolution, to the maximum possible extent; this would potentially facilitate an automated sensor allocation procedure. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and to fit a hyper-plane to the data. The fault directions for different fault scenarios are obtained from the prediction errors, and fault isolation is then accomplished using new projections on these fault directions. The effectiveness of using an optimal sensor set versus a reduced set for fault detection and isolation is demonstrated using this technique. Among the variety of desalination technologies, multi-stage flash (MSF) processes contribute substantially to the world's desalination capacity. In this dissertation, both steady-state and dynamic simulation models of an MSF desalination plant are developed. The dynamic MSF model is coupled with a previously developed International Reactor Innovative and Secure (IRIS) model in the SIMULINK environment. The developed sensor placement design and fault diagnostic methods are illustrated through application to the coupled nuclear desalination system. The results demonstrate the effectiveness of the newly developed integrated approach to performance monitoring and fault diagnosis with optimized sensor placement for large industrial systems.
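
    A minimal sketch of the PCA step described above, assuming the fault directions have been estimated offline from the prediction errors of simulated fault scenarios; the function names and the control-limit handling are hypothetical.

```python
import numpy as np

def fit_pca(X, n_pc):
    """Fit a PCA hyper-plane to normal-operation data (rows = samples)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_pc].T                  # mean and loading matrix P

def prediction_error(x, mu, P):
    """Residual of x off the PCA hyper-plane."""
    d = x - mu
    return d - P @ (P.T @ d)

def detect_and_isolate(x, mu, P, fault_dirs, spe_limit):
    """Flag a fault when the squared prediction error exceeds its
    control limit, then match the residual against known fault
    directions (unit vectors estimated from each fault scenario)."""
    r = prediction_error(x, mu, P)
    spe = float(r @ r)
    if spe <= spe_limit:
        return None                         # no fault detected
    best = max(fault_dirs, key=lambda k: abs(r @ fault_dirs[k]))
    return best, spe
```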

    Architecting Networked Engineering Systems

    The primary goal of this dissertation is to create new knowledge, to make a transformative impact on the design of networked engineering systems adaptable to ambitious market demands, and to accommodate the Industry 4.0 design principles, based on the philosophy that design is fundamentally a decision-making process. The principal motivation is to establish a computational framework suitable for the design of low-cost, high-quality networked engineering systems in the context of Industry 4.0. Dynamic and ambitious global market demands make it necessary for competitive enterprises to have low-cost manufacturing processes and high-quality products. Smart manufacturing is increasingly being adopted by companies to respond to changes in the market. These smart manufacturing systems must be adaptable to dynamic changes and respond to unexpected disturbances and uncertainty. Accordingly, a decision-based design computational framework, Design for Dynamic Management (DFDM), is proposed to support flexible, operable, and rapidly configurable manufacturing processes. DFDM has three critical components: adaptable and concurrent design, operability analysis, and reconfiguration strategies. Adaptable and concurrent design methods offer flexibility in the selection of design parameters and the concurrent design of the mechanical and control systems. Operability analysis is used to determine the functionality of a system undergoing dynamic change. Reconfiguration strategies allow multiple configurations of elements in the system. The proposed computational framework is expected to result in the next generation of networked engineering systems, in which tools and sensors communicate with each other via the Internet of Things (IoT) and sensor data are used to create enriched digital system models, adaptable to fast-changing market requirements, that can produce higher-quality products over a longer lifetime and at a lower cost. The computational framework and models proposed in this dissertation are applicable to system design and/or product-service system design. This dissertation is fundamental research; the way forward is the transition of DFDM to industry through a decision-based design platform. Such a platform is a step toward the new frontiers of Cyber-Physical-Social System Design, Manufacturing, and Services, contributing to further digitization.

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, it presents topics related to control and robot design, and introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. The book also covers navigation and vision algorithms, automatic handwriting comprehension, and speech recognition systems that will be included in the next generation of production systems.