238 research outputs found

    Methods and Systems for Fault Diagnosis in Nuclear Power Plants

    This research mainly deals with fault diagnosis in nuclear power plants (NPPs), based on a framework that integrates contributions from fault scope identification, optimal sensor placement, sensor validation, equipment condition monitoring, and diagnostic reasoning based on pattern analysis. The research focuses in particular on applications where data collected from the existing SCADA (supervisory control and data acquisition) system are not sufficient for the fault diagnosis system. Specifically, the following methods and systems are developed. A sensor placement model is developed to guide optimal placement of sensors in NPPs. The model includes 1) a method to extract a quantitative fault-sensor incidence matrix for a system; 2) a fault diagnosability criterion based on the degree of singularity of the incidence matrix; and 3) procedures to place additional sensors to meet the diagnosability criterion. The usefulness of the proposed method is demonstrated on a nuclear power plant process control test facility (NPCTF). Experimental results show that three pairs of undiagnosable faults can be effectively distinguished with three additional sensors selected by the proposed model. A wireless sensor network (WSN) is designed and a prototype is implemented on the NPCTF. A WSN is an effective tool for collecting data for fault diagnosis, especially for systems where additional measurements are needed. The WSN performs distributed data processing and information fusion for fault diagnosis. Experimental results on the NPCTF show that the WSN system can be used to diagnose all six fault scenarios considered for the system. A fault diagnosis method based on semi-supervised pattern classification is developed which requires significantly less training data than existing fault diagnosis models typically require. 
It is a promising tool for applications in NPPs, where it is usually difficult to obtain training data under fault conditions for a conventional fault diagnosis model. The proposed method successfully diagnosed nine types of faults physically simulated on the NPCTF. For equipment condition monitoring, a modified S-transform (MST) algorithm is developed by using shaping functions, particularly sigmoid functions, to modify the window width of the existing standard S-transform. The MST can achieve superior time-frequency resolution for applications that involve non-stationary multi-modal signals, where classical methods may fail. The effectiveness of the proposed algorithm is demonstrated using a vibration test system as well as in applications to detect a collapsed pipe support in the NPCTF. The experimental results show that by observing changes in the time-frequency characteristics of vibration signals, one can effectively detect faults occurring in components of an industrial system. To ensure that a fault diagnosis system does not suffer from erroneous data, a fault detection and isolation (FDI) method based on kernel principal component analysis (KPCA) is extended for sensor validation, where sensor faults are detected and isolated from the reconstruction errors of a KPCA model. The method is validated using measurement data from a physical NPP. The NPCTF was designed and constructed in this research for experimental validation of fault diagnosis methods and systems. Faults can be physically simulated on the NPCTF. In addition, the NPCTF is designed to support systems based on different instrumentation and control technologies, such as WSN and distributed control systems. The NPCTF has been successfully utilized to validate the algorithms and the WSN system developed in this research. In a real-world application, it is seldom the case that one single fault diagnostic scheme can meet all the requirements of a fault diagnostic system in a nuclear power plant. 
In fact, the value and performance of the diagnosis system can potentially be enhanced if some of the methods developed in this thesis are integrated into a suite of diagnostic tools. In such an integrated system, WSN nodes can be used to collect additional data deemed necessary by sensor placement models. These data can be integrated with those from existing SCADA systems for more comprehensive fault diagnosis. An online performance monitoring system monitors the condition of the equipment and provides key information for condition-based maintenance tasks. When a fault is detected, the measured data are subsequently acquired and analyzed by pattern classification models to identify the nature of the fault. By analyzing the symptoms of the fault, its root causes can eventually be identified
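The incidence-matrix idea in the sensor placement contribution can be illustrated with a toy sketch. The binary matrix, the signature-equality test, and the added sensor row below are illustrative assumptions, not the thesis's actual quantitative criterion (which grades degrees of singularity rather than exact column equality):

```python
import numpy as np

def undiagnosable_pairs(incidence):
    """Return fault pairs whose sensor signatures coincide.

    incidence: (n_sensors, n_faults) 0/1 matrix; column j is the
    signature of fault j across the sensor set. Two faults with
    identical columns cannot be told apart by these sensors.
    """
    n = incidence.shape[1]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.array_equal(incidence[:, i], incidence[:, j])]

# Three faults observed by two sensors: faults 0 and 1 share a signature.
M = np.array([[1, 1, 0],
              [0, 0, 1]])
print(undiagnosable_pairs(M))    # [(0, 1)]

# A third sensor responding only to fault 1 separates the pair.
M2 = np.vstack([M, [0, 1, 0]])
print(undiagnosable_pairs(M2))   # []
```

In this reading, "place additional sensors to meet the diagnosability criterion" amounts to adding rows until no two fault columns coincide.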

    Process Monitoring and Data Mining with Chemical Process Historical Databases

    Modern chemical plants have distributed control systems (DCS) that handle normal operations and quality control. However, the DCS cannot compensate for fault events such as fouling or equipment failures. When faults occur, human operators must rapidly assess the situation, determine causes, and take corrective action, a challenging task further complicated by the sheer number of sensors. This information overload, as well as measurement noise, can hide information critical to diagnosing and fixing faults. Process monitoring algorithms can highlight key trends in data and detect faults faster, reducing or even preventing the damage that faults can cause. This research improves tools for monitoring different chemical processes. Previously successful monitoring methods based on statistics can fail on non-linear processes and processes with multiple operating states. To address these challenges, we develop a process monitoring technique based on multiple self-organizing maps (MSOM) and apply it in industrial case studies including a simulated plant and a batch reactor. We also use a standard SOM to detect a novel event in a separation tower and produce contribution plots which help isolate the causes of the event. Another key challenge for any engineer designing a process monitoring system is that implementing most algorithms requires data organized into "normal" and "faulty" classes; however, data from faulty operations can be difficult to locate in databases storing months or years of operations. To assist in identifying faulty data, we apply data mining algorithms from computer science and compare how they cluster chemical process data from normal and faulty conditions. We identify several techniques which successfully duplicated normal and faulty labels from expert knowledge and introduce a process data mining software tool to make analysis simpler for practitioners. The research in this dissertation enhances chemical process monitoring tasks. 
MSOM-based process monitoring improves upon standard process monitoring algorithms in fault identification and diagnosis tasks. The data mining research reduces a crucial barrier to the implementation of monitoring algorithms. The enhanced monitoring methods introduced here can help engineers develop effective and scalable process monitoring systems that improve plant safety and reduce losses from fault events
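A minimal sketch of the SOM-based monitoring idea, assuming a tiny 1-D map trained on in-control data and using quantization error (distance to the best-matching unit) as the fault indicator; the map size, learning schedule, and threshold are invented for illustration and are not the dissertation's MSOM configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=16, epochs=20, lr=0.5, sigma=2.0):
    """Train a tiny 1-D self-organizing map on normal operating data."""
    units = data[rng.choice(len(data), n_units)]
    idx = np.arange(n_units)
    for epoch in range(epochs):
        frac = epoch / epochs                  # decay schedule
        for x in data:
            bmu = np.argmin(np.linalg.norm(units - x, axis=1))
            h = np.exp(-((idx - bmu) ** 2) /
                       (2 * (sigma * (1 - frac) + 0.1) ** 2))
            units += lr * (1 - frac) * h[:, None] * (x - units)
    return units

def quantization_error(units, x):
    """Distance to the best-matching unit; large values flag faults."""
    return np.min(np.linalg.norm(units - x, axis=1))

normal = rng.normal(0.0, 0.1, size=(200, 3))   # in-control data
som = train_som(normal)
print(quantization_error(som, np.zeros(3)) < 1.0)      # near the map: True
print(quantization_error(som, np.full(3, 5.0)) > 1.0)  # far off the map: True
```

The MSOM variant in the dissertation fits one map per operating state; the single-map sketch above only shows the underlying detection mechanism.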

    Multivariate statistical process monitoring

    Demands regarding production efficiency, product quality, safety levels and environmental protection are continuously increasing in the process industry. The way to meet these demands is to introduce ever more complex automatic control systems, which require more process variables to be measured and more advanced measurement systems. Accurate and reliable measurement of process variables is the basis for quality process control. Process equipment failures can significantly deteriorate the production process and even cause production outages, resulting in high additional costs. This paper analyzes automatic fault detection and identification in process measurement equipment, i.e. sensors. Various statistical methods can serve this purpose by analyzing continuously acquired measurement data. In this paper, PCA and ICA methods are used to model the relationships between process variables, while Hotelling's T², I² and Q (SPE) statistics are used for fault detection because they indicate unusual variability within and outside the normal process operating region. Contribution plots are used for fault identification. Statistical process monitoring algorithms based on the PCA and ICA methods are derived and applied to two processes of different complexity, and their fault detection abilities are compared
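The T² and Q (SPE) statistics described here can be sketched for a plain PCA model as follows; the three-variable data, the single retained component, and the example samples are invented for illustration (the ICA-based I² statistic is not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal operating data: three correlated process variables.
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd

# PCA model retaining k principal components.
_, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 1
P = Vt[:k].T                       # loading matrix
lam = s[:k] ** 2 / (len(X) - 1)    # variances of the retained scores

def t2_spe(x):
    """Hotelling's T^2 (variability inside the model) and Q/SPE
    (residual outside the model) for one sample."""
    xs = (x - mu) / sd
    scores = xs @ P
    t2 = np.sum(scores ** 2 / lam)
    spe = np.sum((xs - scores @ P.T) ** 2)
    return t2, spe

_, spe_n = t2_spe(np.array([1.0, 2.0, -1.0]))  # obeys the correlation
_, spe_f = t2_spe(np.array([1.0, 2.0, 1.0]))   # broken correlation (sensor fault)
print(spe_n < spe_f)   # the fault inflates the residual statistic: True
```

A sensor fault that breaks the learned correlation structure shows up in Q/SPE even when each individual reading stays within its normal range; contribution plots then apportion the residual across variables.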

    Effect of sensor set size on polymer electrolyte membrane fuel cell fault diagnosis

    © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This paper presents a comparative study of the performance of different sizes of sensor sets in polymer electrolyte membrane (PEM) fuel cell fault diagnosis. The effectiveness of three sensor sets, comprising fuel cell voltage only, all the available sensors, and selected optimal sensors, in detecting and isolating fuel cell faults (e.g., cell flooding and membrane dehydration) is investigated using test data from a PEM fuel cell system. Wavelet packet transform and kernel principal component analysis are employed to reduce the dimensionality of the dataset and extract features for state classification. Results demonstrate that the selected optimal sensors provide the best diagnostic performance, with different fuel cell faults detected and isolated with good quality
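The effect of sensor set size can be mimicked with a toy classification experiment; the synthetic sensors, the three states, and the nearest-centroid classifier below are assumptions standing in for the paper's wavelet-packet/KPCA pipeline, meant only to show why a well-chosen subset can beat using every sensor:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: 2 informative sensors + 8 noisy ones, 3 cell states
# (stand-ins for normal, flooding, and dehydration).
centers = {0: [0, 0], 1: [1, 0], 2: [0, 1]}

def sample(state, n):
    informative = rng.normal(centers[state], 0.2, size=(n, 2))
    noise = rng.normal(0.0, 1.0, size=(n, 8))   # uninformative channels
    return np.hstack([informative, noise])

Xtr = np.vstack([sample(s, 50) for s in range(3)])
ytr = np.repeat([0, 1, 2], 50)
Xte = np.vstack([sample(s, 50) for s in range(3)])
yte = np.repeat([0, 1, 2], 50)

def nearest_centroid_acc(cols):
    """Classification accuracy using only the chosen sensor columns."""
    C = np.array([Xtr[ytr == s][:, cols].mean(axis=0) for s in range(3)])
    d2 = ((Xte[:, cols][:, None] - C[None]) ** 2).sum(-1)
    return (np.argmin(d2, axis=1) == yte).mean()

acc_all = nearest_centroid_acc(list(range(10)))   # every sensor
acc_sel = nearest_centroid_acc([0, 1])            # selected optimal subset
print(acc_sel >= acc_all)   # noisy channels dilute the distance metric: True
```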

    Process Monitoring Using Data-Based Fault Detection Techniques: Comparative Studies

    Data-based monitoring methods are often utilized to carry out fault detection (FD) when process models may not necessarily be available. Partial least squares (PLS) and principal component analysis (PCA) are two basic types of multivariate FD methods; however, both of them can only be used to monitor linear processes. Among the nonlinear extensions of these data-based methods, kernel PCA (KPCA) and kernel PLS (KPLS) are the most well-known and widely adopted. KPCA and KPLS models have several advantages, since they do not require nonlinear optimization and only the solution of an eigenvalue problem is required. They also provide a better understanding of what kind of nonlinear features are extracted: the number of principal components (PCs) in the feature space is fixed a priori by selecting the appropriate kernel function. Therefore, the objective of this work is to use KPCA and KPLS techniques to monitor nonlinear data. The improved FD performance of KPCA and KPLS is illustrated through two simulated examples, one using synthetic data and the other using simulated continuously stirred tank reactor (CSTR) data. The results demonstrate that both KPCA and KPLS methods are able to provide better detection compared to their linear versions
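A compact sketch of the KPCA monitoring recipe this abstract describes: build an RBF Gram matrix on normal data, center it, solve the eigenvalue problem, and flag samples with a large feature-space reconstruction error (SPE). The circle-shaped data, kernel width, and component count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nonlinear "normal" data: a noisy circle, which linear PCA cannot model.
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(300, 2))

n = len(X)
K = rbf(X, X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                      # centred Gram matrix
w, V = np.linalg.eigh(Kc)
w, V = w[::-1], V[:, ::-1]          # eigenvalues in descending order
k = 5
A = V[:, :k] / np.sqrt(w[:k])       # normalised dual coefficients

def kpca_spe(x):
    """Feature-space reconstruction error for one new sample."""
    kx = rbf(x[None, :], X)[0]
    kxc = kx - kx.mean() - K.mean(axis=0) + K.mean()  # centre the test kernel
    scores = kxc @ A
    kxx = 1.0 - 2 * kx.mean() + K.mean()              # ||phi_c(x)||^2 for RBF
    return kxx - np.sum(scores ** 2)

print(kpca_spe(np.array([1.0, 0.0])) < kpca_spe(np.array([3.0, 3.0])))  # True
```

The only optimization involved is the symmetric eigendecomposition of the Gram matrix, which is the computational advantage the abstract points out.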

    Kernel-based fault diagnosis of inertial sensors using analytical redundancy

    Kernel methods are able to exploit high-dimensional spaces for representational advantage, while only operating implicitly in such spaces, thus incurring none of the computational cost of doing so. They appear to have the potential to advance the state of the art in control and signal processing applications and are increasingly seeing adoption across these domains. Applications of kernel methods to fault detection and isolation (FDI) have been reported, but few in aerospace research, though they offer a promising way to perform or enhance fault detection. It is mostly in process monitoring, in the chemical processing industry for example, that these techniques have found broader application. This research work explores the use of kernel-based solutions in model-based fault diagnosis for aerospace systems. Specifically, it investigates the application of these techniques to the detection and isolation of IMU/INS sensor faults – a canonical open problem in the aerospace field. Kernel PCA, a kernelised non-linear extension of the well-known principal component analysis (PCA) algorithm, is implemented to tackle IMU fault monitoring. An isolation scheme is extrapolated based on the strong duality known to exist between probably the most widely practiced method of FDI in the aerospace domain – the parity space technique – and linear principal component analysis. The algorithm, termed partial kernel PCA, benefits from the isolation properties of the parity space method as well as the non-linear approximation ability of kernel PCA. Further, a number of unscented non-linear filters for FDI are implemented, equipped with data-driven transition models based on Gaussian processes – a non-parametric Bayesian kernel method. A distributed estimation architecture is proposed, which besides fault diagnosis can contemporaneously perform sensor fusion. It also allows for decoupling faulty sensors from the navigation solution
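The parity space technique that the isolation scheme builds on can be sketched in a few lines: parity vectors span the left null space of the measurement matrix, so the residual is blind to the state but responds to sensor faults. The four-sensor measurement matrix below is a made-up example, not an actual IMU configuration:

```python
import numpy as np

# Measurement model y = H x + fault: four sensors observing a
# two-dimensional state (a hypothetical redundant configuration).
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])

# Parity vectors span the left null space of H, so the residual
# r = V y is independent of the (unknown) true state x.
U, s, Vt = np.linalg.svd(H)
V = U[:, 2:].T                      # 4 sensors - 2 states = 2 parity vectors

x = np.array([0.3, -1.2])           # arbitrary true state
y = H @ x
print(np.allclose(V @ y, 0))        # fault-free residual is ~0: True

y_f = y.copy()
y_f[2] += 0.5                       # bias fault on sensor index 2
print(np.linalg.norm(V @ y_f) > 1e-6)  # the fault excites the residual: True
# Moreover r = 0.5 * V[:, 2], so the residual direction isolates sensor 2.
```

The duality mentioned in the abstract comes from the fact that these parity vectors coincide with the minor principal directions of the measurement space, which is what partial kernel PCA generalises nonlinearly.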

    Fault Detection via Occupation Kernel Principal Component Analysis

    The reliable operation of automatic systems is heavily dependent on the ability to detect faults in the underlying dynamical system. While traditional model-based methods have been widely used for fault detection, data-driven approaches have garnered increasing attention due to their ease of deployment and minimal need for expert knowledge. In this paper, we present a novel principal component analysis (PCA) method that uses occupation kernels. Occupation kernels result in feature maps that are tailored to the measured data, have inherent noise-robustness due to the use of integration, and can utilize irregularly sampled system trajectories of variable lengths for PCA. The occupation kernel PCA method is used to develop a reconstruction error approach to fault detection and its efficacy is validated using numerical simulations
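A rough sketch of the occupation-kernel construction, assuming a Riemann-sum approximation of the trajectory integrals; the sine trajectories, kernel width, and the crude mean-similarity detector are invented stand-ins for the paper's PCA reconstruction-error statistic:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a[:, None] - b[None, :]) ** 2, axis=-1))

def occupation_kernel(ti, tj, dt_i, dt_j, gamma=1.0):
    """Double integral of the base kernel along both trajectories,
    approximated by a Riemann sum; the integration averages out noise
    and trajectories of different lengths are handled naturally."""
    return dt_i * dt_j * rbf(ti, tj, gamma).sum()

# "Normal" trajectories: noisy samples of x(t) = sin t, irregular lengths.
trajs, dts = [], []
for _ in range(30):
    n = int(rng.integers(40, 80))            # variable-length sampling
    t = np.linspace(0, 2 * np.pi, n)
    trajs.append(np.c_[np.sin(t)] + 0.02 * rng.normal(size=(n, 1)))
    dts.append(t[1] - t[0])

# Gram matrix that the method's PCA would be run on.
m = len(trajs)
K = np.array([[occupation_kernel(trajs[i], trajs[j], dts[i], dts[j])
               for j in range(m)] for i in range(m)])

def mean_similarity(traj, dt):
    """Crude detector: average occupation-kernel similarity to the
    normal set (a stand-in for the reconstruction-error approach)."""
    return np.mean([occupation_kernel(traj, trajs[i], dt, dts[i])
                    for i in range(m)])

tq = np.linspace(0, 2 * np.pi, 60)
dt = tq[1] - tq[0]
normal_test = np.c_[np.sin(tq)]
faulty_test = np.c_[np.sin(tq) + 0.8]       # drifted (faulty) dynamics
print(mean_similarity(normal_test, dt) > mean_similarity(faulty_test, dt))
```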

    An improved mixture of probabilistic PCA for nonlinear data-driven process monitoring

    This paper introduces an improved mixture of probabilistic principal component analysis (PPCA) models for nonlinear data-driven process monitoring. To this end, a mixture of probabilistic principal component analysers is utilized to model the underlying nonlinear process with local PPCA models, and a novel composite monitoring statistic is proposed based on the integration of the two monitoring statistics of the modified PPCA-based fault detection approach. The weighted mean of these monitoring statistics is then utilized as a metric to detect potential abnormalities. The virtues of the proposed algorithm are discussed in comparison with several unsupervised algorithms. Finally, the Tennessee Eastman process and an auto-suspension model are employed to further demonstrate the effectiveness of the proposed scheme
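A loose analogue of the composite statistic, assuming two local models (plain Mahalanobis T²-like statistics instead of the paper's PPCA statistics) combined by a proximity-weighted mean; all data, centres, and weights here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two operating modes -> two local models (Gaussian stand-ins for
# the paper's local PPCA models).
centres = [np.zeros(2), np.array([5.0, 5.0])]
models = []
for c in centres:
    Xm = rng.normal(0.0, 0.1, size=(200, 2)) + c
    models.append((Xm.mean(axis=0), np.linalg.inv(np.cov(Xm.T))))

def composite_stat(x):
    """Proximity-weighted mean of local T^2-like (Mahalanobis)
    statistics: a loose analogue of the composite statistic, where
    the weights act like mode-membership probabilities."""
    t2s = np.array([(x - mu) @ Ci @ (x - mu) for mu, Ci in models])
    w = np.exp(-0.5 * t2s)
    w = w / w.sum() if w.sum() > 0 else np.full(len(t2s), 1 / len(t2s))
    return float(np.sum(w * t2s))

# Points inside either mode score low; a point between modes scores high.
print(composite_stat(np.zeros(2)) < composite_stat(np.array([2.5, 2.5])))
print(composite_stat(np.array([5.0, 5.0])) < composite_stat(np.array([2.5, 2.5])))
```

The weighting is what lets a multimode process be monitored with one scalar: each sample is judged mainly by the local model it plausibly belongs to.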

    An Adaptive Nonparametric Modeling Technique for Expanded Condition Monitoring of Processes

    New reactor designs and the license extensions of current reactors have created new condition monitoring challenges. A major challenge is the creation of a data-based model for a reactor that has never been built or operated and has no historical data. This is the motivation behind the creation of a hybrid modeling technique based on first-principle models that adapts to include operating reactor data as they become available. An Adaptive Non-Parametric Model (ANPM) was developed for adaptive monitoring of small to medium size reactors (SMR) but would be applicable to all designs. Ideally, an adaptive model should have the ability to adapt to new operational conditions while maintaining the ability to differentiate faults from nominal conditions. This has been achieved by focusing on two main abilities: the first is to adjust the model to adapt from simulated conditions to actual operating conditions, and the second is to adapt to expanded operating conditions. In each case the system will not learn new conditions which represent faulted or degraded operations. The ANPM architecture is used to adapt the model's memory matrix from data from a First Principle Model (FPM) to data from actual system operation. This produces a more accurate model with the capability to adjust to system fluctuations. This newly developed adaptive modeling technique was tested with two pilot applications. The first application was a heat exchanger model that was simulated at both low and high fidelity in SIMULINK. The ANPM was applied to the heat exchanger and improved the monitoring performance over a first-principle model, increasing the model accuracy from an average MSE of 0.1451 to 0.0028 over the range of operation. The second pilot application was a flow loop built at the University of Tennessee and simulated in SIMULINK. 
An improvement in monitoring system performance was observed with the accuracy of the model improving from an average MSE of 0.302 to an MSE of 0.013 over the adaptation range of operation. This research focused on the theory, development, and testing of the ANPM and the corresponding elements in the surveillance system
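One common form of nonparametric memory-matrix model is auto-associative kernel regression (AAKR); the sketch below uses it as a stand-in for the ANPM (whose adaptation logic differs), showing a memory seeded with first-principles data, a residual that flags a drifted sensor, and a simple adaptation step. The operating point, bandwidth, and replacement policy are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def aakr_predict(memory, x, h=0.5):
    """Auto-associative kernel regression: the query is corrected toward
    a kernel-weighted combination of memory (exemplar) states."""
    w = np.exp(-np.linalg.norm(memory - x, axis=1) ** 2 / (2 * h ** 2))
    return (w[:, None] * memory).sum(axis=0) / w.sum()

# Memory matrix seeded with first-principles-model (FPM) data around a
# hypothetical operating point (1, 2, 3).
fpm = np.array([1.0, 2.0, 3.0]) + rng.normal(0.0, 0.1, size=(100, 3))
memory = fpm.copy()

# A query with one drifted channel: the residual concentrates there.
x = np.array([1.0, 2.0, 3.5])
resid = x - aakr_predict(memory, x)
print(abs(resid[2]) > abs(resid[0]))   # drift flagged on channel 2: True

# Adaptation step: arriving plant data displaces the oldest FPM
# exemplars (the exclusion of faulted data is omitted here).
plant = np.array([1.0, 2.0, 3.2]) + rng.normal(0.0, 0.05, size=(20, 3))
memory = np.vstack([memory[20:], plant])
```

The key property mirrored here is that the memory matrix can migrate from simulated to measured exemplars without retraining a parametric model.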