4 research outputs found

    Observability and Economic aspects of Fault Detection and Diagnosis Using CUSUM based Multivariate Statistics

    This project focuses on the fault observability problem and its impact on plant performance and profitability. The study has been conducted along two main directions. First, a technique has been developed to detect and diagnose faulty situations that could not be observed by previously reported methods. The technique is demonstrated on a subset of faults typically considered for the Tennessee Eastman Process (TEP), which have been found unobservable in all previous studies. The proposed strategy combines the cumulative sum (CUSUM) of the process measurements with principal component analysis (PCA). The CUSUM is used to amplify fault signatures under conditions of small fault-to-noise ratio, while PCA filters noise in the presence of highly correlated data. Multivariate indices, namely T2 and Q statistics based on the cumulative sums of all available measurements, were used for observing these faults. The out-of-control average run length (ARL_oc) was proposed as a statistical metric to quantify fault observability. Following fault detection, the problem of fault isolation is treated. It is shown that for the particular faults considered in the TEP problem, contribution plots are not able to properly isolate the faults under consideration. This motivates the use of the CUSUM-based PCA technique, previously used for detection, to unambiguously diagnose the faults. The diagnosis scheme constructs a family of CUSUM-based PCA models, one per fault, and tests whether the statistical thresholds of a particular fault model are exceeded, indicating the occurrence or absence of the corresponding fault. Although the CUSUM-based techniques were successful in detecting abnormal situations and isolating the faults, long time intervals were required for both detection and diagnosis. The potential economic impact of these delays motivates the second main objective of this project: a methodology to quantify the potential economic loss due to unobserved faults when standard statistical monitoring charts are used. Since most chemical and petrochemical plants operate in closed loop, the interaction with the control system is also explicitly considered. An optimization problem is formulated to search for the optimal tradeoff between fault observability and closed-loop performance. This problem is solved in the frequency domain using approximate closed-loop transfer function models, and in the time domain using a simulation-based approach. The time-domain optimization is applied to the TEP to solve for the optimal controller tuning parameters that minimize an economic cost of the process.
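    The detection idea above can be sketched compactly: run a CUSUM over the measurements, fit a PCA model on normal-operation CUSUM data, and monitor the T2 and Q statistics of new samples against thresholds. The sketch below is a minimal illustration of that combination, not the thesis implementation; the class and variable names, the synthetic data, and the choice of three components are assumptions.

```python
import numpy as np

def cusum(X):
    """Column-wise cumulative sum of deviations from the nominal mean."""
    return np.cumsum(X - X.mean(axis=0), axis=0)

class PCAMonitor:
    """PCA model with Hotelling T2 and Q (squared prediction error) statistics."""
    def __init__(self, X_normal, n_components):
        self.mu, self.sd = X_normal.mean(axis=0), X_normal.std(axis=0)
        Xs = (X_normal - self.mu) / self.sd
        _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
        self.P = Vt[:n_components].T                   # loadings, shape (m, a)
        self.var = (s**2 / (len(Xs) - 1))[:n_components]

    def statistics(self, x):
        xs = (x - self.mu) / self.sd
        t = xs @ self.P                                # scores in the model subspace
        r = xs - t @ self.P.T                          # residual outside the model
        return np.sum(t**2 / self.var), r @ r          # (T2, Q)

# Fit on the CUSUM of normal operation; a sample is flagged when T2 or Q
# exceeds a threshold estimated from the normal-operation statistics.
rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 10))                  # stand-in for TEP data
monitor = PCAMonitor(cusum(X_normal), n_components=3)
```

    The CUSUM step is what raises the fault-to-noise ratio before PCA sees the data, which is why slowly drifting or small-magnitude faults that are invisible to PCA alone can become observable.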

    Fault detection, identification and economic impact assessment for a pressure leaching process

    Thesis (MEng)--Stellenbosch University, 2017. ENGLISH SUMMARY: Modern chemical and metallurgical processes consist of numerous process units with several complex interactions between them. This increased process complexity has in turn amplified the effect of faulty process conditions on overall process performance. Fault diagnosis forms a critical part of a process monitoring strategy and is crucial for improved process performance. The increased number of process measurements readily available in modern process plants allows for more complex data-driven fault diagnosis methods. Linear and nonlinear feature extraction methods are popular multivariate fault diagnosis procedures in the literature. However, these methods have yet to find widespread industrial application, and they are seldom evaluated on real-world modern chemical processes. This lack of real-world application has in turn led to the absence of economic performance assessments evaluating the potential profitability of these fault diagnosis methods. The aim of this study is to design and investigate the performance of a fault diagnosis strategy with both traditional fault diagnosis performance metrics and an economic impact assessment (EIA). A complex dynamic process model of the pressure leaching process at a base metal refinery (BMR) was developed by Dorfling (2012). The model was recently updated by Miskin (2015), who included the actual process control layers present at the BMR. A fault library was developed through consultation with experts at the BMR and incorporated into the dynamic model by Miskin (2015). The pressure leaching dynamic model forms the basis of the investigation. Principal component analysis (PCA) and kernel PCA (KPCA) were employed as feature extraction methods. Traditional and reconstruction-based contributions were employed as fault identification methods. Economic performance functions (EPFs) were developed from expert knowledge at the plant. Fault diagnosis performance was evaluated through the traditional performance metrics and the EPFs. Both PCA and KPCA provided improved fault detection results compared to a simple univariate method; PCA provided significantly improved detection results for five of the eight faults evaluated. Fault identification results suffered from significant fault smearing. The significant fault detection results did not translate into a significant economic benefit: the EIA showed the process to be robust against faults when implementing a basic univariate fault detection approach. Recommendations were made for possible industrial application and for future work focusing on EIAs, training data selection, and fault smearing.
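    Of the two identification methods named above, the reconstruction-based contribution (RBC) has a simple closed form for the Q statistic. The sketch below illustrates it under stated assumptions: x is a standardized sample and P is a fitted PCA loading matrix, both hypothetical names; this is the standard RBC formula, not code from the thesis.

```python
import numpy as np

def rbc_q(x, P):
    """Reconstruction-based contribution of each variable to the Q statistic.

    x: standardized sample, shape (m,); P: PCA loadings, shape (m, a).
    With C = I - P @ P.T (the residual-space projector),
    RBC_j = (C @ x)[j]**2 / C[j, j]; the largest entry points to the
    variable whose reconstruction best explains the alarm.
    """
    C = np.eye(P.shape[0]) - P @ P.T
    r = C @ x
    return r**2 / np.diag(C)
```

    Fault smearing, as reported above, shows up in such contribution vectors as large values on healthy variables that are merely correlated with the faulty one.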

    Deep Recurrent Neural Networks for Fault Detection and Classification

    Deep learning is one of the fastest growing research topics in process systems engineering due to the ability of deep learning models to represent and predict nonlinear behavior in many applications. However, the application of these models in chemical engineering is still in its infancy. Thus, a key goal of this work is to assess the capabilities of deep-learning-based models in chemical engineering applications. The specific focus of the current work is the detection and classification of faults in a large industrial plant involving several chemical unit operations. Toward this goal, we compare the efficacy of a deep-learning-based algorithm to other state-of-the-art multivariate statistical techniques for fault detection and classification. The comparison is conducted using simulated data from a chemical benchmark case study that has often been used to test fault detection algorithms, the Tennessee Eastman Process (TEP). A real-time online scheme is proposed that enhances the detection and classification of all the faults occurring in the simulation. This is accomplished by formulating a fault-detection model capable of describing the dynamic nonlinear relationships among the output and manipulated variables that can be measured in the TEP during the occurrence or absence of faults. In particular, we focus on specific faults that cannot be correctly detected and classified by traditional statistical methods or by simpler artificial neural networks (ANNs). To increase the detectability of these faults, a deep recurrent neural network (RNN) is trained that uses dynamic information of the process along a pre-specified time horizon. We first studied the effect of the number of samples fed into the RNN in order to capture more dynamic information about the faults, and showed that accuracy increases with this number: average classification rates were 79.8%, 80.3%, 81%, and 84% for windows of 5, 15, 25, and 100 samples, respectively. To increase the classification accuracy of difficult-to-observe faults, we developed a hierarchical structure in which faults are grouped into subsets and classified with separate models for each subset. To improve classification for faults whose responses have a low signal-to-noise ratio, excitation was added to the process through a pseudo-random signal (PRS). The hierarchical structure increases the signal-to-noise ratio of faults 3 and 9, which translates into an improvement in classification accuracy for these two faults of 43.0% and 17.2%, respectively, for 100-sample windows, and of 8.7% and 23.4% for 25-sample windows. Applying a PRS to excite the system dramatically increased the classification rate of the normal state to 88.7% and of fault 15 to 76.4%. The proposed method is therefore able to considerably improve both the detection and classification accuracy of several observable faults, as well as of faults considered unobservable by other detection algorithms. Overall, the comparison of the deep learning algorithms with dynamic PCA (principal component analysis) techniques showed a clear superiority of the deep learning techniques in classifying faults in nonlinear dynamic processes. Finally, we apply these same techniques to different operational modes of the TEP simulation, achieving comparable improvements in classification accuracy.
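    As a concrete illustration of the windowed-RNN classifier described above, the PyTorch sketch below stacks LSTM layers over a horizon of past measurements and classifies each window into one of 21 classes (20 TEP faults plus normal operation). The layer sizes, 25-sample window, and 52 input variables are illustrative assumptions, not the architecture used in the work.

```python
import torch
import torch.nn as nn

class FaultRNN(nn.Module):
    """Deep RNN that maps a window of process measurements to a fault class."""
    def __init__(self, n_vars=52, hidden=64, n_layers=2, n_classes=21):
        super().__init__()
        self.rnn = nn.LSTM(n_vars, hidden, num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, window, n_vars)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])       # classify from the last hidden state

model = FaultRNN()
window = torch.randn(8, 25, 52)            # 8 windows of 25 samples, 52 variables
logits = model(window)                     # (8, 21) class scores
```

    The hierarchical scheme in the abstract would train one such model per fault subset; longer windows feed the network more dynamic information, consistent with the accuracy gains reported from 5- to 100-sample windows.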

    Classification Algorithms based on Generalized Polynomial Chaos

    Classification is one of the most important tasks in process systems engineering. Since most classification algorithms are based on mathematical models, they inherently involve the quantification and propagation of model uncertainty onto the variables used for classification. Such uncertainty may originate from a lack of knowledge of the underlying process or from intrinsic time-varying phenomena such as unmeasured disturbances and noise. Model uncertainty has often been treated probabilistically, with Monte Carlo (MC) sampling methods the approach of choice for quantifying its effects. However, MC methods may be computationally prohibitive, especially for nonlinear complex systems and systems involving many variables. Alternatively, stochastic spectral methods such as the generalized polynomial chaos (gPC) expansion have emerged as a promising technique for uncertainty quantification and propagation. These methods approximate the stochastic variables by a truncated gPC series whose coefficients can be calculated by Galerkin projection onto the mathematical models describing the process. With this approach, gPC-based methods can converge much faster to a solution than MC-type sampling methods. Using gPC-based uncertainty quantification and propagation, the current project focuses on three problems: (i) fault detection and diagnosis (FDD) in the presence of stochastic faults entering the system; (ii) simultaneous optimal tuning of an FDD algorithm and a feedback controller to enhance the detectability of faults while mitigating closed-loop process variability; and (iii) classification of apoptotic versus normal cells using morphological features identified by a stochastic image segmentation algorithm in combination with machine learning techniques. The algorithms developed in this work are shown to be highly efficient in terms of computational time, improved fault diagnosis, and accurate classification of apoptotic versus normal cells.
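    As a small worked illustration of a gPC surrogate: for a scalar model output driven by one standard-normal input, the expansion uses probabilists' Hermite polynomials, and the mean and variance fall out of the coefficients by orthogonality. The thesis computes coefficients by intrusive Galerkin projection; the self-contained sketch below swaps in non-intrusive least-squares regression instead, and the model function is a made-up stand-in.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    """Hypothetical scalar process response under an uncertain input xi."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

order = 4
rng = np.random.default_rng(1)
xi = rng.standard_normal(200)          # samples of the standard-normal input
Phi = hermevander(xi, order)           # basis matrix: columns He_0(xi)..He_4(xi)
coef, *_ = np.linalg.lstsq(Phi, model(xi), rcond=None)

# Orthogonality gives the moments directly: E[He_n] = 0, E[He_n**2] = n!
mean = coef[0]
variance = sum(c**2 * factorial(n) for n, c in enumerate(coef[1:], start=1))
```

    A few hundred model evaluations fix the surrogate once; afterwards, statistics and classification thresholds come from the coefficients alone, which is the source of the speedup over repeated MC sampling.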