
    A Bond Graph Modeling for Health Monitoring and Diagnosis of the Tennessee Eastman Process

    Data-driven fault detection and diagnosis approaches are widely applicable in many real-time practical applications. Among these applications, the industrial benchmark of the Tennessee Eastman Process (TEP) is widely used to illustrate and compare control and monitoring studies. However, due to the complexity of the physical phenomena occurring in this process, no model-based approach for fault diagnosis has been developed, and most diagnosis approaches applied to the TEP are based on experience and qualitative reasoning that exploit the massive amount of available measurement data. In this paper, we propose to use the Bond Graph formalism as a multidisciplinary, energy-based approach that makes it possible to obtain a graphical nonlinear model of the TEP, not only for simulation purposes but also for monitoring tasks, by generating formal fault indicators. In this study, the proposed BG model is validated against experimental data, and the problem of designing the TEP model is thereby overcome.
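    As a hedged illustration of what a bond-graph-derived fault indicator looks like, the sketch below evaluates an analytical redundancy relation for a single storage (C) element of a simple tank, not the actual TEP model; the capacitance, flows, fault size, and threshold are assumed values chosen only for the example.

        # Illustrative only: a residual r = f_in - f_out - C*dp/dt derived from the
        # balance of a single C element; a fault is declared when |r| exceeds a
        # threshold tuned on fault-free data. All numbers are assumptions.
        import numpy as np

        C, dt = 2.0, 0.1                                  # assumed capacitance, sample time
        t = np.arange(0.0, 60.0, dt)
        rng = np.random.default_rng(0)

        leak = np.zeros(t.size)
        leak[300:] = 0.3                                  # unmeasured leak appears at t = 30 s
        f_in = 1.0 + 0.05 * rng.standard_normal(t.size)   # measured inflow
        f_out = np.full(t.size, 0.5)                      # measured outflow (blind to the leak)
        p = np.cumsum((f_in - f_out - leak) * dt / C)     # simulated pressure measurement

        residual = f_in - f_out - C * np.gradient(p, dt)  # the formal fault indicator (ARR)
        threshold = 6 * residual[:250].std()              # tuned on fault-free samples
        alarms = np.abs(residual) > threshold
        print("first alarm at t =", round(t[alarms][0], 1) if alarms.any() else "none")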

    Plant-Wide Diagnosis: Cause-and-Effect Analysis Using Process Connectivity and Directionality Information

    Production plants used in the modern process industry must produce products that meet stringent environmental, quality and profitability constraints. In such integrated plants, non-linearity and strong process dynamic interactions among process units complicate root-cause diagnosis of plant-wide disturbances because disturbances may propagate to units at some distance away from the primary source of the upset. Similarly, implemented advanced process control strategies, backup and recovery systems, use of recycle streams and heat integration may hamper detection and diagnostic efforts. It is important to track down the root-cause of a plant-wide disturbance because once corrective action is taken at the source, secondary propagated effects can be quickly eliminated with minimum effort and reduced downtime, with the resultant positive impact on process efficiency, productivity and profitability. In order to diagnose the root-cause of disturbances that manifest plant-wide, it is crucial to incorporate and utilize knowledge about the overall process topology or interrelated physical structure of the plant, such as is contained in Piping and Instrumentation Diagrams (P&IDs). Traditionally, process control engineers have intuitively referred to the physical structure of the plant by visual inspection and manual tracing of fault propagation paths within the process structures, such as the process drawings on printed P&IDs, in order to make logical conclusions based on the results from data-driven analysis. This manual approach, however, is prone to various sources of error and can quickly become complicated in real processes. The aim of this thesis, therefore, is to establish innovative techniques for the electronic capture and manipulation of process schematic information from large plants such as refineries in order to provide an automated means of diagnosing plant-wide performance problems. This report also describes the design and implementation of a computer application program that integrates: (i) process connectivity and directionality information from intelligent P&IDs, (ii) results from data-driven cause-and-effect analysis of process measurements, and (iii) process know-how, to help process control engineers and plant operators gain process insight. This work explored intelligent P&IDs, created with AVEVA® P&ID, a Computer Aided Design (CAD) tool, and exported as an ISO 15926 compliant, platform- and vendor-independent, text-based XML description of the plant. The XML output was processed by a software tool developed in the Microsoft® .NET environment in this research project to computationally generate a connectivity matrix that shows plant items and their connections. The connectivity matrix produced can be exported to the Excel® spreadsheet application as a basis for other applications and has served as a precursor to other research work. The final version of the developed software tool links statistical results of cause-and-effect analysis of process data with the connectivity matrix to simplify the cause-and-effect analysis and gain insight into it using the connectivity information. Process know-how and understanding are incorporated to generate logical conclusions. The thesis presents a case study of an atmospheric crude heating unit as an illustrative example to drive home key concepts and also describes an industrial case study involving refinery operations. 
In the industrial case study, in addition to confirming the root-cause candidate, the developed software tool was tasked with determining the physical sequence of the fault propagation path within the plant. This was then compared with the hypothesis about the disturbance propagation sequence generated by the purely data-driven method. The results show a high degree of overlap, which helps to validate the statistical data-driven technique and to easily identify any spurious results from the data-driven multivariable analysis. This significantly increases control engineers' confidence in the data-driven methods used for root-cause diagnosis. The thesis concludes with a discussion of the approach and presents ideas for further development of the methods.
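    A minimal sketch of the connectivity-matrix idea is given below; it is not the thesis' .NET tool, and the plant items, connections, and traversal are illustrative stand-ins for what might be extracted from an ISO 15926 XML export.

        # Illustrative sketch: build a connectivity (adjacency) matrix from a list of
        # directed plant-item connections and trace items downstream of a suspected
        # root cause. Item names are hypothetical.
        import numpy as np

        items = ["FEED_PUMP", "PREHEATER", "FURNACE", "COLUMN", "CONDENSER"]
        connections = [("FEED_PUMP", "PREHEATER"), ("PREHEATER", "FURNACE"),
                       ("FURNACE", "COLUMN"), ("COLUMN", "CONDENSER"),
                       ("CONDENSER", "PREHEATER")]          # recycle / heat-integration link

        index = {name: i for i, name in enumerate(items)}
        A = np.zeros((len(items), len(items)), dtype=int)   # connectivity matrix
        for src, dst in connections:
            A[index[src], index[dst]] = 1

        def downstream(start, A, items):
            """Iteratively collect all items reachable downstream of `start`."""
            seen, frontier = set(), [index[start]]
            while frontier:
                i = frontier.pop()
                for j in np.flatnonzero(A[i]):
                    if j not in seen:
                        seen.add(j)
                        frontier.append(j)
            return [items[j] for j in sorted(seen)]

        # Candidate propagation path of a disturbance originating at the preheater;
        # note the recycle makes the preheater reachable from itself.
        print(downstream("PREHEATER", A, items))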

    Plantwide simulation and monitoring of offshore oil and gas production facility

    Monitoring is one of the major concerns on an offshore oil and gas production platform, since access to offshore facilities is difficult. It is also quite challenging to extract oil and gas safely in such a harsh environment, and any abnormality may lead to a catastrophic event. Process data, including all possible faulty scenarios, are required to build an appropriate monitoring system. Since plant-wide process data are not available in the literature, a dynamic model and simulation of an offshore oil and gas production platform is developed using Aspen HYSYS. Modeling and simulation are useful tools for designing a production plant and predicting its behavior accurately. The model was built based on the gas processing plant at the North Sea platform reported in Voldsund et al. (2013). Several common faults from different fault categories were simulated in the dynamic system, and their impacts on the overall hydrocarbon production were analyzed. The simulated data are then used to build a monitoring system for each of the faulty states. A new monitoring method has been proposed by combining Principal Component Analysis (PCA) and Dynamic PCA (DPCA) with an Artificial Neural Network (ANN). Applying an ANN directly to process systems is difficult because it involves a very large number of input neurons to model the system; training such a large-scale network is time-consuming and provides poor accuracy with a high error rate. In the PCA-ANN and DPCA-ANN monitoring systems, PCA and DPCA are used to reduce the dimension of the training data set and extract the main features of the measured variables. Subsequently, the ANN uses these lower-dimensional score vectors to build a training model and classify the abnormalities. It is found that the proposed approach reduces the time needed to train the ANN and successfully detects, diagnoses, and classifies the faults with a high accuracy rate.
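    The following sketch illustrates the PCA-ANN idea on synthetic stand-in data (not the HYSYS-simulated platform data), assuming NumPy and scikit-learn; the variable count, fault magnitude, and network size are arbitrary choices made only for the example.

        # Minimal sketch: reduce the measured variables to a few principal-component
        # scores, then train a small neural network on the scores to classify
        # normal vs. faulty operation.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X_normal = rng.normal(0.0, 1.0, size=(500, 40))       # 40 measured variables
        X_fault = rng.normal(0.0, 1.0, size=(500, 40)) + 1.5  # mean shift stands in for a fault
        X = np.vstack([X_normal, X_fault])
        y = np.array([0] * 500 + [1] * 500)                   # 0 = normal, 1 = faulty

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        pca = PCA(n_components=5).fit(X_tr)                   # feature extraction
        ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        ann.fit(pca.transform(X_tr), y_tr)                    # train on the score vectors

        print("test accuracy:", ann.score(pca.transform(X_te), y_te))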

    Consequence Estimation and Root Cause Diagnosis of Rare Events in Chemical Process Industry

    In chemical process industries (CPIs), rare events are low-frequency, high-consequence events caused by process disturbances (i.e., root causes). To alleviate the impact of rare events, it is crucial to understand their effects through consequence estimation and to provide efficient troubleshooting advice through root cause diagnosis. For these analyses, traditional data-driven methods cannot be used due to the lack of a database for low-frequency rare events. This entails the use of a first-principles method or a Bayesian network (BN)-based probabilistic model. However, both of these models are computationally expensive due to the need to solve coupled differential equations and the presence of a large number of process variables in CPIs. Additionally, although probabilistic models deal with data scarcity, they do not account for source-to-source variability in data or for the cyclic loops that are prevalent in CPIs because of various control loops and process variable couplings. Failing to account for these factors results in inaccurate root cause diagnosis. To handle these challenges, we first focus on developing computationally efficient models for consequence estimation of rare events. Specifically, we use reduced-order modeling techniques to construct a computationally efficient model for consequence estimation of rare events. Further, for computational efficiency in root cause diagnosis, we identify key process variables (KPVs) using a sequential combination of information gain and the Pearson correlation coefficient. Additionally, we use the KPVs with a Hierarchical Bayesian model that considers rare events from different sources and hence accounts for source-to-source variability in data. After achieving computational efficiency, we focus on improving the diagnosis accuracy. Since existing BN-based probabilistic models cannot account for cyclic loops in CPIs due to the acyclic nature of BNs, we design a modified BN that converts the weakest causal relation of a cyclic loop into a temporal relation, thereby decomposing the network into an acyclic one over the time horizon. Next, to discover significant cyclic loops in the BN, we develop a direct transfer entropy (DTE)-based methodology to learn the BN. Since the key to discovering cyclic loops is finding the correct causality between process variables, DTE quantifies causality effectively by accounting for the effects of their common source variables.
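    The sketch below illustrates how directional influence between two process variables can be quantified with a plain pairwise transfer entropy (one-sample history, coarse histogram binning); the paper's direct transfer entropy additionally conditions on common source variables, which is not reproduced here, and the signals are synthetic.

        # TE from x to y: I(y_t ; x_{t-1} | y_{t-1}), estimated from 3-D histograms.
        import numpy as np

        def transfer_entropy(x, y, bins=8):
            yt, yp, xp = y[1:], y[:-1], x[:-1]
            joint3, _ = np.histogramdd(np.column_stack([yt, yp, xp]), bins=bins)
            p3 = joint3 / joint3.sum()
            p_yp_xp = p3.sum(axis=0)          # p(y_{t-1}, x_{t-1})
            p_yt_yp = p3.sum(axis=2)          # p(y_t, y_{t-1})
            p_yp = p3.sum(axis=(0, 2))        # p(y_{t-1})
            te = 0.0
            for i, j, k in zip(*np.nonzero(p3)):
                te += p3[i, j, k] * np.log(
                    p3[i, j, k] * p_yp[j] / (p_yp_xp[j, k] * p_yt_yp[i, j]))
            return te

        rng = np.random.default_rng(1)
        x = rng.normal(size=5000)
        y = np.zeros_like(x)
        for t in range(1, x.size):            # y is driven by past x, not the reverse
            y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

        print("TE x->y:", transfer_entropy(x, y))   # expected to be clearly larger
        print("TE y->x:", transfer_entropy(y, x))   # than the reverse direction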

    Towards system-level prognostics: Modeling, uncertainty propagation and system remaining useful life prediction

    Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. Until now, however, prognostics has often been approached from a component-level view, without considering interactions between components and effects of the environment, leading to mispredictions of the failure time of complex systems. In this work, a system-level prognostics approach is proposed. This approach is based on a new modeling framework, the inoperability input-output model (IIM), which makes it possible to account for interactions between components and mission-profile effects and which can be applied to heterogeneous systems. A new methodology for online joint system RUL (SRUL) prediction and model parameter estimation is then developed based on particle filtering (PF) and gradient descent (GD). In detail, the state of health of the system components is estimated and predicted in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system, the Tennessee Eastman Process. The results highlight its effectiveness in predicting the SRUL within a reasonable computing time.
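    As a rough illustration of the probabilistic machinery involved, the sketch below runs a basic particle filter over an assumed exponential degradation model and reads the RUL off the particles at a failure threshold; this is generic PF code, not the IIM-based methodology of the paper, and all dynamics and noise levels are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n_particles, dt = 2000, 1.0
        decay_true, soh_fail = 0.01, 0.2                     # assumed decay rate / failure limit

        soh = np.ones(n_particles)                           # particles over state of health
        decay = rng.uniform(0.005, 0.02, n_particles)        # jointly estimated model parameter

        soh_true = 1.0
        for _ in range(50):                                  # 50 noisy health measurements
            soh_true *= np.exp(-decay_true * dt)
            z = soh_true + rng.normal(0.0, 0.02)

            soh *= np.exp(-decay * dt)                       # propagate particles
            w = np.exp(-0.5 * ((z - soh) / 0.02) ** 2)       # Gaussian measurement likelihood
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
            soh, decay = soh[idx], decay[idx]

        # RUL per particle: time for the exponential decay to reach the failure threshold.
        rul = np.log(soh / soh_fail) / decay
        print("median RUL:", round(float(np.median(rul)), 1),
              "steps; 5th-95th percentile:", np.percentile(rul, [5, 95]).round(1))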

    Failure Diagnosis and Prognosis of Safety Critical Systems: Applications in Aerospace Industries

    Many safety-critical systems such as aircraft, spacecraft, and large power plants are required to operate reliably and efficiently without any performance degradation. As a result, fault diagnosis and prognosis (FDP) is a research topic of great interest for these systems. FDP systems attempt to use historical and current data of a system, collected from various measurements, to detect faults, diagnose the types of possible failures, and predict and manage failures in advance. This thesis deals with FDP of safety-critical systems. For this purpose, two critical systems, a multifunctional spoiler (MFS) and a hydro-control valve system, are considered, and some challenging FDP issues are investigated. This research work consists of three general directions, i.e., monitoring, failure diagnosis, and prognosis. The proposed FDP methods are based on data-driven and model-based approaches. The main aim of the data-driven methods is to utilize measurement data from the system and forecast the remaining useful life (RUL) of the faulty components accurately and efficiently. In this regard, two different methods are developed. A modular FDP method based on a divide-and-conquer strategy is presented for the MFS system. The modular structure contains three components: 1) a fault diagnosis unit, 2) a failure parameter estimation unit, and 3) an RUL unit. The fault diagnosis unit identifies types of faults based on an integration of a neural network (NN) method and the discrete wavelet transform (DWT) technique. The failure parameter estimation unit observes the failure parameter via a distributed neural network. Afterward, the RUL of the system is predicted by an adaptive Bayesian method. In another work, an innovative data-driven FDP method is developed for hydro-control valve systems. The idea is to use the redundancy in multi-sensor data and enhance the performance of the FDP system. Therefore, a combination of a feature selection method and the support vector machine (SVM) method is applied to select proper sensors for monitoring the hydro-valve system and to isolate types of fault. Then, the adaptive neuro-fuzzy inference system (ANFIS) method is used to estimate the failure path. Similarly, an online Bayesian algorithm is implemented for forecasting the RUL. Model-based methods employ a high-fidelity physics-based model of a system for the prognosis task. In this thesis, a novel model-based approach based on an integrated extended Kalman filter (EKF) and Bayesian method is introduced for the MFS system. To monitor the MFS system, a residual estimation method using the EKF is performed to capture the progress of the failure. Later, a transformation is utilized to obtain a new measure to estimate the degradation path (DP). Moreover, a recursive Bayesian algorithm is invoked to predict the RUL. Finally, a relative accuracy (RA) measure is utilized to assess the performance of the proposed methods.
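    The following is an illustrative-only sketch of the "feature selection + SVM" step for fault isolation, using mutual-information ranking and synthetic sensor data as stand-ins; the thesis' specific selection method and the hydro-valve measurements are not reproduced here.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n_per_class, n_sensors = 300, 20
        y = np.repeat([0, 1, 2], n_per_class)                 # healthy + two fault types
        X = rng.normal(size=(y.size, n_sensors))
        X[y == 1, 2] += 2.0                                   # fault 1 shows up on sensor 2
        X[y == 2, 7] += 2.0                                   # fault 2 shows up on sensor 7

        clf = make_pipeline(
            SelectKBest(mutual_info_classif, k=5),            # keep the 5 most informative sensors
            SVC(kernel="rbf", C=1.0, gamma="scale"),
        )
        scores = cross_val_score(clf, X, y, cv=5)
        print("cross-validated fault isolation accuracy:", scores.mean().round(3))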

    Application of Deep Learning in Chemical Processes: Explainability, Monitoring and Observability

    The last decade has seen remarkable advances in speech, image, and language recognition tools that have been made available to the public through computer and mobile device applications. Most of these significant improvements were achieved by Artificial Intelligence (AI) / deep learning (DL) algorithms (Hinton et al., 2006), which generally refer to a set of novel neural network architectures and algorithms such as long short-term memory (LSTM) units, convolutional networks (CNNs), autoencoders (AEs), t-distributed stochastic neighbor embedding (t-SNE), etc. Although neural networks are not new, due to a combination of relatively novel improvements in methods for training the networks and the availability of increasingly powerful computers, one can now model much more complex nonlinear dynamic behaviour by using complex structures of neurons, i.e. more layers of neurons, than ever before (Goodfellow et al., 2016). However, it is recognized that training neural nets of such complex structures requires a vast amount of data. In this sense, manufacturing processes are good candidates for deep learning applications since they utilize computers and information systems for monitoring and control, thus generating a massive amount of data. This is especially true in pharmaceutical companies such as Sanofi Pasteur, the industrial collaborator for the current study, where large data sets are routinely stored for monitoring and regulatory purposes. Although novel DL algorithms have been applied with great success in image analysis, speech recognition, and language translation, their applications to chemical processes and pharmaceutical processes, in particular, are scarce. The current work deals with the investigation of deep learning in process systems engineering for three main areas of application: (i) developing a deep learning classification model for profit-based operating regions; (ii) developing both supervised and unsupervised process monitoring algorithms; and (iii) observability analysis. It is recognized that most empirical or black-box models, including DL models, have good generalization capabilities but are difficult to interpret. For example, using these methods it is difficult to understand how a particular decision is made, which input variable/feature most influences the decision made by the DL model, etc. This understanding is expected to shed light on why biased results can be obtained or why a wrong class is predicted with a higher probability in classification problems. Hence, a key goal of the current work is to derive process insights from DL models. To this end, the work proposes both supervised and unsupervised learning approaches to identify regions of process inputs that result in corresponding regions, i.e. ranges of values, of process profit. Furthermore, it will be shown that the ability to better interpret the model by identifying the inputs that are most informative can be used to reduce over-fitting. To this end, a neural network (NN) pruning algorithm is developed that provides important physical insights into the system, regarding which inputs have a positive or negative effect on the profit function, and that helps detect significant changes in the process phenomena. It is shown that pruning of input variables significantly reduces the number of parameters to be estimated and improves the classification test accuracy for both case studies: the Tennessee Eastman Process (TEP) and an industrial vaccine manufacturing process. 
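    A simplified sketch of the input-pruning idea follows; it is not the thesis' pruning algorithm, and it simply ranks inputs by the magnitude of their first-layer weights in a trained network, drops the weakest ones, and retrains on the reduced input set, using synthetic data in which only a few inputs matter.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        X = rng.normal(size=(1000, 30))
        y = (X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7] > 0).astype(int)   # only 3 inputs matter
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        full = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        full.fit(X_tr, y_tr)

        importance = np.abs(full.coefs_[0]).sum(axis=1)        # per-input weight magnitude
        keep = np.argsort(importance)[-5:]                     # retain the 5 strongest inputs
        pruned = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        pruned.fit(X_tr[:, keep], y_tr)

        print("kept inputs:", sorted(keep.tolist()))
        print("full model accuracy:  ", full.score(X_te, y_te))
        print("pruned model accuracy:", pruned.score(X_te[:, keep], y_te))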
The ability to store a large amount of data has permitted the use of deep learning (DL) and optimization algorithms in the process industries. In order to meet high levels of product quality, efficiency, and reliability, a process monitoring system is needed. The two aspects of Statistical Process Control (SPC) are fault detection and diagnosis (FDD). Many multivariate statistical methods like PCA and PLS and their dynamic variants have been extensively used for fault detection. However, the inherent non-linearities in the process pose challenges when using these linear models. Numerous deep learning FDD approaches have also been developed in the literature. However, contribution plots for identifying the root cause of a fault have not been derived from Deep Neural Networks (DNNs). To this end, the supervised fault detection problem in the current work is formulated as a binary classification problem, while the supervised fault diagnosis problem is formulated as a multi-class classification problem to identify the type of fault. Then, the concept of explainability of DNNs is explored, with particular application to the FDD problem. The developed methodology is demonstrated on the TEP with non-incipient faults. Incipient faults are faulty conditions where the signal-to-noise ratio is small, and they have not been widely studied in the literature. To address this, a hierarchical dynamic deep learning algorithm is developed specifically for the detection and diagnosis of incipient faults. One of the major drawbacks of both methods described above is the need for labeled data, i.e. normal operation and faulty operation data. From an industrial point of view, most data in an industrial setting, especially for biochemical processes, are obtained during normal operation, and faulty data may not be available or may be insufficient. Hence, we also develop an unsupervised DL approach for process monitoring. It involves a novel objective function and an NN architecture that is tailored to detect faults effectively. The idea is to learn the distribution of normal operation data to differentiate among the fault conditions. In order to demonstrate the advantages of the proposed methodology for fault detection, systematic comparisons are conducted with Multiway Principal Component Analysis (MPCA) and Multiway Partial Least Squares (MPLS) on an industrial-scale Penicillin Simulator. Past investigations reported that the variability in productivity in Sanofi's Pertussis Vaccine Manufacturing process may be highly correlated with biological phenomena, i.e. oxidative stresses, that are not routinely monitored by the company. While the company monitors and stores a large amount of fermentation data, these data may not be sufficiently informative about the underlying phenomena affecting the level of productivity. Furthermore, since the addition of new sensors in pharmaceutical processes requires extensive and expensive validation and certification procedures, it is very important to assess the potential ability of a sensor to observe relevant phenomena before its actual adoption in the manufacturing environment. This motivates the study of the observability of the phenomena from available data. An algorithm is proposed to check the observability for the classification task from the observed data (measurements). The proposed methodology makes use of a Supervised AE to reduce the dimensionality of the inputs. 
Thereafter, a criterion on the distance between the samples is used to calculate the percentage of overlap between the defined classes. The proposed algorithm is tested on the benchmark Tennessee Eastman process and then applied to the industrial vaccine manufacturing process.
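    The sketch below shows the generic shape of such an unsupervised monitoring scheme (not the thesis' tailored objective function or architecture): an autoencoder is trained on normal-operation data only, and samples whose reconstruction error exceeds a threshold learned from normal data are flagged as faulty. Data and network sizes are arbitrary.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        latent = rng.normal(size=(2000, 4))                     # normal operation driven by 4 factors
        W = rng.normal(size=(4, 20))                            # 20 measured variables
        X_normal = latent @ W + 0.1 * rng.normal(size=(2000, 20))
        X_fault = X_normal[:200] + 3.0                          # faulty batch drifts off that structure

        ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16),       # encoder / bottleneck / decoder
                          max_iter=2000, random_state=0)
        ae.fit(X_normal, X_normal)                              # learn to reproduce normal data only

        def recon_error(model, X):
            return ((model.predict(X) - X) ** 2).mean(axis=1)

        threshold = np.percentile(recon_error(ae, X_normal), 99)
        alarms = recon_error(ae, X_fault) > threshold
        print("detection rate on the faulty batch:", alarms.mean())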

    Space benefits: The secondary application of aerospace technology in other sectors of the economy

    A 'Benefit Briefing Notebook' was prepared for the NASA Technology Utilization Office to provide accurate, convenient, and integrated resource information on the transfer of aerospace technology to other sectors of the U.S. economy. The contents are divided into three sections: (1) transfer overview, (2) benefit cases, and (3) indexes. The transfer overview section provides a general perspective for technology transfer from NASA to other organizations. In addition to a description of the basic transfer modes, the selection criteria for notebook examples and the kinds of benefit data they contain are also presented. The benefits section is subdivided into nineteen subject areas. Each subsection presents one or more key issues of current interest, with discrete transfer cases related to each key issue. Additional transfer examples relevant to each subject area are then presented. Pertinent transfer data are given at the end of each example.

    Space Benefits: The secondary application of aerospace technology in other sectors of the economy

    Some 585 examples of the beneficial use of NASA aerospace technology by public and private organizations are described to demonstrate the effects of mission-oriented programs on technological progress in the United States. General observations regarding technology transfer activity are presented. Benefit cases are listed in 20 categories along with pertinent information such as the communication link with NASA; the DRI transfer example file number; individual case numbers associated with the technology and examples used; and the date of the latest contract with user organizations. Subject, organization, geographic, and field center indexes are included.

    Benefits briefing notebook: The secondary application of aerospace technology in other sectors of the economy

    Resource information on the transfer of aerospace technology to other sectors of the U.S. economy is presented. The contents of this notebook are divided into three sections: (1) benefit cases, (2) transfer overview, and (3) indexes. Transfer examples relevant to each subject area are presented. Pertinent transfer data are given. The Transfer Overview section provides a general perspective for technology transfer from NASA to other organizations. In addition to a description of the basic transfer modes, the selection criteria for notebook examples and the kinds of benefit data they contain are also presented.