11 research outputs found

    Detection of Intermittent Oscillation in Process Control Loops with Semi-Supervised Learning

    Get PDF
    Oscillations in control loops indicate poor loop performance. Because oscillations occur frequently in industrial process control loops, they need to be reduced so that the loops can work properly, and the first step toward reduction is detection. Intermittent oscillation is one type that is particularly difficult to detect. The smart factory concept encourages the development of online, machine learning-based intermittent oscillation detection systems. In this study, therefore, an online intermittent oscillation detection program is built using a K-nearest neighbor (KNN)-based semi-supervised learning (SSL) method; the SSL method applied is self-training. The training data were obtained from a simulation of the Tennessee Eastman Process. The data are segmented with a sliding window and time-series features are extracted from each segment. The extracted features are used to build a model that detects oscillations caused by stiction, tuning errors, and external disturbances in the reactor. The model is deployed online using sliding windows and MQTT. The best accuracy and F1-score obtained are 96.15% and 95.15%, respectively. In online detection, the model identifies the type of oscillation in an average of 305 seconds.
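    A minimal sketch of the idea described above (not the authors' code): scikit-learn's SelfTrainingClassifier wrapped around a KNN base learner, applied to features extracted from sliding windows of a signal. The window size, the feature set and the toy signals below are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

def window_features(signal, window=200, step=50):
    """Segment a 1-D signal with a sliding window and extract simple features."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
        feats.append([seg.mean(), seg.std(),
                      spectrum.argmax(),                          # dominant frequency bin
                      spectrum.max() / (spectrum.sum() + 1e-12)]) # spectral peakedness
    return np.asarray(feats)

# Toy signals standing in for reactor measurements: label a few windows, leave the
# rest unlabeled (-1), which is how scikit-learn's self-training expects them.
rng = np.random.default_rng(0)
t = np.arange(5000)
normal = rng.normal(size=t.size)
oscillating = np.sin(0.3 * t) + rng.normal(scale=0.3, size=t.size)

X = np.vstack([window_features(normal), window_features(oscillating)])
y_true = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
y = y_true.copy()
unlabeled = rng.random(len(y)) < 0.8            # hide 80% of the labels
y[unlabeled] = -1

model = SelfTrainingClassifier(KNeighborsClassifier(n_neighbors=5), threshold=0.8)
model.fit(X, y)
print("accuracy on the windows whose labels were hidden:",
      (model.predict(X[unlabeled]) == y_true[unlabeled]).mean())
```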

    Process Data Analytics Using Deep Learning Techniques

    Get PDF
    In chemical manufacturing plants, numerous types of data are accessible: process operational data (historical or real-time), process design and product quality data, and economic and environmental data (including process safety, waste emission, and health impact). Extracting knowledge from raw data has always been a challenging task, especially when the volume of data needed for a study is huge; other characteristics of process data, such as noise, dynamics, and highly correlated process parameters, make it more challenging still. In this study, we introduce an attention-based RNN for multi-step-ahead prediction, with applications in model predictive control, fault diagnosis, and related tasks. The model consists of an RNN that encodes a sequence of input time-series data into a new representation (the context vector) and a second RNN that decodes this representation into the output target sequence. An attention model integrated into the encoder-decoder RNN allows the network to focus on the parts of the input sequence that are relevant to predicting the target sequence, and it is trained jointly with all other components of the model. The deep architecture lets the model learn very complex dynamic systems and makes it robust to noise. To show the effectiveness of the proposed approach, we perform a comparative study on the problem of catalyst activity prediction against conventional machine learning techniques such as Support Vector Regression (SVR).
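    A minimal PyTorch sketch of an attention-based encoder-decoder RNN for multi-step-ahead prediction, in the spirit of the model described above. It is not the authors' implementation; the layer sizes, prediction horizon and variable names are assumptions.

```python
import torch
import torch.nn as nn

class AttnSeq2Seq(nn.Module):
    def __init__(self, n_inputs, n_outputs, hidden=64, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(n_inputs, hidden, batch_first=True)
        self.decoder = nn.GRUCell(n_outputs + hidden, hidden)
        self.attn = nn.Linear(2 * hidden, 1)   # scores decoder state against encoder outputs
        self.out = nn.Linear(hidden, n_outputs)

    def forward(self, x, y0):
        # x: (batch, T_in, n_inputs); y0: (batch, n_outputs) last observed target value
        enc_out, h = self.encoder(x)            # enc_out: (batch, T_in, hidden)
        h = h.squeeze(0)                        # (batch, hidden)
        y, preds = y0, []
        for _ in range(self.horizon):
            # Additive attention: weight encoder outputs by relevance to the current state.
            scores = self.attn(torch.cat(
                [enc_out, h.unsqueeze(1).expand_as(enc_out)], dim=-1))   # (batch, T_in, 1)
            context = (torch.softmax(scores, dim=1) * enc_out).sum(dim=1)  # (batch, hidden)
            h = self.decoder(torch.cat([y, context], dim=-1), h)
            y = self.out(h)
            preds.append(y)
        return torch.stack(preds, dim=1)        # (batch, horizon, n_outputs)

# Tiny smoke test on random data standing in for process measurements.
model = AttnSeq2Seq(n_inputs=8, n_outputs=1, hidden=32, horizon=5)
x = torch.randn(16, 50, 8)
y0 = torch.randn(16, 1)
print(model(x, y0).shape)   # torch.Size([16, 5, 1])
```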

    Explainability: Relevance based Dynamic Deep Learning Algorithm for Fault Detection and Diagnosis in Chemical Processes

    Full text link
    The focus of this work is on Statistical Process Control (SPC) of a manufacturing process based on available measurements. Two important applications of SPC in industrial settings are fault detection and diagnosis (FDD). In this work, a deep learning (DL) based methodology is proposed for FDD. We investigate the application of an explainability concept to enhance the FDD accuracy of a deep neural network model trained on a data set with a relatively small number of samples. Explainability is quantified by a novel relevance measure of the input variables, calculated with a Layer-wise Relevance Propagation (LRP) algorithm. It is shown that the relevances can be used to iteratively discard redundant input feature vectors/variables, resulting in reduced over-fitting to noisy data, increased distinguishability between output classes, and superior FDD test accuracy. The efficacy of the proposed method is demonstrated on the benchmark Tennessee Eastman Process.
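    A simplified sketch (not the paper's code) of how LRP-style relevance scores can be computed for a small dense network and used to rank input variables for removal. The basic epsilon rule is used here; the dataset and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           n_redundant=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), activation='relu',
                    max_iter=500, random_state=0).fit(X, y)

def lrp_epsilon(net, x, eps=1e-9):
    """Epsilon-rule LRP for an MLPClassifier: relevance of each input feature."""
    # Forward pass, keeping every layer's activations (ReLU on hidden layers only).
    acts = [x]
    for i, (W, b) in enumerate(zip(net.coefs_, net.intercepts_)):
        z = acts[-1] @ W + b
        acts.append(np.maximum(z, 0) if i < len(net.coefs_) - 1 else z)
    # Backward pass: redistribute the output logit layer by layer (epsilon rule).
    R = acts[-1]
    for i in range(len(net.coefs_) - 1, -1, -1):
        W, b, a = net.coefs_[i], net.intercepts_[i], acts[i]
        z = a @ W + b + eps
        R = a * (W @ (R / z))
    return R

# Average absolute relevance over a batch ranks the input variables; the
# lowest-ranked ones are candidates for removal before retraining.
rel = np.mean([np.abs(lrp_epsilon(net, xi)) for xi in X[:200]], axis=0)
print("least relevant inputs:", np.argsort(rel)[:5])
```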

    SensorSCAN: Self-Supervised Learning and Deep Clustering for Fault Diagnosis in Chemical Processes

    Full text link
    Modern industrial facilities generate large volumes of raw sensor data during the production process. This data is used to monitor and control the processes and can be analyzed to detect and predict process abnormalities. Typically, the data has to be annotated by experts in order to be used in predictive modeling. However, manual annotation of large amounts of data can be difficult in industrial settings. In this paper, we propose SensorSCAN, a novel method for unsupervised fault detection and diagnosis, designed for industrial chemical process monitoring. We demonstrate our model's performance on two publicly available datasets of the Tennessee Eastman Process with various faults. The results show that our method significantly outperforms existing approaches (+0.2-0.3 TPR for a fixed FPR) and effectively detects most of the process faults without expert annotation. Moreover, we show that the model fine-tuned on a small fraction of labeled data nearly reaches the performance of a SOTA model trained on the full dataset. We also demonstrate that our method is suitable for real-world applications where the number of faults is not known in advance. The code is available at https://github.com/AIRI-Institute/sensorscan
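    A generic sketch of the self-supervised-representation-plus-clustering idea, not the SensorSCAN method itself: an autoencoder is pretrained on unlabeled sensor data and its bottleneck embeddings are clustered with k-means. The random data, layer sizes and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 52))            # stands in for 52 TEP sensor channels
X = StandardScaler().fit_transform(X)

# Self-supervised objective: reconstruct the input through a narrow bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(16,), activation='relu',
                  max_iter=300, random_state=0).fit(X, X)

# Embeddings = bottleneck activations, computed with the learned first layer.
Z = np.maximum(X @ ae.coefs_[0] + ae.intercepts_[0], 0)

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(clusters))               # cluster sizes = candidate fault modes
```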

    Intelligent Condition Monitoring of Industrial Plants: An Overview of Methodologies and Uncertainty Management Strategies

    Full text link
    Condition monitoring plays a significant role in the safety and reliability of modern industrial systems. Artificial intelligence (AI) approaches are gaining attention from academia and industry as a powerful way of identifying faults in industrial applications. This paper provides an overview of intelligent condition monitoring and fault detection and diagnosis methods for industrial plants, with a focus on the open-source benchmark Tennessee Eastman Process (TEP). The most popular and state-of-the-art deep learning (DL) and machine learning (ML) algorithms for industrial plant condition monitoring, fault detection, and diagnosis are summarized, and the advantages and disadvantages of each algorithm are discussed. Challenges such as imbalanced data and unlabelled samples, and how deep learning models can handle them, are also covered. Finally, the accuracies and specifications of different algorithms on the TEP are compared. This survey will be beneficial both for researchers who are new to the field and for experts, as it covers the literature on condition monitoring and state-of-the-art methods alongside the challenges and possible solutions.

    Improving the quality of a process through data-based anomaly detection

    Get PDF
    This work studies different techniques for the quality monitoring of industrial processes. The application of these data-based fault detection and diagnosis (FDD) techniques is fuelled by advances in industry and technology that allow large-scale collection of process information: the new Industry 4.0, big data, the massive use of sensors, and distributed plant control make such techniques applicable. First, for statistical process control, Principal Component Analysis (PCA) is applied, which makes it possible to detect whether an industrial process is operating normally or in a faulty or anomalous state using statistical techniques. Next, the fault is diagnosed with machine learning techniques based on neural networks; the growing rise of artificial intelligence allows the training of algorithms capable of identifying anomalous situations from acquired experience. The techniques developed are applied to the chemical plant proposed by the Tennessee Eastman Process (TEP) benchmark from the scientific literature, where the operation of the plant is simulated for 21 different fault types, with data obtained for normal operation and for each faulty state. Simulations are carried out with the different techniques for the different faults, and the results are compared. Finally, a brief study of future work that could improve on this project is presented.
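    A minimal sketch (illustrative, not the thesis code) of PCA-based statistical monitoring as described above: a PCA model is fitted on normal-operation data, and new samples are flagged when their Hotelling T² or squared prediction error (SPE) exceeds an empirical control limit. The random data and the 99th-percentile limits are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(960, 52))          # stands in for normal TEP operation
X_test = np.vstack([rng.normal(size=(100, 52)),
                    rng.normal(loc=2.0, size=(100, 52))])   # second half "faulty"

scaler = StandardScaler().fit(X_normal)
pca = PCA(n_components=0.9).fit(scaler.transform(X_normal))  # keep 90% of the variance

def t2_spe(pca, Xs):
    """Hotelling T^2 in the retained subspace and SPE (Q statistic) in the residual space."""
    scores = pca.transform(Xs)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    residual = Xs - pca.inverse_transform(scores)
    spe = np.sum(residual**2, axis=1)
    return t2, spe

# Empirical control limits from the normal-operation reference data.
t2_ref, spe_ref = t2_spe(pca, scaler.transform(X_normal))
t2_lim, spe_lim = np.percentile(t2_ref, 99), np.percentile(spe_ref, 99)

t2, spe = t2_spe(pca, scaler.transform(X_test))
alarm = (t2 > t2_lim) | (spe > spe_lim)
print("fraction of test samples flagged:", alarm.mean())
```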

    A Novel Approach to Reservoir Simulation Using Supervised Learning

    Get PDF
    Numerical reservoir simulation has been a fundamental tool in field development and planning. It is used to replicate reservoir performance and to study the effects of different field conditions under various reservoir management scenarios. Consequently, physics-based simulations have been used heavily in reservoir studies such as history matching, uncertainty quantification and production optimisation. Grid size and geological complexity have a significant influence on simulation speed, and heterogeneities such as natural or hydraulic fractures can cause convergence problems, making the simulation even more time-consuming and computationally expensive. Because such studies are computationally demanding, they are also extremely time intensive, which makes it practically impossible to follow workflows such as closed-loop reservoir management, which recommends updating the model every time new data become available. Additionally, any management scenario must be approached from a business and economic standpoint: based on the predefined objectives of the study, the user must choose the appropriate level of precision, so if less expensive techniques provide adequate results, the use of more accurate but costlier methods cannot be justified. One popular solution is to build an approximate proxy model of the required features of the reservoir, either replacing the physics-based model or combining the two. However, a proxy model built this way can only represent its corresponding reservoir, and a new proxy model must be rebuilt from scratch for any new reservoir. In terms of overall runtime, iteratively running a numerical reservoir simulation may even be faster than the entire process of building, validating and using a proxy model. Therefore, this thesis focuses on the feasibility, advantages and contribution of a complete stand-alone, deep learning-based simulator, the Deep Net Simulator (DNS), across a wide range of conventional and tight sand reservoir scenarios in 1D, 2D and 3D space. Unlike conventional proxy approaches, a large amount of data is collected from multiple reservoirs with varying configurations and complexities, resulting in a comprehensive database covering many possible reservoir features and scenarios. The hypothesis is that this approach will enable the data-driven model to capture the principles underlying reservoir modelling and to act as an excellent approximation to the equations that traditional physics-based numerical simulators solve. This objective is plausible, since deep learning has been shown to be a powerful universal function approximator, capable of estimating the physics given enough data and observations. Hence, this thesis develops a series of data-driven models with the aforementioned features for various types of reservoirs.
Initially, a workflow is designed that integrates a commercial simulator with a data extraction algorithm, enabling the generation of input-output simulation datasets. These datasets are generated, reviewed, and then used to train, validate and test the developed models. The resulting data-driven models learn and reproduce the physics governing fluid flow for a range of scenarios: a single-phase oil reservoir in one-dimensional space, a single-phase gas reservoir in two-dimensional space, a single-phase gas reservoir in three-dimensional space, and hydraulically fractured tight gas reservoirs in two-dimensional space. Each model was evaluated in terms of precision, speed and reliability: for every scenario it was compared with a commercial reservoir simulator using the mean absolute error (MAE), mean absolute percentage error (MAPE), mean relative error, mean squared error (MSE), root mean squared error (RMSE) and R-squared (R²). The developed model predicted 45%, 70% and 90% of the cases with less than 5%, 10% and 15% MAPE, respectively. Furthermore, depending on the number of cells requiring outputs, it reduced runtime by between 100% and 1.04E+08%. This thesis takes the first steps towards establishing a new AI- and deep learning-based approach to reservoir management that is cheaper, less computationally demanding and more adaptable. Such an approach may create more value through quicker decision-making and, possibly, the ability to integrate other attributes and data that are currently not used in physics-based models.
Thesis (Ph.D.) -- University of Adelaide, Australian School of Petroleum and Energy Resources, 202
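    A minimal sketch (illustrative, not the thesis code) of the error metrics used above to compare proxy predictions against a reference simulator: MAE, MAPE, MSE, RMSE and R² on paired predictions. The numerical values below are placeholders.

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

y_ref = np.array([250.0, 245.5, 240.2, 233.8, 228.1])    # reference simulator outputs
y_proxy = np.array([249.1, 246.7, 238.9, 235.0, 226.3])  # proxy-model predictions

mae = mean_absolute_error(y_ref, y_proxy)
mape = mean_absolute_percentage_error(y_ref, y_proxy) * 100   # expressed as a percentage
mse = mean_squared_error(y_ref, y_proxy)
rmse = np.sqrt(mse)
r2 = r2_score(y_ref, y_proxy)
print(f"MAE={mae:.3f}  MAPE={mape:.2f}%  MSE={mse:.3f}  RMSE={rmse:.3f}  R2={r2:.4f}")
```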