
    Active statistical process control

    PhD thesis. Most Statistical Process Control (SPC) research has focused on the development of charting techniques for process monitoring. Unfortunately, little attention has been paid to the importance of bringing the process into control automatically via these charting techniques. This thesis shows that by drawing upon concepts from Automatic Process Control (APC), it is possible to devise schemes whereby the process is monitored and automatically controlled via SPC procedures. It is shown that Partial Correlation Analysis (PCorrA) or Principal Component Analysis (PCA) can be used to determine the variables that have to be monitored and manipulated, as well as the corresponding control laws. We call this proposed procedure Active SPC, and the capabilities of the various strategies that arise are demonstrated by application to a simulated reaction process. Reactor product concentration was controlled using different manipulated input configurations, e.g. manipulating all input variables, manipulating only two input variables, and manipulating only a single input variable. The last two schemes address the case in which all input variables can be measured on-line but not all can be manipulated on-line. Different types of control charts are also tested with the new Active SPC method, e.g. the Shewhart chart with action limits, the Shewhart chart with action and warning limits for individual observations, and the Exponentially Weighted Moving Average (EWMA) control chart. The effects of calculating control limits on-line, to accommodate possible changes in process characteristics, were also studied. The results indicate that the EWMA control chart, with limits calculated using partial correlations, showed the best promise for further development. It is also shown that this particular combination can provide better performance than the common Proportional-Integral (PI) controller when manipulations incur costs. Funding: Commonwealth Scholarship Commission; British Council; Universiti Teknologi Malaysia.
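
    As an illustrative aside (not part of the thesis), the EWMA chart that the abstract identifies as most promising can be sketched in a few lines; the data, smoothing weight, and limit width below are hypothetical placeholders:

```python
# Minimal EWMA control chart sketch: z_t = lam*x_t + (1-lam)*z_{t-1},
# with time-varying limits mu +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t))).
import numpy as np

def ewma_chart(x, mu, sigma, lam=0.2, L=3.0):
    """Return the EWMA statistic and its lower/upper control limits."""
    z = np.empty(len(x))
    prev = mu
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    hw = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
    return z, mu - hw, mu + hw

# Hypothetical in-control data with a mean shift at sample 50.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.5, 1, 20)])
z, lcl, ucl = ewma_chart(x, mu=0.0, sigma=1.0)
print("out-of-control at samples:", np.where((z < lcl) | (z > ucl))[0])
```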

    Design of an event-based early warning system for process operations

    This thesis proposes a new methodology for designing an event-based warning system as an alternative to the conventional variable-based alarm system. The study initially explores the options for grouping process variables for alarm allocation. Several grouping methods are discussed and an event-based grouping procedure is detailed. Selection of the key variables for a group is performed considering the information the variables contain for distinguishing between an abnormal and a normal condition. Information theory is used to quantify the information content of a variable about an event in order to select the key variables, and cross-correlation analysis between pairs of key variables is used to identify redundant variables. A simulation study using a model of a continuous stirred-tank reactor (CSTR) demonstrates the methodology. The proposed event-based early warning system, which utilizes online measurements, is detailed in the thesis. In this approach, warnings are assigned to plant abnormal events instead of to individual variables. To assess the likelihoods of undesirable events, a Bayesian network is used; the event likelihoods are estimated in real time from online measurements. Diagnostic analysis is conducted to identify the root causes of events. By assigning warnings to events, the methodology produces a significantly lower number of warnings than a traditional variable-based alarm system. It also enables early warning of a possible event, along with efficient diagnosis of the event's root causes. Experimental testing using a level control system is presented to demonstrate the efficacy of the proposed method, and a simulation study using the CSTR model demonstrates the performance of the algorithm. Both the experimental and simulation studies have shown promising results.
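
    As a sketch of the information-theoretic key-variable selection described above (illustrative only; the data and helper function below are hypothetical, not the thesis code), candidate variables can be ranked by their mutual information with a binary abnormal/normal event label:

```python
# Score a continuous variable by its mutual information (in bits) with a
# binary event label; higher scores mark better "key variable" candidates.
import numpy as np

def mutual_information(x, event, bins=10):
    x_bin = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    mi = 0.0
    for xv in np.unique(x_bin):
        for ev in (0, 1):
            p_xy = np.mean((x_bin == xv) & (event == ev))
            p_x, p_y = np.mean(x_bin == xv), np.mean(event == ev)
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Hypothetical example: one variable shifts with the event, one does not.
rng = np.random.default_rng(1)
event = rng.integers(0, 2, 500)
informative = 2.0 * event + rng.normal(0, 1, 500)
irrelevant = rng.normal(0, 1, 500)
print(mutual_information(informative, event), mutual_information(irrelevant, event))
```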

    Application of data mining techniques of batch profiles for process understanding and improvement

    Batch processes are widely used in the chemical industry. Recently, much attention has been given to the monitoring and analysis of batch measurement data, or profiles, with an emphasis on the detection of problems. Similarly, methods to improve final product quality in batch processes have multiplied in the literature. However, an area that is virtually unexplored is the use of data mining techniques for the monitoring and analysis of batch profiles aimed at better understanding batch processes, rather than identifying problems in individual batches, in order to improve the process. The thrust of this work is to apply a systematic method to increase batch process understanding by sifting through an existing historical database of past batches, to discern directions for process improvement from the increased understanding, and to subsequently demonstrate better quality control through online recipe adjustments. A database of past batches is generated from a simulated nylon-6,6 process, with the main quality variable of interest being the number-average molecular weight. The time and measurement variability in the raw batch measurement profiles is characterized through scale parameters. These scale parameters are subjected to a standard principal component analysis (PCA) to understand the principal sources of variation present in the historical database. Directions for process improvement are discovered from the data mining study, and appropriate manipulated variables for implementing recipe adjustments are identified. Online predictions of the molecular weight are demonstrated that indicate off-target quality batches well before the end of the batch. A split-range linear molecular-weight-based controller is developed that is able to reduce the variability of the quality around its target. Further process improvement is accomplished by reducing the cycle time in addition to tightly controlling the final quality. The approach for systematically analyzing batch process data is general and can be applied to any batch system, including non-reactive systems.
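
    As a hedged sketch of the scale-parameter/PCA step described above (hypothetical data, not the nylon-6,6 study), each batch profile can be condensed to a few scale parameters that are then analyzed with a standard PCA:

```python
# Condense each batch's raw profiles to scale parameters, then run PCA to
# expose the principal sources of batch-to-batch variation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_batches, n_samples, n_vars = 60, 200, 4
profiles = rng.normal(size=(n_batches, n_samples, n_vars)).cumsum(axis=1)

# One feature row per batch: mean, standard deviation, and range per variable.
features = np.concatenate(
    [profiles.mean(axis=1), profiles.std(axis=1),
     profiles.max(axis=1) - profiles.min(axis=1)], axis=1)

pca = PCA(n_components=2)
scores = pca.fit_transform(features)      # batch-wise scores to inspect or cluster
print(pca.explained_variance_ratio_)      # dominant sources of variation
```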

    Process Monitoring and Data Mining with Chemical Process Historical Databases

    Modern chemical plants have distributed control systems (DCS) that handle normal operations and quality control. However, the DCS cannot compensate for fault events such as fouling or equipment failures. When faults occur, human operators must rapidly assess the situation, determine causes, and take corrective action, a challenging task further complicated by the sheer number of sensors. This information overload, as well as measurement noise, can hide information critical to diagnosing and fixing faults. Process monitoring algorithms can highlight key trends in data and detect faults faster, reducing or even preventing the damage that faults can cause. This research improves tools for process monitoring on different chemical processes. Previously successful monitoring methods based on statistics can fail on non-linear processes and on processes with multiple operating states. To address these challenges, we develop a process monitoring technique based on multiple self-organizing maps (MSOM) and apply it in industrial case studies including a simulated plant and a batch reactor. We also use a standard SOM to detect a novel event in a separation tower and produce contribution plots that help isolate the causes of the event. Another key challenge for any engineer designing a process monitoring system is that implementing most algorithms requires data organized into "normal" and "faulty" categories; however, data from faulty operations can be difficult to locate in databases storing months or years of operations. To assist in identifying faulty data, we apply data mining algorithms from computer science and compare how they cluster chemical process data from normal and faulty conditions. We identify several techniques that successfully duplicated the normal and faulty labels derived from expert knowledge, and introduce a process data mining software tool to make analysis simpler for practitioners. The research in this dissertation enhances chemical process monitoring tasks. MSOM-based process monitoring improves upon standard process monitoring algorithms in fault identification and diagnosis tasks, and the data mining research reduces a crucial barrier to the implementation of monitoring algorithms. The enhanced monitoring introduced can help engineers develop effective and scalable process monitoring systems to improve plant safety and reduce losses from fault events.
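
    A minimal sketch of SOM-based fault detection in the spirit described above (assuming the third-party `minisom` package; the data and threshold are hypothetical, and this is not the dissertation's MSOM code) flags samples whose distance to their best-matching unit is unusually large:

```python
# Train a SOM on normal data, then alarm on samples far from their
# best-matching unit (BMU).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, (500, 6))   # hypothetical normal-operation data
faulty = rng.normal(3, 1, (20, 6))    # shifted data mimicking a fault

som = MiniSom(8, 8, 6, sigma=1.0, learning_rate=0.5, random_seed=3)
som.train_random(normal, 2000)

def bmu_distance(som, x):
    i, j = som.winner(x)                        # grid coordinates of the BMU
    return np.linalg.norm(x - som.get_weights()[i, j])

# Alarm threshold from the normal-data distances (e.g., 99th percentile).
threshold = np.percentile([bmu_distance(som, x) for x in normal], 99)
flags = sum(bmu_distance(som, x) > threshold for x in faulty)
print(flags, "of 20 faulty samples flagged")
```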

    An investigation on automatic systems for fault diagnosis in chemical processes

    Plant safety is the most important concern of the chemical industry. Process faults can cause economic losses as well as human and environmental damage. Most operational faults are normally considered in the process design phase by applying methodologies such as Hazard and Operability Analysis (HAZOP). However, it should be expected that failures will occur in an operating plant. For this reason, it is of paramount importance that plant operators can promptly detect and diagnose such faults in order to take the appropriate corrective actions. In addition, preventive maintenance needs to be considered in order to increase plant safety. Fault diagnosis has been addressed with both analytical and data-based models, using several techniques and algorithms. However, there is not yet a general fault diagnosis framework that joins detection and diagnosis of faults, whether or not they are registered in historical records. Moreover, little effort has been devoted to automating the reported approaches and implementing them in real practice. Against this background, this thesis proposes a general framework for data-driven Fault Detection and Diagnosis (FDD), applicable and amenable to automation in any industrial scenario in order to maintain plant safety. The main requirement for constructing this system is the existence of historical process data. Promising methods imported from the machine learning field are introduced as fault diagnosis methods; these learning algorithms have proved capable of diagnosing not only the modeled faults but also novel faults. Furthermore, Risk-Based Maintenance (RBM) techniques, widely used in the petrochemical industry, are proposed for application as part of preventive maintenance in all industry sectors. The proposed FDD system, together with an appropriate preventive maintenance program, would represent a potential plant safety program to be implemented. Chapter one presents a general introduction to the thesis topic, as well as the motivation and scope. Chapter two reviews the state of the art of the related fields: fault detection and diagnosis methods found in the literature are reviewed, and a taxonomy that joins the Artificial Intelligence (AI) and Process Systems Engineering (PSE) classifications is proposed. The assessment of fault diagnosis with performance indices is also reviewed, together with the state of the art on Risk Analysis (RA) as a tool for corrective actions and on maintenance management for preventive actions. Finally, the benchmark case studies against which FDD research is commonly validated are examined in this chapter. The second part of the thesis, comprising chapters three to six, addresses the methods applied during the research work: chapter three deals with data pre-processing, chapter four with the feature processing stage, and chapter five with the diagnosis algorithms, while chapter six introduces the Risk-Based Maintenance techniques for addressing plant preventive maintenance. The third part includes chapter seven, which constitutes the core of the thesis. In this chapter the proposed general FDD system is outlined, divided into three steps: diagnosis model construction, model validation, and on-line application. This scheme includes a fault detection module and an Anomaly Detection (AD) methodology for the detection of novel faults.
Furthermore, several approaches are derived from this general scheme for continuous and batch processes. The fourth part of the thesis presents the validation of these approaches: chapter eight covers the validation in continuous processes and chapter nine the validation of the batch process approaches. Chapter ten applies the AD methodology to real batch processes at different scales: first to a laboratory-scale heat exchanger and then to a Photo-Fenton pilot plant, which corroborates its potential and success in real practice. Finally, the fifth part, comprising chapter eleven, presents the final conclusions and main contributions of the thesis, lists the scientific production achieved during the research period, and envisages prospects for further work.
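
    As a hedged illustration of the two-part scheme described above, the sketch below (generic scikit-learn tools on synthetic data, not the thesis implementation) pairs a classifier for registered faults with an anomaly detector that routes novel faults to a separate handling path:

```python
# Diagnose known fault classes, but first check whether the sample looks
# unlike anything in the historical data (a potential novel fault).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(4)
X_hist = np.vstack([rng.normal(0, 1, (300, 5)),    # normal operation
                    rng.normal(2, 1, (100, 5))])   # one registered fault
y_hist = np.array([0] * 300 + [1] * 100)           # 0 = normal, 1 = fault A

diagnoser = RandomForestClassifier(random_state=4).fit(X_hist, y_hist)
novelty = IsolationForest(random_state=4).fit(X_hist)

def assess(x):
    if novelty.predict(x.reshape(1, -1))[0] == -1:   # outside historical data
        return "novel fault: trigger anomaly-handling procedure"
    return f"diagnosed class {diagnoser.predict(x.reshape(1, -1))[0]}"

print(assess(rng.normal(0, 1, 5)))    # resembles normal operation
print(assess(rng.normal(-6, 1, 5)))   # unlike anything on record
```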

    Latent variable modeling approaches to assist the implementation of quality-by-design paradigms in pharmaceutical development and manufacturing

    With the introduction of the Quality-by-Design (QbD) initiative, the U.S. Food and Drug Administration and the other pharmaceutical regulatory agencies aimed to change the traditional approaches to pharmaceutical development and manufacturing. Pharmaceutical companies have been encouraged to use systematic and science-based tools for the design and control of their processes, in order to demonstrate a full understanding of the driving forces acting on them. From an engineering perspective, this initiative can be seen as the need to apply modeling tools in pharmaceutical development and manufacturing activities. The aim of this dissertation is to show how statistical modeling, and in particular latent variable models (LVMs), can be used to assist the practical implementation of QbD paradigms to streamline and accelerate product and process design activities in the pharmaceutical industry, and to provide better understanding and control of pharmaceutical manufacturing processes. Three main research areas are explored in which LVMs can support the practical implementation of QbD paradigms: process understanding, product and process design, and process monitoring and control. General methodologies are proposed to guide the use of LVMs in different applications, and their effectiveness is demonstrated on industrial, laboratory, and simulated case studies. With respect to process understanding, a general methodology for the use of LVMs is proposed to aid the development of continuous manufacturing systems. The methodology is tested on an industrial process for the continuous manufacturing of tablets. It is shown how LVMs can jointly model data relating to different raw materials and different units in the production line, making it possible to identify the most important driving forces in each unit and the most critical units in the line. Results demonstrate how raw materials and process parameters affect intermediate and final product quality, enabling the identification of paths along which the process moves depending on its settings. This provides a tool to assist quality risk assessment activities and to develop the control strategy for the process. In the area of product and process design, a general framework is proposed for the use of LVM inversion to support the development of new products and processes. The objective of model inversion is to estimate the best set of inputs (e.g., raw material properties, process parameters) that ensures a desired set of outputs (e.g., product quality attributes). Since the inversion of an LVM may have infinitely many solutions, generating the so-called null space, an optimization framework that allows the most suitable objectives and constraints to be assigned is used to select the optimal solution. The effectiveness of the framework is demonstrated on an industrial particle engineering problem: designing the raw material properties needed to produce granules with desired characteristics from a high-shear wet granulation process. Results show how the framework can be used to design experiments for new product development. The analogy between the null space and the regulatory agencies' definition of design space is also demonstrated, and a strategy to estimate the uncertainties in the design and in the null space determination is provided.
The proposed framework for LVM inversion is also applied to assist the design of the formulation for a new product, namely the selection of the best excipient type and amount to mix with a given active pharmaceutical ingredient (API) to obtain a blend of desired properties. The optimization framework is extended to include constraints on the material selection, the API dose, and the final tablet weight. A user-friendly interface is developed to aid formulators in specifying the constraints and objectives of the problem. Experiments performed industrially on the formulation designed in silico confirm that model predictions are in good agreement with the experimental values. LVM inversion is also shown to be useful for product transfer problems, namely transferring the manufacturing of a product from a source plant, where most of the experimentation has been carried out, to a target plant that may differ in size, layout, or the units involved. An experimental process for pharmaceutical nanoparticle production is used as a test bed. An LVM built on data from different plants is inverted to estimate the most suitable process conditions in the target plant for producing nanoparticles of a desired mean size. Experiments designed on the basis of the proposed LVM inversion procedure demonstrate that the desired nanoparticle sizes are obtained, within experimental uncertainty. Furthermore, the null space concept is validated experimentally. Finally, with respect to process monitoring and control, the problem of transferring monitoring models between different plants is studied. The objective is to monitor a process in a target plant where production is being started (e.g., a production plant) by exploiting the data available from a source plant (e.g., a pilot plant). A general framework is proposed for using LVMs to solve this problem. Several scenarios are identified on the basis of the available information, the source of the data, and the type of variables to include in the model. Data from the different plants are related through subsets of variables (common variables) measured in both plants, or through plant-independent variables obtained from conservation balances (e.g., dimensionless numbers). The framework is applied to define the process monitoring model for an industrial large-scale spray-drying process, using data available from a pilot-scale process. The effectiveness of the transfer is evaluated in terms of monitoring performance in the detection of a real fault occurring in the target process. The proposed methodologies are then extended to batch systems, considering a simulated penicillin fermentation process. In both cases, results demonstrate that the transfer of knowledge from the source plant enables better monitoring performance than using only the data available from the target plant.
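
    A simplified sketch of the inversion idea and its null space (a toy two-component latent model on synthetic data, not the dissertation's framework) is shown below; moving along the direction orthogonal to the score-to-quality map changes the input design without changing the predicted quality:

```python
# Invert a 2-component latent model: find input designs whose predicted
# quality equals a target, and walk the null space of equivalent solutions.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(80, 6))            # hypothetical input data
y = X @ rng.normal(size=6)              # one quality attribute

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T                            # loadings (6 x 2)
T = Xc @ P                              # scores (80 x 2)
q, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)   # y ~ T q

y_target = 1.5
t_min = q * (y_target - y.mean()) / (q @ q)            # minimum-norm inversion
null_dir = np.array([-q[1], q[0]]) / np.linalg.norm(q) # orthogonal to q
for alpha in (-1.0, 0.0, 1.0):          # three designs, identical predicted y
    t = t_min + alpha * null_dir
    x_design = X.mean(axis=0) + P @ t
    print(f"alpha={alpha:+.1f}  inputs={np.round(x_design[:3], 2)}  "
          f"predicted y={q @ t + y.mean():.2f}")
```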

    Deep Recurrent Neural Networks for Fault Detection and Classification

    Deep learning is one of the fastest growing research topics in process systems engineering, owing to the ability of deep learning models to represent and predict non-linear behavior in many applications. However, the application of these models in chemical engineering is still in its infancy. Thus, a key goal of this work is to assess the capabilities of deep-learning-based models in chemical engineering applications, with a specific focus on the detection and classification of faults in a large industrial plant involving several chemical unit operations. Towards this goal, we compare the efficacy of a deep-learning-based algorithm against state-of-the-art multivariate statistical techniques for fault detection and classification. The comparison is conducted using simulated data from a chemical benchmark case study often used to test fault detection algorithms, the Tennessee Eastman Process (TEP). A real-time online scheme is proposed that enhances the detection and classification of all the faults occurring in the simulation. This is accomplished by formulating a fault-detection model capable of describing the dynamic nonlinear relationships among the output and manipulated variables that can be measured in the TEP during the occurrence of faults or in their absence. In particular, we focus on specific faults that cannot be correctly detected and classified by traditional statistical methods or by simpler Artificial Neural Networks (ANNs). To increase the detectability of these faults, a deep Recurrent Neural Network (RNN) is trained that uses dynamic information of the process along a pre-specified time horizon. We first studied the effect of the number of samples fed into the RNN in order to capture more of the dynamic information of the faults, and showed that accuracy increases with this number: average classification rates were 79.8%, 80.3%, 81%, and 84% for the RNN with 5, 15, 25, and 100 samples, respectively. To increase the classification accuracy of difficult-to-observe faults, we developed a hierarchical structure in which faults are grouped into subsets and classified with separate models for each subset. In addition, to improve the classification of faults whose responses have a low signal-to-noise ratio, excitation was added to the process through a pseudo-random signal (PRS). Applying the hierarchical structure increases the signal-to-noise ratio of faults 3 and 9, which translates into an improvement in classification accuracy of 43.0% and 17.2%, respectively, for the case of 100 samples, and of 8.7% and 23.4% for 25 samples. Applying a PRS to excite the system produced a dramatic increase in the classification rate of the normal state, to 88.7%, and of fault 15, up to 76.4%. The proposed method is therefore able to considerably improve both the detection and classification accuracy of several observable faults, as well as of faults considered unobservable by other detection algorithms. Overall, the comparison with Dynamic PCA (Principal Component Analysis) techniques showed a clear superiority of the deep learning techniques in classifying faults in nonlinear dynamic processes. Finally, we apply these same techniques to different operational modes of the TEP simulation, achieving comparable improvements in classification accuracy.
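
    As a hedged sketch of the windowed recurrent classifier described above (a generic Keras model on synthetic data; the layer sizes, window length, and data are hypothetical, not the thesis architecture or the TEP dataset):

```python
# A stacked ("deep") recurrent classifier that consumes a window of past
# process samples and predicts a fault class for that window.
import numpy as np
import tensorflow as tf

window, n_vars, n_classes = 25, 10, 4
rng = np.random.default_rng(6)
X = rng.normal(size=(1000, window, n_vars)).astype("float32")
y = rng.integers(0, n_classes, 1000)    # hypothetical fault label per window

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Online use: classify the most recent window as new measurements arrive.
probs = model.predict(X[:1], verbose=0)
print("predicted fault class:", int(probs.argmax()))
```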