
    Model Predictive Controller for the Tennessee Eastman Process

    This thesis aims to design a multivariable Model Predictive Control (MPC) scheme for a complex industrial process. The focus of the thesis is on the implementation and testing of a linear MPC control strategy combined with fault detection and diagnosis methods. The studied control methodology is based on a linear time-invariant state-space model and a quadratic programming optimization procedure. The control scheme is realized as a supervisory layer: the MPC calculates optimal setpoint trajectories for the lower-level PI controllers, aiming to decrease fluctuations in the end-product flows. The Tennessee Eastman (TE) process is used as the testing environment. The TE process is a benchmark based on a real industrial process that has been modified for testing purposes. It has five units, four reactants, an inert component, two products, and a byproduct. The control objective is to maintain the production rate and the product quality at the desired level. To achieve this, the MPC implemented in this thesis provides setpoints to three stabilizing PI control loops around the reactor and the product stripper. The performance of the designed control system is evaluated by inducing process disturbances, setpoint changes, and faults for two operational regimes. The obtained results show the efficiency of the adopted approach in handling disturbances and its flexibility in controlling different operational regimes without the need for retuning. To suppress the effects caused by faults, an additional level that provides fault detection and controller reconfiguration should be developed in further research.
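    In the unconstrained case, the linear MPC problem described above reduces to a quadratic program with a closed-form solution. The sketch below is illustrative only, not the controller from the thesis; the weights q and r, the horizon, and all names are assumptions. It condenses the LTI state-space prediction into a single QP and returns the optimal input sequence that a supervisory MPC would hand down as setpoints for the lower-level loops:

```python
import numpy as np

def mpc_setpoint_trajectory(A, B, x0, x_ref, N, q=1.0, r=0.1):
    """Unconstrained linear MPC over horizon N for x[k+1] = A x[k] + B u[k].
    The dynamics are condensed into X = F x0 + G U, after which the QP
    min ||X - Xref||_Q^2 + ||U||_R^2 has a closed-form solution
    (no active constraints assumed in this sketch)."""
    n, m = B.shape
    # Prediction matrices: stacked powers of A and the block-Toeplitz G
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar, Rbar = q * np.eye(N * n), r * np.eye(N * m)
    x_ref_stacked = np.tile(x_ref, N)
    H = G.T @ Qbar @ G + Rbar                # QP Hessian
    f = G.T @ Qbar @ (x_ref_stacked - F @ x0)
    U = np.linalg.solve(H, f)                # optimal input sequence
    return U.reshape(N, m)
```

With input or state constraints, the same Hessian H and linear term f would instead be passed to a QP solver.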

    Nonlinear data driven techniques for process monitoring

    The goal of this research is to develop process monitoring technology capable of taking advantage of the large stores of data accumulating in modern chemical plants. There is demand for new techniques for monitoring nonlinear topology and behavior, and this research presents a topology-preserving method for process monitoring using Self-Organizing Maps (SOM). The novel architecture presented adapts SOM to a full spectrum of process monitoring tasks, including fault detection, fault identification, fault diagnosis, and soft sensing. The key innovation of the new technique is its use of multiple SOM (MSOM) in the data modeling process, as well as the use of a Gaussian Mixture Model (GMM) to model the probability density function of classes of data. For comparison, a linear process monitoring technique based on Principal Component Analysis (PCA) is also used to demonstrate the improvements SOM offers. Data for the computational experiments was generated using a simulation of the Tennessee Eastman process (TEP) created in Simulink by Ricker (1996). Previous studies focus on step changes from normal operations, but this work adds operating regimes with time-dependent dynamics not previously considered with a SOM. Results show that MSOM improves upon both linear PCA and the standard single-map SOM technique for fault diagnosis, and also shows a superior ability to isolate which variables in the data are responsible for the faulty condition. With respect to soft sensing, SOM and MSOM modeled the compositions equally well, showing that no information was lost in dividing the map representation of process data. Future research will attempt to validate the technique on a real chemical process.
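    A toy version of the map-per-class idea can make the MSOM architecture concrete. The sketch below is not the dissertation's implementation: the 1-D map, the decay schedules, and the nearest-map classification rule are simplifying assumptions (the actual work additionally layers a GMM over the maps):

```python
import numpy as np

def train_som(data, n_units=8, epochs=40, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D self-organizing map: the best-matching unit (BMU) and
    its grid neighbours are pulled toward each training sample, with the
    learning rate and neighbourhood width decaying over epochs."""
    rng = np.random.default_rng(seed)
    W = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    grid = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 0.5
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((W - x) ** 2).sum(1))
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

def msom_classify(x, maps):
    """MSOM idea in miniature: one map per class of data; a sample is
    assigned to the class whose map contains the nearest unit."""
    dists = [np.min(((W - x) ** 2).sum(1)) for W in maps]
    return int(np.argmin(dists))
```

Training one map per operating condition is what lets MSOM separate classes that a single map would have to interpolate between.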

    Simple and efficient moving horizon estimation based on the fast gradient method

    By now, many results exist on the fast and efficient implementation of model predictive control; for moving horizon estimation, however, only a few results are available. We present a simple solution algorithm tailored to moving horizon estimation of linear, discrete-time systems. In the first step, the problem is reformulated so that only the states remain as optimization variables, i.e. process and measurement noise are eliminated from the optimization problem. This reformulation enables the use of the fast gradient method, which has recently received a lot of attention for the solution of model predictive control problems. In contrast to the model predictive control case, the Hessian matrix in moving horizon estimation is time-varying, due to the time-varying nature of the arrival cost. Therefore, we outline a tailored method to compute online the lower and upper eigenvalues of the Hessian matrix required by the fast gradient method considered here. In addition, we discuss stopping criteria and various implementation details. An example illustrates the efficiency of the proposed algorithm.
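    The core of such a scheme is Nesterov's fast gradient method applied to a strongly convex quadratic. The following sketch is a generic illustration, not the paper's algorithm: the extreme eigenvalues are computed directly here, whereas the paper tracks them online as the arrival cost changes, and constraints are ignored.

```python
import numpy as np

def fast_gradient_qp(H, f, x0, iters=300):
    """Nesterov's fast gradient method for the strongly convex quadratic
    min_x 0.5 x'Hx - f'x. The step size 1/L and the momentum factor come
    from the extreme eigenvalues of H, which in moving horizon estimation
    vary over time and would be bounded online."""
    eigs = np.linalg.eigvalsh(H)
    mu, L = eigs[0], eigs[-1]
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = y = x0.astype(float)
    for _ in range(iters):
        x_next = y - (H @ y - f) / L       # gradient step from the extrapolated point
        y = x_next + beta * (x_next - x)   # momentum (extrapolation) step
        x = x_next
    return x
```

Because each iteration is a matrix-vector product plus vector updates, the method suits embedded estimation where factorizing a time-varying Hessian every sample would be too costly.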

    Advanced and novel modeling techniques for simulation, optimization and monitoring chemical engineering tasks with refinery and petrochemical unit applications

    Engineers predict, optimize, and monitor processes to improve safety and profitability. Models automate these tasks and determine precise solutions. This research studies and applies advanced and novel modeling techniques to automate and aid engineering decision-making. Advancements in computational ability have improved modeling software's ability to mimic industrial problems. Simulations are increasingly used to explore new operating regimes and design new processes. In this work, we present a methodology for creating structured mathematical models, useful tips to simplify models, and a novel repair method that improves convergence by populating quality initial conditions for the simulation's solver. A crude oil refinery application is presented, including the simulation, simplification tips, and the implementation of the repair strategy. A crude oil scheduling problem is also presented, which can be integrated with production unit models. Recently, stochastic global optimization (SGO) has shown success in finding global optima for complex nonlinear processes. When performing SGO on simulations, model convergence can become an issue. The computational load can be decreased by 1) simplifying the model and 2) finding a synergy between the model solver's repair strategy and the optimization routine by using the formulated initial conditions as points to perturb the neighborhood being searched. Here, a simplifying technique for merging the crude oil scheduling problem with the vertically integrated online refinery production optimization is demonstrated. To optimize refinery production, a stochastic global optimization technique is employed. Process monitoring has been vastly enhanced through the data-driven modeling technique Principal Component Analysis. As opposed to first-principles models, which make assumptions about the structure of the model describing the process, data-driven techniques make no assumptions about the underlying relationships. 
    Data-driven techniques search for a projection that maps the data into a space that is easier to analyze. Feature extraction techniques, most commonly dimensionality reduction techniques, have been explored extensively to better capture nonlinear relationships. These techniques can extend data-driven process monitoring to nonlinear processes. Here, we employ a novel nonlinear process-monitoring scheme that utilizes Self-Organizing Maps. The novel techniques and implementation methodology are applied to a publicly studied Tennessee Eastman Process and an industrial polymerization unit.
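    As a concrete reference point for the linear, data-driven baseline mentioned above, the following sketch implements PCA-based monitoring with the squared prediction error (SPE) statistic. All names and parameters are illustrative; practical schemes also compute the T² statistic and derive control limits from the training data:

```python
import numpy as np

def pca_spe_monitor(X_normal, X_new, n_comp=2):
    """PCA monitoring sketch: fit principal components on normal operating
    data, then score new samples by the squared prediction error (SPE).
    Samples that leave the PCA subspace receive a large SPE."""
    mu, sd = X_normal.mean(0), X_normal.std(0)
    Z = (X_normal - mu) / sd                 # standardize on normal data
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                        # retained loading vectors
    Z_new = (X_new - mu) / sd
    resid = Z_new - Z_new @ P @ P.T          # part outside the PCA subspace
    return (resid ** 2).sum(axis=1)          # SPE per sample
```

The SPE grows when the correlation structure among variables breaks, which is exactly the symptom a linear monitor can catch and a nonlinear process can hide.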

    Process Monitoring and Data Mining with Chemical Process Historical Databases

    Modern chemical plants have distributed control systems (DCS) that handle normal operations and quality control. However, the DCS cannot compensate for fault events such as fouling or equipment failures. When faults occur, human operators must rapidly assess the situation, determine causes, and take corrective action, a challenging task further complicated by the sheer number of sensors. This information overload, as well as measurement noise, can hide information critical to diagnosing and fixing faults. Process monitoring algorithms can highlight key trends in data and detect faults faster, reducing or even preventing the damage that faults can cause. This research improves tools for process monitoring across different chemical processes. Previously successful monitoring methods based on statistics can fail on nonlinear processes and processes with multiple operating states. To address these challenges, we develop a process monitoring technique based on multiple self-organizing maps (MSOM) and apply it in industrial case studies including a simulated plant and a batch reactor. We also use a standard SOM to detect a novel event in a separation tower and produce contribution plots that help isolate the causes of the event. Another key challenge for any engineer designing a process monitoring system is that implementing most algorithms requires data organized into “normal” and “faulty” classes; however, data from faulty operations can be difficult to locate in databases storing months or years of operations. To assist in identifying faulty data, we apply data mining algorithms from computer science and compare how they cluster chemical process data from normal and faulty conditions. We identify several techniques that successfully reproduced the normal and faulty labels obtained from expert knowledge, and we introduce a process data mining software tool to make analysis simpler for practitioners. The research in this dissertation enhances chemical process monitoring tasks. 
MSOM-based process monitoring improves upon standard process monitoring algorithms in fault identification and diagnosis tasks. The data mining research reduces a crucial barrier to the implementation of monitoring algorithms. The enhanced monitoring introduced can help engineers develop effective and scalable process monitoring systems to improve plant safety and reduce losses from fault events
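    As one concrete example of the kind of clustering algorithm that can separate normal from faulty operating data without labels, the sketch below implements plain k-means with a farthest-point initialisation. It is an illustrative stand-in, not one of the specific algorithms evaluated in the dissertation:

```python
import numpy as np

def kmeans(X, k=2, iters=100, seed=0):
    """Plain k-means: alternate between assigning each sample to its
    nearest centroid and recomputing centroids, until labels stabilise.
    Centroids are seeded with a farthest-point rule for robustness."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    while len(centers) < k:
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])      # farthest-point initialisation
    centers = np.array(centers, dtype=float)
    labels = None
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        new_labels = d.argmin(1)
        if labels is not None and (new_labels == labels).all():
            break                             # assignments converged
        labels = new_labels
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers
```

Run on archived process data, clusters that coincide with known fault periods are candidates for the "faulty" training set a supervised monitor needs.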

    Data driven methods for updating fault detection and diagnosis system in chemical processes

    Modern industrial processes are becoming more complex, and consequently monitoring them has become a challenging task. Fault Detection and Diagnosis (FDD), as a key element of process monitoring, needs to be investigated because of its essential role in decision-making processes. Among available FDD methods, data-driven approaches are currently receiving increasing attention because of their relative simplicity of implementation. Regardless of FDD type, one of the main traits of a reliable FDD system is its ability to be updated when new conditions, not considered during its initial training, appear in the process. These new conditions may emerge either gradually or abruptly, but they are equally important, as in both cases they lead to poor FDD performance. Some methods have been proposed to address these updating tasks, though mostly outside the research area of chemical engineering. They can be categorized into those dedicated to managing Concept Drift (CD), which appears gradually, and those that deal with novel classes, which appear abruptly. In addition to lacking clear updating strategies, the available methods reportedly suffer from performance weaknesses and inefficient training times. Accordingly, this thesis is mainly dedicated to data-driven FDD updating in chemical processes. The proposed schemes for handling novel classes of faults are based on unsupervised methods, while for coping with CD both supervised and unsupervised updating frameworks have been investigated. Furthermore, to enhance the functionality of FDD systems, some major data processing methods, including imputation of missing values, feature selection, and feature extension, have been investigated. The suggested algorithms and frameworks for FDD updating have been evaluated through different benchmarks and scenarios. 
    As part of the results, the suggested algorithms for supervised handling of CD surpass the performance of traditional incremental learning by up to 50% in terms of the MGM score (a dimensionless score defined from the weighted F1 score and training time). This improvement is achieved by the proposed algorithms, which detect and forget redundant information and properly adjust the data window for timely updating and retraining of the fault detection system. Moreover, the proposed unsupervised FDD updating framework for dealing with novel faults in static and dynamic process conditions achieves up to 90% in terms of the NPP score (a dimensionless score defined from the number of correctly predicted class samples). This result relies on an innovative framework that is able to assign samples either to new classes or to available classes by exploiting one-class classification techniques and clustering approaches.
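    The decision at the heart of the unsupervised updating framework, assigning a sample to a known class or flagging it as novel, can be sketched as a simple distance rule. This is a deliberately reduced illustration: the thesis exploits one-class classifiers and clustering, not the fixed centroid-distance threshold assumed here.

```python
import numpy as np

def assign_or_flag_novel(x, class_means, threshold):
    """Assign a sample to the nearest known fault class, unless it lies
    farther than `threshold` from every class centre, in which case it is
    flagged as a candidate novel class that should trigger an FDD update.
    In practice the acceptance region per class would come from a fitted
    one-class classifier rather than this plain distance cutoff."""
    dists = np.linalg.norm(class_means - x, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] > threshold:
        return -1          # candidate novel class -> update the FDD system
    return nearest
```

Samples flagged as novel would be accumulated and clustered; once a cluster is stable, it becomes a new class and the detector is retrained.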

    Control loop measurement based isolation of faults and disturbances in process plants

    This thesis focuses on the development of data-driven automated techniques to enhance performance assessment methods. These techniques include process control loop status monitoring, fault localisation in a number of interacting control loops, and the detection and isolation of multiple oscillations in a multi-loop situation. They make use not only of controlled variables, but also of controller outputs, indicator readings, set-points and controller settings. The idea behind loop status is that knowledge of the current behaviour of a loop is important when assessing MVC-based performance, because of the assumptions that are made in the assessment. Current behaviour is defined in terms of the kind of deterministic trend present in the loop at the time of assessment. When the status is other than steady, MVC-based approaches are inappropriate: either the assessment must be delayed until steady conditions are attained, or other methods must be applied. In such cases, knowledge of the current behaviour can help identify the possible cause. One way of doing this is to derive another statistic, the overall loop performance index (OLPI), from loop status. The thesis describes a novel fault localisation technique, which analyses this statistic to find the source of a plant-wide disturbance when a number of interacting control loops are perturbed by a single dominant disturbance or fault. Although this technique can isolate a single dominant oscillation, it is not able to isolate the sources of multiple dominant oscillations. To do this, a novel technique is proposed based on the application of spectral independent component analysis (spectral ICA), which analyses spectra derived via a discrete Fourier transform from time-domain process data. 
    The analysis is able to extract dominant spectrum-like independent components, each of which has a narrow-band peak that captures the behaviour of one of the oscillation sources. It is shown that the extraction of independent components with single spectral peaks can be guaranteed by an ICA algorithm that maximises the kurtosis of the independent components (ICs). This is a significant advantage over spectral principal component analysis (PCA), because multiple spectral peaks may be present in the extracted principal components (PCs), making the detection and isolation of oscillation disturbances based on spectral PCs less straightforward to interpret. The novel spectral ICA method is applied to a simulated data set and to real plant data obtained from an industrial chemical plant. Results demonstrate its ability to detect and isolate multiple dominant oscillations in different frequency ranges.
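    A compact sketch of the spectral ICA idea follows: amplitude spectra are taken with a discrete Fourier transform, whitened, and separated with a kurtosis-maximising FastICA-style deflation, so that each independent component ideally keeps a single narrow-band peak. This is an illustrative reimplementation, not the thesis code; the whitening dimension, iteration counts, and fixed-point update are assumptions.

```python
import numpy as np

def spectral_ica(signals, n_comp=2, iters=200, seed=0):
    """Spectral ICA sketch: rows of `signals` are loop measurements.
    Amplitude spectra are whitened down to n_comp dimensions, then a
    kurtosis-based fixed-point iteration with deflation extracts
    independent components in the spectral domain."""
    spectra = np.abs(np.fft.rfft(signals, axis=1))
    X = spectra - spectra.mean(axis=1, keepdims=True)
    # Whitening via the eigendecomposition of the spectra covariance
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    top = np.argsort(d)[::-1][:n_comp]
    Z = np.diag(1.0 / np.sqrt(d[top])) @ E[:, top].T @ X
    rng = np.random.default_rng(seed)
    W = np.zeros((n_comp, n_comp))
    for i in range(n_comp):
        w = rng.standard_normal(n_comp)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            # Kurtosis-maximising fixed point: E[z (w'z)^3] - 3w
            w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflate earlier components
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return W @ Z   # independent components in the spectral domain
```

On two sinusoidal disturbances mixed into several measurements, each extracted component concentrates on one frequency bin, which is the single-peak property the thesis exploits for isolation.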

    Fault detection and root cause diagnosis using dynamic Bayesian network

    This thesis presents two real-time process fault detection and diagnosis (FDD) techniques incorporating process data and prior knowledge. Unlike supervised monitoring techniques, both of these methods can perform without any prior information about a fault. In the first part of this research, a hybrid methodology is developed combining principal component analysis (PCA), a Bayesian network (BN) and multiple uncertain (likelihood) evidence to improve the diagnostic capacity of PCA and of existing PCA-BN schemes with hard-evidence-based updating. A dynamic BN (DBN) based FDD methodology is proposed in the later part of this work, which provides detection and accurate diagnosis within a single tool. Furthermore, the fault propagation pathway is analyzed using the predictive feature of a BN and the cause-effect relationships among the process variables. The proposed frameworks are successfully validated by applying them to several process models.
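    The uncertain (likelihood) evidence update at the core of the first methodology can be illustrated for a single fault node: instead of clamping the node to a hard state, the detector supplies a likelihood over fault classes and the posterior follows Bayes' rule. The three-class setup below is invented for illustration and is far simpler than a full Bayesian network:

```python
import numpy as np

def soft_evidence_update(prior, likelihood):
    """Single-node sketch of updating with uncertain (likelihood) evidence:
    a PCA-based detector reports how likely each fault class made the
    observation, and the posterior is prior * likelihood, renormalised.
    Hard evidence is the special case of a one-hot likelihood."""
    posterior = np.asarray(prior, float) * np.asarray(likelihood, float)
    return posterior / posterior.sum()
```

Because the update is multiplicative, successive detector readings can be folded in one at a time, feeding the posterior of one step back in as the next prior.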