
    Fault Isolation in MIMO Systems based on Active Decoupling


    Set-based state estimation and fault diagnosis using constrained zonotopes and applications

    This doctoral thesis develops new methods for set-based state estimation and active fault diagnosis (AFD) of (i) nonlinear discrete-time systems, (ii) discrete-time nonlinear systems whose trajectories satisfy nonlinear equality constraints (called invariants), (iii) linear descriptor systems, and (iv) joint state and parameter estimation of nonlinear descriptor systems. Set-based estimation aims to compute tight enclosures of the possible system states at each time step subject to unknown-but-bounded uncertainties. To this end, the thesis proposes new methods for efficiently propagating constrained zonotopes (CZs) through nonlinear mappings. It also improves the standard prediction-update framework for systems with invariants through new algorithms for refining CZs based on nonlinear constraints. In addition, the thesis introduces a new approach for set-based AFD of a class of nonlinear discrete-time systems, in which an affine parametrization of the reachable sets is obtained for the design of an optimal input for set-based AFD. It further presents new CZ-based methods for set-valued state estimation and AFD of linear descriptor systems, in which linear static constraints on the state variables can be directly incorporated into CZs. Moreover, the thesis proposes a new zonotope-based representation for unbounded sets, which makes it possible to develop state estimation and AFD methods for unstable linear descriptor systems as well, without requiring a known enclosure of all the trajectories of the system. A new method for set-based joint state and parameter estimation of nonlinear descriptor systems using CZs is also developed in a unified framework. Lastly, the proposed set-based state estimation and AFD methods using CZs are applied to unmanned aerial vehicles, water distribution networks, and a lithium-ion cell. Comment: My PhD thesis from the Federal University of Minas Gerais, Brazil.
Most of the research work has already been published in DOIs 10.1109/CDC.2018.8618678, 10.23919/ECC.2018.8550353, 10.1016/j.automatica.2019.108614, 10.1016/j.ifacol.2020.12.2484, 10.1016/j.ifacol.2021.08.308, 10.1016/j.automatica.2021.109638, 10.1109/TCST.2021.3130534, 10.1016/j.automatica.2022.11042
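The abstract above builds on a small set of closed-form operations on constrained zonotopes. As a minimal sketch (not the thesis implementation; the names c, G, A, b follow the common CZ notation {c + G ξ : ‖ξ‖∞ ≤ 1, A ξ = b}), linear maps and Minkowski sums of CZs can be computed exactly:

```python
import numpy as np

# A constrained zonotope {c + G @ xi : |xi|_inf <= 1, A @ xi = b}.
# Illustrative sketch only, not the implementation from the thesis.
class ConstrainedZonotope:
    def __init__(self, c, G, A=None, b=None):
        self.c = np.asarray(c, dtype=float)
        self.G = np.asarray(G, dtype=float)
        ng = self.G.shape[1]
        self.A = np.zeros((0, ng)) if A is None else np.asarray(A, dtype=float)
        self.b = np.zeros(0) if b is None else np.asarray(b, dtype=float)

    def linear_map(self, R):
        # R @ Z = {R c + R G xi : A xi = b} -- exact, no over-approximation.
        R = np.asarray(R, dtype=float)
        return ConstrainedZonotope(R @ self.c, R @ self.G, self.A, self.b)

    def minkowski_sum(self, other):
        # Z (+) W: centers add, generators concatenate, and the constraint
        # matrices combine block-diagonally so the factor sets stay independent.
        c = self.c + other.c
        G = np.hstack([self.G, other.G])
        A = np.vstack([
            np.hstack([self.A, np.zeros((self.A.shape[0], other.G.shape[1]))]),
            np.hstack([np.zeros((other.A.shape[0], self.G.shape[1])), other.A]),
        ])
        b = np.concatenate([self.b, other.b])
        return ConstrainedZonotope(c, G, A, b)
```

Nonlinear mappings, the focus of the thesis, additionally require enclosing a nonlinear remainder; the exact linear-map and sum operations above are the building blocks such enclosures reduce to.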

    Application of Deep Learning in Chemical Processes: Explainability, Monitoring and Observability

    The last decade has seen remarkable advances in speech, image, and language recognition tools that have been made available to the public through computer and mobile device applications. Most of these significant improvements were achieved by Artificial Intelligence (AI)/deep learning (DL) algorithms (Hinton et al., 2006), a term that generally refers to a set of novel neural network architectures and algorithms such as long short-term memory (LSTM) units, convolutional neural networks (CNNs), autoencoders (AEs), t-distributed stochastic neighbor embedding (t-SNE), etc. Although neural networks are not new, thanks to a combination of relatively recent improvements in training methods and the availability of increasingly powerful computers, one can now model much more complex nonlinear dynamic behaviour by using deeper structures of neurons, i.e. more layers, than ever before (Goodfellow et al., 2016). However, it is recognized that training neural nets of such complex structure requires a vast amount of data. In this sense, manufacturing processes are good candidates for deep learning applications, since they use computers and information systems for monitoring and control and thus generate massive amounts of data. This is especially true in pharmaceutical companies such as Sanofi Pasteur, the industrial collaborator for the current study, where large data sets are routinely stored for monitoring and regulatory purposes. Although novel DL algorithms have been applied with great success in image analysis, speech recognition, and language translation, their applications to chemical and, in particular, pharmaceutical processes are scarce. The current work investigates deep learning in process systems engineering for three main areas of application: (i) developing a deep learning classification model for profit-based operating regions; (ii) developing both supervised and unsupervised process monitoring algorithms;
(iii) observability analysis. It is recognized that most empirical or black-box models, including DL models, have good generalization capabilities but are difficult to interpret. For example, with these methods it is difficult to understand how a particular decision is made, or which input variable/feature most influences the decision made by the DL model. Such understanding is expected to shed light on why biased results are obtained, or why a wrong class is predicted with high probability in classification problems. Hence, a key goal of the current work is to derive process insights from DL models. To this end, the work proposes both supervised and unsupervised learning approaches to identify regions of process inputs that result in corresponding regions, i.e. ranges of values, of process profit. Furthermore, it is shown that the ability to better interpret the model by identifying the most informative inputs can be used to reduce over-fitting. To this end, a neural network (NN) pruning algorithm is developed that provides important physical insights into the system, regarding which inputs have positive and negative effects on the profit function, and that detects significant changes in process phenomena. It is shown that pruning of input variables significantly reduces the number of parameters to be estimated and improves the classification test accuracy for both case studies: the Tennessee Eastman Process (TEP) and an industrial vaccine manufacturing process. The ability to store large amounts of data has permitted the use of deep learning (DL) and optimization algorithms in the process industries. In order to meet high levels of product quality, efficiency, and reliability, a process monitoring system is needed. The two aspects of Statistical Process Control (SPC) are fault detection and diagnosis (FDD). Many multivariate statistical methods, such as PCA and PLS and their dynamic variants, have been extensively used for fault detection.
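The input-pruning idea described above can be illustrated with a simple magnitude-based heuristic. This is only a sketch of the general principle (rank inputs by their first-layer weight norms and retrain on the survivors), not the pruning algorithm developed in the work; all names here are hypothetical:

```python
import numpy as np

# Hypothetical magnitude-based input pruning: score each input by the
# L2 norm of its column in the first-layer weight matrix and keep only
# the strongest inputs.
def rank_inputs(W1):
    """W1: first-layer weights, shape (n_hidden, n_inputs)."""
    return np.linalg.norm(W1, axis=0)  # one saliency score per input

def prune_inputs(X, W1, keep):
    """Drop the least salient columns of the data matrix X (n_samples, n_inputs)."""
    scores = rank_inputs(W1)
    kept = np.sort(np.argsort(scores)[::-1][:keep])  # indices of surviving inputs
    return X[:, kept], kept
```

After pruning, the network is retrained on the reduced input set; the sign and size of the surviving first-layer weights then indicate which inputs push the profit classification up or down.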
However, the inherent non-linearities in the process pose challenges when using these linear models. Numerous deep learning FDD approaches have also been developed in the literature; however, contribution plots for identifying the root cause of a fault have not been derived from Deep Neural Networks (DNNs). To this end, the supervised fault detection problem in the current work is formulated as a binary classification problem, while the supervised fault diagnosis problem is formulated as a multi-class classification problem to identify the type of fault. The concept of explainability of DNNs is then explored, with particular application to the FDD problem. The developed methodology is demonstrated on the TEP with non-incipient faults. Incipient faults are faulty conditions in which the signal-to-noise ratio is small, and they have not been widely studied in the literature. To address this, a hierarchical dynamic deep learning algorithm is developed specifically for the detection and diagnosis of incipient faults. A major drawback of both methods described above is the need for labeled data, i.e. normal-operation and faulty-operation data. From an industrial point of view, most data in an industrial setting, especially for biochemical processes, is obtained during normal operation, and faulty data may be unavailable or insufficient. Hence, we also develop an unsupervised DL approach for process monitoring. It involves a novel objective function and an NN architecture tailored to detect faults effectively. The idea is to learn the distribution of normal operation data in order to differentiate it from fault conditions. To demonstrate the advantages of the proposed methodology for fault detection, systematic comparisons are conducted with Multiway Principal Component Analysis (MPCA) and Multiway Partial Least Squares (MPLS) on an industrial-scale penicillin simulator.
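The tailored NN objective of the thesis is not reproduced here, but the shared idea of learning the normal-operation distribution and flagging deviations can be sketched with a PCA reconstruction-error (SPE/Q-statistic) monitor of the kind used as a comparison baseline above; the 0.99 empirical quantile control limit is an illustrative choice:

```python
import numpy as np

def fit_pca_monitor(X_normal, n_components, quantile=0.99):
    # Learn the normal-operation subspace (mean + principal directions).
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    P = Vt[:n_components].T  # loading matrix, shape (n_vars, n_components)

    def spe(X):
        # Squared prediction error: energy outside the learned subspace.
        E = (X - mu) - (X - mu) @ P @ P.T
        return (E ** 2).sum(axis=1)

    threshold = np.quantile(spe(X_normal), quantile)  # empirical control limit
    return spe, threshold

def detect_faults(spe, threshold, X_new):
    # Flag samples whose reconstruction error exceeds the control limit.
    return spe(X_new) > threshold
```

A fault that pushes the process off the subspace spanned by normal operation produces a large SPE and is flagged, while normal samples stay (by construction) almost entirely below the limit.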
Past investigations reported that the variability in productivity in Sanofi's pertussis vaccine manufacturing process may be highly correlated with biological phenomena, e.g. oxidative stresses, that are not routinely monitored by the company. While the company monitors and stores a large amount of fermentation data, this data may not be sufficiently informative about the underlying phenomena affecting the level of productivity. Furthermore, since the addition of new sensors in pharmaceutical processes requires extensive and expensive validation and certification procedures, it is very important to assess the potential ability of a sensor to observe relevant phenomena before its actual adoption in the manufacturing environment. This motivates the study of the observability of these phenomena from available data. An algorithm is proposed to check observability for the classification task from the observed data (measurements). The proposed methodology uses a supervised AE to reduce the dimensionality of the inputs. Thereafter, a criterion on the distance between samples is used to calculate the percentage of overlap between the defined classes. The proposed algorithm is tested on the benchmark Tennessee Eastman process and then applied to the industrial vaccine manufacturing process.
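One plausible instantiation of the distance-based overlap criterion mentioned above (the exact criterion used in the work may differ) is to call a sample "overlapping" when its nearest neighbour in the reduced space carries a different class label:

```python
import numpy as np

def class_overlap_percent(Z, y):
    """Z: (n, d) reduced-dimension codes from the AE; y: (n,) integer labels."""
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)   # a sample is not its own neighbour
    nn = D.argmin(axis=1)         # nearest neighbour of each sample
    return 100.0 * np.mean(y[nn] != y)  # % of cross-class nearest neighbours
```

A low overlap percentage suggests the measured variables suffice to separate the classes, i.e. the phenomenon is observable from the available data; a high percentage suggests new sensors would be needed.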

    Deep learning to support anomaly detection in Industry 4.0

    Industry 4.0 (I4.0) corresponds to a new way of planning, organizing, and optimizing production systems. The growing instrumentation of these systems with connected devices, together with the digital transformation, offers new opportunities to make factories intelligent and enable smart manufacturing. These technologies nevertheless face many challenges, and one way to address them is to automate processes, which increases the availability, profitability, and efficiency of the plant. This thesis therefore focuses on the automation of I4.0 through the development of decision-support tools based on data-driven and physics-guided AI models. Beyond the theoretical aspects, the contribution and originality of our study lie in implementing hybrid, explainable, and generalizable models for Predictive Maintenance (PdM). To this end, we developed two approaches for explaining the models: extracting local and global knowledge from the learning process to expose the decision rules via explainable artificial intelligence (XAI) techniques, and introducing physical knowledge or laws to inform or guide the model. Our study concentrates on three main points. First, we present a state of the art of anomaly detection and PdM 4.0 techniques, using bibliometric analysis to extract and analyze relevant information from the Web of Science database. These analyses provide useful guidelines that can help researchers and practitioners understand the main challenges and the most relevant scientific questions related to AI and PdM. Second, we developed two frameworks based on deep neural networks (DNNs).
    The first consists of two modules, a DNN and Deep SHapley Additive exPlanations (DeepSHAP). The DNN module is used to solve imbalanced multi-class classification tasks over the states of a hydraulic system. Despite their performance, questions remain about the reliability and transparency of DNNs as "black-box" models. To answer them, we developed the second module, DeepSHAP, which shows the importance and contribution of each variable to the algorithm's decisions. It also promotes understanding of the process and guides humans to better understand, interpret, and trust AI models. The second, hybrid framework is known as Physics-Informed Neural Networks (PINNs). This model is used to predict the states of the friction stir welding process. A PINN introduces explicit knowledge or physical constraints into the learning algorithm; these constraints provide better knowledge and force the model to follow the topology of the process. Once trained, PINNs can replace numerical simulations that are computationally expensive. In summary, this work opens new and promising perspectives in the explainability of AI models applied to PdM 4.0 problems. In particular, exploiting these frameworks contributes to a more precise knowledge of the system.
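The composite PINN objective, data misfit plus a penalty on the physics residual, can be sketched on a toy problem. The friction stir welding model from the thesis is not reproduced here; a Newton-cooling ODE dT/dt = -k(T - T_env) stands in for the physics, and the network is abstracted as its predictions T_pred on a time grid (with a finite-difference surrogate in place of automatic differentiation):

```python
import numpy as np

def pinn_loss(t, T_pred, T_data, k, T_env, lam=1.0):
    data_loss = np.mean((T_pred - T_data) ** 2)  # fit the measurements
    dTdt = np.gradient(T_pred, t)                # finite-difference surrogate for autograd
    residual = dTdt + k * (T_pred - T_env)       # dT/dt + k (T - T_env) = 0 for the true physics
    physics_loss = np.mean(residual ** 2)        # penalize violations of the ODE
    return data_loss + lam * physics_loss
```

Minimizing this combined objective forces the network to honour the governing equation even where data are sparse, which is what lets a trained PINN stand in for expensive numerical simulation.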