
    A Review of Kernel Methods for Feature Extraction in Nonlinear Process Monitoring

    Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work has identified 12 major issues surrounding the use of kernel methods for nonlinear feature extraction. Each issue is discussed in terms of why it is important and how it has been addressed through the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook into the future of kernel-based process monitoring, which can hopefully instigate more advanced yet practical solutions in the process industries.
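
    As a rough illustration of the kind of kernel-based feature extraction this review surveys (not code from the paper), the sketch below fits kernel PCA to synthetic "normal" operating data lying on a curved manifold and monitors a reconstruction-error (SPE/Q-type) index for new samples. The synthetic data, the RBF kernel width, and the empirical 99th-percentile control limit are illustrative assumptions; the paper itself discusses principled kernel and parameter selection routes.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Illustrative sketch only: synthetic "normal" data on a curved 1-D manifold
# embedded in three measured variables.
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=(500, 1))
X_normal = np.hstack([t, t**2, np.sin(3.0 * t)]) + 0.05 * rng.standard_normal((500, 3))

# Nonlinear feature extraction with an RBF kernel. The kernel and its width
# are chosen ad hoc here; the review covers principled selection routes.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True)
scores = kpca.fit_transform(X_normal)       # two nonlinear features per sample

def spe(X):
    """Squared reconstruction error (SPE/Q-type index) in the input space."""
    X_hat = kpca.inverse_transform(kpca.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

# Empirical 99th-percentile control limit estimated from the normal data.
spe_limit = np.quantile(spe(X_normal), 0.99)

x_new = np.array([[0.2, 0.9, 0.8]])         # deliberately off the learned manifold
print("SPE =", spe(x_new)[0], "limit =", spe_limit)  # SPE expected to exceed limit
```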

    Data-Driven Fault Detection and Reasoning for Industrial Monitoring

    This open access book assesses the potential of data-driven methods in industrial process monitoring engineering. Process modeling, fault detection, classification, isolation, and reasoning are studied in detail. These methods can be used to improve the safety and reliability of industrial processes. Fault diagnosis, including fault detection and reasoning, has attracted engineers and scientists from various fields such as control, machinery, mathematics, and automation engineering. Combining diagnosis algorithms and application cases, this book establishes a basic framework for the topic and implements various statistical analysis methods for process monitoring. It is intended for senior undergraduate and graduate students who are interested in fault diagnosis technology, researchers investigating automation and industrial security, and professional practitioners and engineers working on engineering modeling and data processing applications.
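
    A minimal sketch of one of the statistical monitoring schemes such books typically cover, linear PCA with Hotelling's T² and SPE (Q) statistics, is given below. The synthetic data, the number of retained components, and the empirical control limits are assumptions for illustration only, not material from the book (which, like most of the literature, derives limits from F- and chi-square-type approximations).

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch: PCA-based monitoring with T^2 and SPE on synthetic data.
rng = np.random.default_rng(1)
X_train = rng.standard_normal((1000, 10))            # stand-in for normal operation
X_train[:, 5:] = X_train[:, :5] @ rng.standard_normal((5, 5))  # correlated variables

mean, std = X_train.mean(axis=0), X_train.std(axis=0)
Z = (X_train - mean) / std                           # autoscaling

pca = PCA(n_components=4).fit(Z)
T = pca.transform(Z)
lam = pca.explained_variance_                        # variance of each score

# T^2 measures variation inside the model subspace; SPE the residual outside it.
t2 = np.sum(T**2 / lam, axis=1)
spe = np.sum((Z - pca.inverse_transform(T))**2, axis=1)

# Empirical 99% control limits from the normal training data.
t2_lim, spe_lim = np.quantile(t2, 0.99), np.quantile(spe, 0.99)

def detect(x_new):
    """Flag a fault if either monitoring statistic exceeds its limit."""
    z = (x_new - mean) / std
    t = pca.transform(z.reshape(1, -1))
    t2_new = float(np.sum(t**2 / lam))
    spe_new = float(np.sum((z - pca.inverse_transform(t))**2))
    return t2_new > t2_lim or spe_new > spe_lim

print(detect(X_train[0] + 8.0))                      # a grossly disturbed sample -> True
```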

    Process Monitoring and Data Mining with Chemical Process Historical Databases

    Modern chemical plants have distributed control systems (DCS) that handle normal operations and quality control. However, the DCS cannot compensate for fault events such as fouling or equipment failures. When faults occur, human operators must rapidly assess the situation, determine causes, and take corrective action, a challenging task further complicated by the sheer number of sensors. This information overload, as well as measurement noise, can hide information critical to diagnosing and fixing faults. Process monitoring algorithms can highlight key trends in data and detect faults faster, reducing or even preventing the damage that faults can cause. This research improves tools for process monitoring of different chemical processes. Previously successful monitoring methods based on statistics can fail on non-linear processes and processes with multiple operating states. To address these challenges, we develop a process monitoring technique based on multiple self-organizing maps (MSOM) and apply it in industrial case studies including a simulated plant and a batch reactor. We also use a standard SOM to detect a novel event in a separation tower and produce contribution plots which help isolate the causes of the event. Another key challenge for any engineer designing a process monitoring system is that implementing most algorithms requires data organized into “normal” and “faulty”; however, data from faulty operations can be difficult to locate in databases storing months or years of operations. To assist in identifying faulty data, we apply data mining algorithms from computer science and compare how they cluster chemical process data from normal and faulty conditions. We identify several techniques which successfully duplicated normal and faulty labels from expert knowledge, and introduce a process data mining software tool to make analysis simpler for practitioners. The research in this dissertation enhances chemical process monitoring tasks. MSOM-based process monitoring improves upon standard process monitoring algorithms in fault identification and diagnosis tasks. The data mining research reduces a crucial barrier to the implementation of monitoring algorithms. The enhanced monitoring introduced can help engineers develop effective and scalable process monitoring systems to improve plant safety and reduce losses from fault events.
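
    The following toy sketch (not the dissertation's MSOM implementation) shows the basic idea behind SOM-based monitoring: train a small self-organizing map on normal operating data and flag samples whose quantization error, i.e. distance to the best matching unit, exceeds a limit estimated from the normal data. The grid size, learning schedule, and synthetic two-variable data are illustrative assumptions.

```python
import numpy as np

# Toy sketch of SOM-based fault detection on synthetic data.
rng = np.random.default_rng(2)

def train_som(X, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a small self-organizing map with a Gaussian neighborhood."""
    n_units = grid[0] * grid[1]
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    W = X[rng.choice(len(X), n_units)]                 # initialize weights from data
    n_steps, step = epochs * len(X), 0
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)                    # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5        # shrinking neighborhood radius
            bmu = np.argmin(np.sum((W - x) ** 2, axis=1))   # best matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)  # grid distances to BMU
            h = np.exp(-d2 / (2.0 * sigma ** 2))       # neighborhood weights
            W += lr * h[:, None] * (x - W)
            step += 1
    return W

def quantization_error(W, X):
    """Distance of each sample to its best matching unit on the map."""
    return np.array([np.min(np.linalg.norm(W - x, axis=1)) for x in X])

# Normal operation: two strongly correlated variables. A fault that breaks the
# correlation lands far from the region of the map learned from normal data.
x1 = rng.standard_normal(400)
X_normal = np.c_[x1, 0.8 * x1 + 0.2 * rng.standard_normal(400)]
W = train_som(X_normal)

limit = np.quantile(quantization_error(W, X_normal), 0.99)   # empirical limit
x_fault = np.array([[3.0, -3.0]])
print(quantization_error(W, x_fault)[0] > limit)             # expected: True
```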

    Application of upscaling methods for fluid flow and mass transport in multi-scale heterogeneous media : A critical review

    Physical and biogeochemical heterogeneity dramatically impacts fluid flow and reactive solute transport behaviors in geological formations across scales. From micro pores to regional reservoirs, upscaling has been proven to be a valid approach to estimate large-scale parameters by using data measured at small scales. Upscaling has considerable practical importance in oil and gas production, energy storage, carbon geologic sequestration, contamination remediation, and nuclear waste disposal. This review covers, in a comprehensive manner, the upscaling approaches available in the literature and their applications to various processes, such as advection, dispersion, matrix diffusion, sorption, and chemical reactions. We include newly developed approaches and distinguish two main categories of upscaling methodologies: deterministic and stochastic. Volume averaging, one of the deterministic methods, has the advantage of upscaling different kinds of parameters and of wide applicability, requiring only a few assumptions with improved formulations. Stochastic analytical methods have been extensively developed but have had limited impact in practice due to their requirement for global statistical assumptions. With rapid improvements in computing power, numerical solutions have become more popular for upscaling. The working principles and limitations of these methods for tackling complex fluid flow and transport problems are emphasized. Still, a large gap exists between the approach algorithms and real-world applications. To bridge the gap, an integrated upscaling framework is needed that incorporates uncertainty quantification techniques, data science, and artificial intelligence into current upscaling algorithms, in order to acquire laboratory- and field-scale measurements and validate the upscaled models and parameters against multi-scale observations in future geo-energy research.
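
    As a concrete, if very small, example of the deterministic averaging ideas this review covers, the sketch below evaluates the classic closed-form effective permeabilities of a perfectly layered medium: the thickness-weighted arithmetic mean for flow parallel to the layers and the harmonic mean for flow across them. The layer permeabilities and thicknesses are invented for illustration and are not taken from the review.

```python
import numpy as np

# Illustrative upscaling of permeability in a perfectly layered medium.
# Closed-form results: arithmetic (thickness-weighted) mean for flow parallel
# to the layers, harmonic mean for flow across them. Layer data are made up.
k = np.array([500.0, 50.0, 5.0, 200.0])   # layer permeabilities, mD
h = np.array([2.0, 1.0, 0.5, 3.0])        # layer thicknesses, m

k_parallel = np.sum(h * k) / np.sum(h)          # arithmetic average
k_perpendicular = np.sum(h) / np.sum(h / k)     # harmonic average

print(f"parallel flow:      {k_parallel:7.1f} mD")
print(f"perpendicular flow: {k_perpendicular:7.1f} mD")
# The harmonic mean is always <= the arithmetic mean, so the low-permeability
# layer dominates flow across the layering -- the kind of scale effect that
# upscaling methods aim to capture for more general heterogeneity.
```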

    COMPUTATIONAL SCIENCE CENTER


    ADVANCES ON BILINEAR MODELING OF BIOCHEMICAL BATCH PROCESSES

    This thesis aims to study the implications of the statistical modeling approaches proposed for the bilinear modeling of batch processes, to develop new techniques to overcome some of the problems that have not yet been solved, and to apply them to data from biochemical processes. The study, discussion and development of the new methods revolve around the four steps of the modeling cycle, from the alignment, preprocessing and calibration of batch data to the monitoring of batch trajectories. Special attention is given to the problem of batch synchronization and its effect on the modeling from different angles. The manuscript is divided into four blocks. First, a state-of-the-art review of latent-structure-based models for continuous and batch processes and of traditional univariate and multivariate statistical process control systems is carried out. The second block of the thesis is devoted to the preprocessing of batch data, in particular the equalization and synchronization of batch trajectories. The first section addresses the problem of the lack of equalization in the variable trajectories. The different types of unequalization scenarios that practitioners might find in batch processes are discussed and solutions to equalize batch data are introduced. In the second section, a theoretical study of the nature of batch processes and of the synchronization of batch trajectories as a prior step to bilinear modeling is carried out. The topics under discussion are i) whether the same synchronization approach must be applied to batch data in the presence of different types of asynchronisms, and ii) whether synchronization is always required even when the length of the variable trajectories is constant across batches. To answer these questions, a thorough study of the most common types of asynchronisms that may be found in batch data is done. Furthermore, two new synchronization techniques are proposed to solve the current problems in post-batch and real-time synchronization. To improve fault detection and classification, new unsupervised control charts and supervised fault classifiers based on the information generated by the batch synchronization are also proposed. In the third block of the manuscript, research is carried out on the parameter stability associated with the most widely used synchronization methods and with principal component analysis (PCA)-based batch multivariate statistical process control methods. The results of this study reveal that accuracy in batch synchronization has a profound impact on the stability of the PCA model parameters. Parameter stability is also closely related to the type of preprocessing performed on the batch data, and to the type of model and unfolding used to transform the three-way data structure into a two-way one. The setting of the parameter stability, the source of variability remaining after preprocessing and the process dynamics should be balanced in such a way that the multivariate statistical models are accurate in fault detection and diagnosis and/or in online prediction. Finally, the fourth block introduces a user-friendly graphical interface developed in Matlab for batch process understanding and monitoring. To perform multivariate analysis, the latest developments in process chemometrics, including the methods proposed in this thesis, are implemented.

    González Martínez, J.M. (2015). ADVANCES ON BILINEAR MODELING OF BIOCHEMICAL BATCH PROCESSES [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/55684
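
    Since the thesis repeatedly refers to unfolding the three-way batch array (batches × variables × time) into a two-way matrix before fitting PCA-based monitoring models, here is a minimal sketch of batch-wise unfolding followed by PCA on synthetic, already-synchronized data. The data, scaling choices, and number of components are assumptions for illustration, and synchronization itself (a main focus of the thesis) is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch of batch-wise unfolding + PCA (illustrative only).
# Synthetic, already-synchronized batch data: I batches x J variables x K time
# points. Real data would first be equalized and synchronized.
rng = np.random.default_rng(3)
I, J, K = 40, 5, 100
t = np.linspace(0.0, 1.0, K)
profiles = np.stack([np.sin(2 * np.pi * (j + 1) * t) for j in range(J)])  # J x K
X = profiles[None, :, :] + 0.1 * rng.standard_normal((I, J, K))           # I x J x K

# Batch-wise unfolding: each batch becomes one row of length J*K, so PCA
# models the covariance of whole trajectories across batches.
X_unfolded = X.reshape(I, J * K)

# Trajectory centering/scaling removes the mean trajectory (the main source
# of nonlinearity in batch data) before fitting the latent-variable model.
mean_traj = X_unfolded.mean(axis=0)
std_traj = X_unfolded.std(axis=0) + 1e-12
Z = (X_unfolded - mean_traj) / std_traj

pca = PCA(n_components=3).fit(Z)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```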

    A collaborative, multi-agent based methodology for abnormal events management

    Ph.D. (Doctor of Philosophy)

    Effect of curing conditions and harvesting stage of maturity on Ethiopian onion bulb drying properties

    The study was conducted to investigate the impact of curing conditions and harvesting stage on the drying quality of onion bulbs. The onion bulbs (Bombay Red cultivar) were harvested at three harvesting stages (early, optimum, and late maturity) and cured at three different temperatures (30, 40 and 50 °C) and relative humidity levels (30, 50 and 70%). The results revealed that curing temperature, RH, and maturity stage had significant effects on all measured attributes except total soluble solids.