
    A Review of Kernel Methods for Feature Extraction in Nonlinear Process Monitoring

    Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work identifies 12 major issues surrounding the use of kernel methods for nonlinear feature extraction. Each issue is discussed in terms of why it is important and how it has been addressed through the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook into the future of kernel-based process monitoring, which can hopefully instigate more advanced yet practical solutions in the process industries.
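As an illustration of the kind of nonlinear feature extraction the review covers, the following is a minimal kernel PCA sketch in NumPy (an RBF kernel with a hypothetical width parameter `gamma`; this is a textbook formulation, not code from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Squared Euclidean distances between all row pairs, then the RBF map.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_features(X, n_components=2, gamma=0.5):
    """Extract nonlinear features via kernel PCA with an RBF kernel."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Centre the kernel matrix in feature space.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    w, V = w[idx], V[:, idx]
    # Scores = projections of the training samples onto the kernel PCs.
    return Kc @ V / np.sqrt(w)
```

The scores can then feed any monitoring statistic, which is where the reviewed methods differ.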

    Data-driven, mechanistic and hybrid modelling for statistical fault detection and diagnosis in chemical processes

    Research and applications of multivariate statistical process monitoring and fault diagnostic techniques for performance monitoring of continuous and batch processes continue to be a very active area of research. Investigations into new statistical and mathematical methods and their applicability to chemical process modelling and performance monitoring are ongoing. Successive researchers have proposed new techniques and models to address the identified limitations and shortcomings of previously applied linear statistical methods such as principal component analysis and partial least squares. This thesis contributes to this volume of research and investigation into alternative approaches and their suitability for continuous and batch process applications. In particular, the thesis proposes a modified canonical variate analysis state space model based monitoring scheme and compares the proposed scheme with several existing statistical process monitoring approaches using a common benchmark simulator, the Tennessee Eastman benchmark process. A hybrid data-driven and mechanistic model based process monitoring approach is also investigated. The proposed hybrid scheme gives more specific consideration to the implementation and application of the technique for dynamic systems with existing control structures. A non-mechanistic hybrid approach involving the combination of nonlinear and linear data based statistical models to create a pseudo time-variant model for monitoring of large complex plants is also proposed. The hybrid schemes are shown to provide distinct advantages in terms of improved fault detection and reliability. The demonstration of the hybrid schemes was carried out on two separate simulated processes: a CSTR with recycle through a heat exchanger and a CHEMCAD simulated distillation column.
Finally, a batch process monitoring scheme based on a proposed implementation of the interval partial least squares (IPLS) technique is demonstrated using a benchmark simulated fed-batch penicillin production process. The IPLS strategy employs data unfolding methods and a proposed algorithm for segmentation of the batch duration into optimal intervals to give a unique implementation of a Multiway-IPLS model. Application results show that the proposed method gives better model prediction and monitoring performance than the conventional IPLS approach.
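As a rough illustration of the data unfolding step behind Multiway-IPLS, the sketch below unfolds a three-way batch array batch-wise and splits the batch duration into intervals. The even segmentation is only a placeholder for the thesis's optimal-interval algorithm, and the array layout is an assumption:

```python
import numpy as np

def batchwise_unfold(X3):
    """Unfold a (batches, time, variables) array into (batches, time*variables),
    so each batch becomes one row for PLS-style modelling."""
    I, K, J = X3.shape
    return X3.reshape(I, K * J)

def interval_split(K, n_intervals):
    """Split K time samples into contiguous intervals (even split; the thesis
    instead optimises these boundaries)."""
    edges = np.linspace(0, K, n_intervals + 1).astype(int)
    return [(edges[i], edges[i + 1]) for i in range(n_intervals)]
```

A separate PLS model per interval, built on the unfolded data, is the general shape of an interval-PLS scheme.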

    Induction Machine Stator Fault Tracking using the Growing Curvilinear Component Analysis

    Detection of stator-based faults in Induction Machines (IMs) can be carried out in numerous ways. In particular, shorted turns in the stator windings of an IM are among the most common faults in the industry. As a matter of fact, most IMs come with pre-installed current sensors for the purpose of control and protection. To this end, using only the stator current for fault detection has become a recent trend, as it is much cheaper than installing additional sensors. The three-phase stator current signatures have been used in this study to observe the effect of a stator inter-turn fault with respect to the healthy condition of the IM. The pre-processing of the healthy and faulty current signatures has been done via the in-built DSP module of dSPACE, after which these current signatures are passed into the MATLAB® software for further analysis using AI techniques. The authors present a Growing Curvilinear Component Analysis (GCCA) neural network that is capable of detecting and following the evolution of the stator fault using the stator current signature, making online fault detection possible. For this purpose, a topological manifold analysis is carried out to study the fault evolution, which is a fundamental step for calibrating the GCCA neural network. The effectiveness of the proposed method has been verified experimentally.

    Tracking Evolution of Stator-based Fault in Induction Machines using the Growing Curvilinear Component Analysis Neural Network

    Stator-based faults are one of the most common faults among induction motors (IMs). The conventional approach to IM control and protection employs current sensors installed on the motor. Recently, most studies have focused on fault detection by means of the stator current. This paper presents an application of the Growing Curvilinear Component Analysis (GCCA) neural network aided by the Extended Park Vector Approach (EPVA) for the purpose of transforming the three-phase current signals. The GCCA is a growing neural based technique specifically designed to detect and follow changes in the input distribution, e.g. stator faults. In particular, the GCCA has proven its capability of correctly identifying and tracking stator inter-turn faults in IMs. To this end, the three-phase stator currents have been acquired from IMs, which start in a healthy operating state and evolve to different fault severities (up to 10%) under different loading conditions. Data has been transformed using the EPVA and pre-processed to extract statistical time domain features. To calibrate the GCCA neural network, a topological manifold analysis has been carried out to study the input features. The efficacy of the proposed method has been verified experimentally using an IM with a 1.1 kW rating, and it has potential for IMs with different manufacturing conditions.
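The EPVA transformation mentioned above can be sketched as follows: for a balanced (healthy) machine the Park vector modulus is essentially constant, while a stator fault introduces a ripple at twice the supply frequency. This is the textbook formulation, not necessarily the authors' exact implementation:

```python
import numpy as np

def park_vector_modulus(ia, ib, ic):
    """Map three-phase stator currents (arrays of samples) to the Park
    vector modulus, the quantity analysed in the Extended Park Vector
    Approach (EPVA)."""
    i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
    i_q = (ib - ic) / np.sqrt(2)
    return np.sqrt(i_d ** 2 + i_q ** 2)
```

Statistical time-domain features extracted from this modulus can then feed a classifier or, as here, the GCCA network.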

    Multivariate statistical process monitoring using classical multidimensional scaling

    A new Multivariate Statistical Process Monitoring (MSPM) system, which comprises three main frameworks, is proposed, where the system utilizes Classical Multidimensional Scaling (CMDS) as the main multivariate data compression technique instead of the linear-based Principal Component Analysis (PCA). The conventional method, which usually applies a variance-covariance or correlation measure in developing the multivariate scores, is found to be inappropriate, especially for modelling nonlinear processes, where a high number of principal components will typically be required. Alternatively, the proposed method utilizes inter-dissimilarity scales in describing the relationships among the monitored variables instead of a variance-covariance measure for the multivariate scores development. However, the scores are plotted in terms of variable structure, thus providing a different formulation of statistics for monitoring. Nonetheless, the proposed statistics still correspond to the conceptual objectives of Hotelling’s T2 and Squared Prediction Errors (SPE). The first framework corresponds to the original CMDS framework, whereas the second utilizes Procrustes Analysis (PA) functions, which are analogous to the concept of loading factors in PCA for score projection. Lastly, the final framework employs a dynamic mechanism of PA functions as an alternative for enhancing the procedures of the second approach. A simulated system of a Continuous Stirred Tank Reactor with Recycle (CSTRwR) has been chosen for the demonstration, and the fault detection results were comparatively analyzed against the outcomes of PCA on the grounds of false alarm rates, total number of detected cases and total number of fastest detection cases. The last two performance factors are obtained through fault detection time.
The overall outcomes show that the three CMDS-based systems give almost comparable performance to the linear PCA-based monitoring system when dealing with abrupt fault events, whereas the new systems have demonstrated significant improvement over the conventional method in detecting incipient fault cases. More importantly, this monitoring accomplishment can be efficiently executed in a lower-dimensional compressed space compared to the PCA technique, thus providing a much simpler solution. All of this evidence verifies that the proposed approaches are successfully developed conceptually as well as practically for monitoring while complying fundamentally with the principles and technical steps of the conventional MSPM system.
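The CMDS compression at the heart of the proposed system can be sketched from its textbook definition: double-centre the squared distance matrix and take the leading eigenpairs. The thesis-specific monitoring statistics built on top of this are not reproduced here:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling: embed n items in k dimensions
    from an n-by-n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]        # largest eigenvalues first
    w = np.clip(w[idx], 0, None)         # guard against tiny negatives
    return V[:, idx] * np.sqrt(w)        # coordinates of the n items
```

For Euclidean input distances this recovers the original configuration up to rotation, which is where Procrustes Analysis enters in the second framework.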

    Increasing the robustness of autonomous systems to hardware degradation using machine learning

    Autonomous systems perform predetermined tasks (missions) with minimum supervision. In most applications, the state of the world changes with time. Sensors are employed to measure part or all of the world’s state. However, sensors often fail during operation, thereby feeding decision-making with wrong information about the world. Moreover, hardware degradation may alter the dynamic behaviour, and subsequently the capabilities, of an autonomous system, rendering the original mission infeasible. This thesis applies machine learning to yield powerful and robust tools that can facilitate autonomy in modern systems. Incremental kernel regression is used for dynamic modelling. Algorithms of this sort are easy to train and are highly adaptive. Adaptivity allows for model adjustments whenever the environment of operation changes. Bayesian reasoning provides a rigorous framework for addressing uncertainty. Moreover, using Bayesian Networks, complex inference regarding hardware degradation can be answered. Specifically, adaptive modelling is combined with Bayesian reasoning to yield recursive estimation algorithms that are robust to sensor failures. Two solutions are presented by extending existing recursive estimation algorithms from the robotics literature. The algorithms are deployed on an underwater vehicle and the performance is assessed in real-world experiments. A comparison against standard filters is also provided. Next, the previous algorithms are extended to consider sensor and actuator failures jointly. An algorithm that can detect thruster failures in an Autonomous Underwater Vehicle has been developed. Moreover, the algorithm adapts the dynamic model online to compensate for the detected fault. The performance of this algorithm was also tested in a real-world application. Going one step further than hardware fault detection, prognostics predict how much longer a particular hardware component can operate normally.
Ubiquitous sensors in modern systems render data-driven prognostics a viable solution. However, training is based on skewed datasets: datasets in which samples from the faulty region of operation are much fewer than those from the healthy region of operation. This thesis presents a prognostic algorithm that tackles the problem of imbalanced (skewed) datasets.
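One common remedy for such skewed datasets is random oversampling of the minority (faulty) class. The sketch below is illustrative only and is not the prognostic algorithm the thesis proposes:

```python
import numpy as np

def oversample_minority(X, y, seed=None):
    """Balance a skewed dataset by resampling every class, with
    replacement, up to the size of the largest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c in classes:
        idx = np.where(y == c)[0]
        picks = rng.choice(idx, size=n_max, replace=True)
        Xs.append(X[picks])
        ys.append(y[picks])
    return np.vstack(Xs), np.concatenate(ys)
```

After balancing, a standard classifier or regressor no longer sees the healthy region dominate the loss.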

    Improved Slow Feature Analysis for Process Monitoring

    Unsupervised multivariate statistical analysis models are valuable tools for process monitoring and fault diagnosis. Among them, slow feature analysis (SFA) is widely studied and used due to its explicit statistical properties; it aims to extract invariant features of temporally varying signals. This inclusion of dynamics in the model is important when working with process data, where new samples are highly correlated with previous ones. However, the existing variants of SFA cannot exploit the increasingly large data volumes in modern industries, since they require the data to be fed in as a whole in the training stage. Further, sparsity is also desirable to provide interpretable models and prevent model overfitting. To address the aforementioned issues, a novel algorithm for inducing sparsity in SFA is first introduced, referred to as manifold sparse SFA (MSSFA). The non-smooth sparse SFA objective function is optimized using proximal gradient descent and the SFA constraint is fulfilled using manifold optimization. An associated fault detection and diagnosis framework is developed that retains the unsupervised nature of SFA. When compared to SFA, sparse SFA (SSFA), and sparse principal component analysis (SPCA), MSSFA shows superior performance in computational complexity, interpretability, fault detection, and fault diagnosis on the Tennessee Eastman process (TEP) and three-phase flow facility (TPFF) data sets. Furthermore, its sparsity is much improved over SFA and SSFA. Moreover, to exploit the increasing number of collected samples efficiently, a covariance-free incremental SFA (IncSFA) is adapted in this work, which handles massive data efficiently and has a linear feature updating complexity with respect to data dimensionality. An IncSFA based process monitoring scheme is also proposed for anomaly detection. Finally, a new incremental MSSFA (IncMSSFA) algorithm is introduced that is able to use the same monitoring scheme.
These two algorithms are compared against recursive SFA (RSFA), which can also process data incrementally. The efficiency of IncSFA-based monitoring is demonstrated with the TEP and TPFF data sets. The inclusion of sparsity in the IncMSSFA method provides superior monitoring performance at the cost of a quadratic complexity in terms of data dimensionality. This complexity is still an improvement over the cubic complexity of RSFA.
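The core of linear SFA, on which the sparse and incremental variants above build, can be sketched in a few lines: whiten the signals, then find the directions whose discrete time derivatives have the smallest variance. This is the standard batch formulation, not the thesis's MSSFA or IncSFA algorithms:

```python
import numpy as np

def sfa(X, n_features=2):
    """Batch linear slow feature analysis on a (samples, variables)
    time series: returns the n_features slowest-varying projections."""
    X = X - X.mean(axis=0)
    # Whiten: uncorrelated, unit-variance signals.
    w, V = np.linalg.eigh(np.cov(X.T))
    Z = X @ V / np.sqrt(w)
    # Minimise the variance of the discrete time derivative.
    dZ = np.diff(Z, axis=0)
    wd, P = np.linalg.eigh(np.cov(dZ.T))
    return Z @ P[:, :n_features]         # eigh sorts ascending: slowest first
```

Monitoring schemes then apply T2-like statistics to the slow (and fast) feature subspaces.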

    Design of a Multi-Agent System for Process Monitoring and Supervision

    New process monitoring and control strategies are being developed every day, together with process automation strategies, to satisfy the needs of diverse industries. New automation systems are being developed with more capabilities for safety and reliability. Fault detection and diagnosis, and process monitoring and supervision, are some of the new and promising growth areas in process control. With the help of powerful computer systems, the extensive amount of process data from all over the plant can be stored and manipulated efficiently. With this development, data-driven process monitoring approaches have had the chance to emerge alongside model-based process monitoring approaches, where the quantitative model is known a priori. Therefore, the objective of this research is to lay out the basis for designing and implementing a multi-agent system for process monitoring and supervision. The agent-based programming approach adopted in our research provides a number of advantages, such as flexibility, adaptability, and ease of use. In its current status, the designed multi-agent system architecture has three different functionalities ready for use in process monitoring and supervision. It allows: a) easy manipulation and preprocessing of plant data both for training and online application; b) detection of process faults; and c) diagnosis of the source of the fault. In addition, a number of alternative data-driven techniques were implemented to perform monitoring and supervision tasks: Principal Component Analysis (PCA), Fisher Discriminant Analysis (FDA), and Self-Organizing Maps (SOM). The process monitoring system designed in this research project is generic in the sense that it can be used for multiple applications. The system is successfully tested with the Tennessee Eastman Process application.
Fault detection rates and fault diagnosis rates are compared amongst PCA, FDA, and SOM for different faults using the proposed framework.
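The PCA branch of such a monitoring system typically computes Hotelling's T2 in the model subspace and the SPE (Q) statistic in the residual subspace. A minimal sketch, assuming standardized training data and a fixed number of retained components (the agent architecture itself is not reproduced here):

```python
import numpy as np

def pca_monitor(X_train, X_test, k=2):
    """Fit a PCA model on normal-operation data and return the T2 and
    SPE statistics for each test sample."""
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Xt = (X_train - mu) / sd
    _, s, Vt = np.linalg.svd(Xt, full_matrices=False)
    P = Vt[:k].T                          # loadings of retained components
    lam = (s[:k] ** 2) / (len(Xt) - 1)    # score variances
    Z = (X_test - mu) / sd
    T = Z @ P                             # scores
    T2 = np.sum(T ** 2 / lam, axis=1)     # Hotelling's T2
    E = Z - T @ P.T                       # residuals
    SPE = np.sum(E ** 2, axis=1)          # Q statistic
    return T2, SPE
```

Control limits (e.g. from the F and chi-squared approximations) turn these statistics into alarms; FDA and SOM agents would supply the diagnosis step.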

    A model-based monitoring system for a space-based astrometry mission

    Astrometric space missions like Hipparcos, DIVA, and Gaia have to simultaneously determine a tremendous number of parameters concerning astrometric and other stellar properties, the satellite's attitude, as well as the geometric and photometric calibration of the instrument. To reach the targeted level of precision for these missions, many months of observational data have to be incorporated into a global, coherent and interleaved data reduction. It is inevitable that a daily data reduction process is required in order to judge whether the stellar, attitude and instrument parameters achieve their targeted level of precision. This sophisticated data analysis is the in-depth scientific assessment of the quality of all observations within about 24 hours after their reception. It is based on the very complicated "First Look preprocessing" procedure (better known as the Great-Circle reduction from Hipparcos) that provides a one-dimensional, self-consistent and simultaneous solution of the attitude, the instrument calibration and the celestial source parameters. For this purpose one needs to process all 24 hours of data, a task which can only be performed at the Data Center with its computer resources. On the other hand, it is necessary and reasonable to process the observations at the ground Space Operations Center for a quick discovery of delicate changes in the spacecraft performance under quasi-real-time constraints (15–30 min after data reception). For this latter purpose, the concept of a model-based monitoring system has been developed that comprises activities concerning the scientific data health of an astrometric satellite which cannot be guaranteed by the standard procedures applied to typical space missions alone. This monitoring system, called Science Quick Look (ScQL), performs the preliminary scientific assessment of the instrument and of the proper astrometric working of the spacecraft at the (coarse) level of precision attainable at this stage.
The prototype of this software is designed in the framework of the DIVA project, providing monitoring, diagnostic and visualization tools. It performs the first scientific assessment of the geometric stability of the instrument and the proper working of the spacecraft. The monitoring process is based on a model of the Galaxy, on the structure and behavior of the components of the spacecraft, and on its scanning strategy. The system incorporates a simulator of star observations, the core of our model, that allows us to mimic the work of the on-board software and to simulate star transits. The results of an evaluation of our system look very promising, so we plan to pursue further studies in this area. As the DIVA project was stopped, we will adapt our approach to the next space-based astrometry mission, Gaia, which will be launched in 2012. Indeed, many aspects of the rapid assessment of payload and spacecraft health developed in this work in the framework of the DIVA project are analogous to those in Gaia, because the basic principle and geometry of the measurements are the same. The successful completion of the ScQL prototype for the DIVA mission provides the basis for our belief that a ScQL monitoring system for the larger project, Gaia, is achievable in terms of the developed concept. Building a ScQL monitoring system for Gaia therefore becomes much easier because important steps have already been taken in the DIVA project. It is evident, however, that this work has to evolve and grow, along with the concept of the Gaia satellite.