825 research outputs found

    An Efficient Quality-Related Fault Diagnosis Method for Real-Time Multimode Industrial Process

    Focusing on performance monitoring of quality-related complex industrial processes, a novel multimode process monitoring method is proposed in this paper. First, principal component space clustering is implemented under the guidance of quality variables; through extraction of model tags, the clustering information of the original training data can be acquired. Second, according to the multimode characteristics of the process data, a monitoring model integrating a Gaussian mixture model with total projection to latent structures is built on a covariance description form. This multimode total projection to latent structures (MTPLS) model is the foundation for quality-related monitoring of multimode processes. A comprehensive statistical index is then defined based on the posterior probabilities, under Bayes' theorem, of the monitored samples belonging to each Gaussian component, and a combined index is constructed for process monitoring. Finally, motivated by the traditional contribution plot in fault diagnosis, a gradient contribution rate is applied to analyse the variation of variable contribution rates across samples. The method supports online fault monitoring and diagnosis for multimode processes. The performance of the whole scheme is verified on a real industrial hot strip mill process (HSMP) and compared with some existing methods.
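    The posterior-weighted combined statistic described above can be sketched as follows. This is a minimal illustration using scikit-learn's GaussianMixture on synthetic two-mode data, with a T²-style per-component Mahalanobis distance standing in for the paper's TPLS-based statistics; the data and the index are hypothetical stand-ins, not the authors' MTPLS implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic operating modes (hypothetical stand-ins for real process data)
mode_a = rng.normal([0, 0], 0.5, size=(200, 2))
mode_b = rng.normal([5, 5], 0.5, size=(200, 2))
X_train = np.vstack([mode_a, mode_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X_train)

def combined_index(x, gmm):
    """Posterior-weighted sum of per-component Mahalanobis distances (T^2-like)."""
    post = gmm.predict_proba(x.reshape(1, -1))[0]   # P(component | sample)
    t2 = np.empty(gmm.n_components)
    for k in range(gmm.n_components):
        d = x - gmm.means_[k]
        t2[k] = d @ np.linalg.inv(gmm.covariances_[k]) @ d
    return float(post @ t2)                          # Bayesian combined statistic

normal_score = combined_index(np.array([0.1, -0.2]), gmm)  # near mode A
faulty_score = combined_index(np.array([2.5, 2.5]), gmm)   # between modes
```

    A sample inside either mode yields a small index, while a sample far from all Gaussian components yields a large one, so a single control limit can cover all modes.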

    Anomaly detection and mode identification in multimode processes using the field Kalman filter

    A process plant can have multiple modes of operation due to varying demand, availability of resources or the fundamental design of the process. Each of these modes is considered normal operation, and anomalies are characterised as deviations away from it. Such anomalies can be indicative of developing faults which, if left unresolved, can lead to failures and unplanned downtime. The Field Kalman Filter (FKF) is a model-based approach, adopted in this paper for monitoring a multimode process. Previously, the FKF has been applied in process monitoring to differentiate normal operation from known faulty modes of operation. This paper extends the FKF so that it can detect occurrences of anomalies and differentiate them from the various normal modes of operation. A method is proposed for offline training of an FKF monitoring model and for online monitoring. The offline part comprises training an FKF model based on Multivariate Autoregressive State-Space (MARSS) models fitted to historical process data; a monitoring indicator is also introduced. Online monitoring with the FKF for anomaly detection and mode identification is demonstrated using a simulated multimode process, and the performance of the proposed method is further demonstrated using data obtained from a pilot-scale multiphase flow facility. The results show that the method can be applied successfully for anomaly detection and mode identification.
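    The mode-identification logic of such a filter bank can be sketched in heavily simplified form: each normal mode is reduced here to a static Gaussian model (a stand-in for the MARSS-based FKF recursion), the mode with the highest likelihood is selected, and an anomaly is declared when even the best mode explains the data poorly. Mode names, levels, and the threshold rule are hypothetical.

```python
import numpy as np

# Hypothetical bank of steady-state models: each normal mode is a constant
# operating level plus Gaussian noise (a stand-in for the MARSS models).
MODE_LEVELS = {"low": 1.0, "high": 4.0}
NOISE_STD = 0.3

def gaussian_loglik(y, mean, std):
    return -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((y - mean) / std) ** 2

def classify(window):
    """Pick the normal mode with the highest average log-likelihood;
    flag an anomaly when even the best mode explains the data poorly."""
    scores = {m: np.mean([gaussian_loglik(y, lvl, NOISE_STD) for y in window])
              for m, lvl in MODE_LEVELS.items()}
    best = max(scores, key=scores.get)
    # Threshold: log-likelihood of a point 3 sigma away from the mode level
    threshold = gaussian_loglik(MODE_LEVELS[best] + 3 * NOISE_STD,
                                MODE_LEVELS[best], NOISE_STD)
    return best if scores[best] > threshold else "anomaly"

rng = np.random.default_rng(1)
mode_result = classify(rng.normal(4.0, 0.3, 20))     # near the "high" level
anomaly_result = classify(rng.normal(2.5, 0.3, 20))  # between modes
```

    Data near a known operating level is assigned to that mode, while data far from every level is reported as anomalous rather than forced into the nearest mode.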

    Approximate Gaussian conjugacy: parametric recursive filtering under nonlinearity, multimodality, uncertainty, and constraint, and beyond

    Since the landmark work of R. E. Kalman in the 1960s, considerable effort has been devoted to time series state space models for a large variety of dynamic estimation problems. In particular, parametric filters that seek analytical estimates based on a closed-form Markov–Bayes recursion, e.g., recursion from a Gaussian or Gaussian mixture (GM) prior to a Gaussian/GM posterior (termed ‘Gaussian conjugacy’ in this paper), form the backbone of general time series filter design. Due to challenges arising from nonlinearity, multimodality (including target maneuver), intractable uncertainties (such as unknown inputs and/or non-Gaussian noises) and constraints (including circular quantities), new theories, algorithms, and technologies have been developed continuously to maintain such conjugacy, or to approximate it as closely as possible. These developments have contributed in large part to the progress of time series parametric filters over the last six decades. In this paper, we review the state of the art in distinctive categories and highlight some insights that may otherwise be easily overlooked. In particular, specific attention is paid to nonlinear systems with an informative observation, multimodal systems including Gaussian mixture posteriors and maneuvers, and intractable unknown inputs and constraints, to fill some gaps in existing reviews and surveys. In addition, we provide some new thoughts on alternatives to the first-order Markov transition model and on filter evaluation with regard to computational complexity.
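    The Gaussian conjugacy at the heart of such parametric filters can be illustrated with a one-dimensional Kalman recursion: a Gaussian prior propagated through linear-Gaussian dynamics and conditioned on a linear-Gaussian observation yields a Gaussian posterior in closed form. The scalar model constants below are hypothetical.

```python
def kf_step(m, P, y, F=1.0, Q=0.1, H=1.0, R=0.2):
    """One Markov-Bayes recursion: Gaussian prior (m, P) -> Gaussian posterior.
    Illustrative scalar linear-Gaussian model; F, Q, H, R are hypothetical."""
    # Predict: propagate the Gaussian through the linear dynamics
    m_pred = F * m
    P_pred = F * P * F + Q
    # Update: condition on observation y; conjugacy keeps the posterior Gaussian
    S = H * P_pred * H + R                 # innovation variance
    K = P_pred * H / S                     # Kalman gain
    m_post = m_pred + K * (y - H * m_pred)
    P_post = (1 - K * H) * P_pred
    return m_post, P_post

m, P = 0.0, 1.0                            # Gaussian prior
for y in [0.9, 1.1, 1.0]:                  # noisy observations of a constant state
    m, P = kf_step(m, P, y)                # posterior stays Gaussian at each step
```

    Every nonlinearity, non-Gaussian noise, or constraint listed in the abstract breaks exactly this closed-form propagation, which is why so much work goes into maintaining or approximating it.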

    A comparison study of distribution-free multivariate SPC methods for multimode data

    The data-rich environments of industrial applications lead to large amounts of correlated quality characteristics that are monitored using Multivariate Statistical Process Control (MSPC) tools. These variables usually represent heterogeneous quantities that originate from one or multiple sensors and are acquired with different sampling parameters. In this framework, any assumption about the underlying statistical distribution may not be appropriate, and conventional MSPC methods may deliver unacceptable performance. In addition, in many practical applications the process switches from one operating mode to another, leading to a stream of multimode data. Various nonparametric approaches have been proposed for the design of multivariate control charts, but the monitoring of multimode processes remains a challenge for most of them. In this study, we investigate the use of distribution-free MSPC methods based on statistical learning tools. Specifically, we compare the kernel distance-based control chart (K-chart), based on a one-class-classification variant of support vector machines, with a fuzzy neural network method based on adaptive resonance theory. The performance of the two methods was evaluated using both Monte Carlo simulations and real industrial data. The simulated scenarios include different types of out-of-control conditions to highlight the advantages and disadvantages of the two methods. Real data acquired during a roll grinding process provide a framework for assessing the practical applicability of these methods in multimode industrial applications.
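    A K-chart of the kind compared above can be sketched with scikit-learn's one-class SVM: the in-control region is learned from (possibly multimode) training data without distributional assumptions, and new points falling outside the learned boundary are flagged. The data and kernel settings are illustrative, not those of the study.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
# In-control training data drawn from two operating modes (hypothetical)
mode_a = rng.normal([0, 0], 0.3, size=(150, 2))
mode_b = rng.normal([3, 3], 0.3, size=(150, 2))
X_train = np.vstack([mode_a, mode_b])

# Distribution-free boundary around the in-control region(s);
# nu bounds the fraction of training points treated as outliers
chart = OneClassSVM(kernel="rbf", gamma=1.0, nu=0.05).fit(X_train)

in_control = chart.predict([[0.1, -0.1], [3.0, 2.9]])  # near the two modes
out_of_control = chart.predict([[1.5, 1.5]])           # between the modes
```

    Because the RBF boundary wraps each cluster separately, a point between the two modes is signalled as out of control even though it lies inside the convex hull of the training data, which is where elliptical control regions typically fail on multimode streams.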

    Characterization of an imaging multimode optical fiber using digital micro-mirror device based single-beam system

    This work demonstrates experimental approaches to characterizing a single multimode fiber imaging system without a reference beam. Spatial light modulation is performed with a digital micro-mirror device that enables high-speed binary amplitude modulation. Intensity-only images are recorded by the camera and processed by a Bayesian-inference-based algorithm to retrieve the phase of the output optical field as well as the transmission matrix of the fiber. The calculated transmission matrix is validated by three standards: prediction accuracy, transmission imaging, and focus generation. It is also found that information on mode count and eigenchannels can be extracted from the transmission matrix by singular value decomposition. This paves the way for a more compact and cheaper single multimode fiber imaging system for many demanding imaging tasks.
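    The singular-value analysis mentioned above can be sketched as follows: a transmission matrix of limited rank reveals its guided-mode count through the number of significant singular values, while its singular vectors give the eigenchannels. The matrix here is synthetic with a chosen rank; the experimental transmission matrix is measured, not simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_modes, n_inputs, n_outputs = 5, 64, 64   # hypothetical fibre guiding 5 modes

# Synthetic complex transmission matrix of rank n_modes: outputs couple to
# inputs only through the fibre's guided modes
in_coupling = (rng.normal(size=(n_modes, n_inputs))
               + 1j * rng.normal(size=(n_modes, n_inputs)))
out_coupling = (rng.normal(size=(n_outputs, n_modes))
                + 1j * rng.normal(size=(n_outputs, n_modes)))
T = out_coupling @ in_coupling

# Significant singular values ~ guided-mode count; the corresponding
# left/right singular vectors are the output/input eigenchannels
s = np.linalg.svd(T, compute_uv=False)     # sorted in descending order
mode_count = int(np.sum(s > 1e-6 * s[0]))
```

    In practice the cutoff between "significant" and noise-floor singular values must be chosen from the measured spectrum rather than the fixed relative threshold used here.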

    Coherent Imaging through Multicore Fibres with Applications in Endoscopy

    Imaging through optical fibres has recently emerged as a promising method of micro-scale optical imaging within a hair-thin form factor. This has significant applications in endoscopy and may enable minimally invasive imaging deep within live tissue for improved diagnosis of disease. Multi-mode fibres (MMF) are the most common choice because of their high resolution, but multicore fibres (MCF) offer a number of advantages, such as widespread clinical use, the ability to form approximate images without correction, and an inherently sparse transmission matrix (TM) enabling simple and fast characterisation. We present a novel experimental investigation into the properties of MCFs important for imaging, specifically: a new method to upsample and downsample measured TMs with minimal information loss, the first experimental measurement of MCF spatial eigenmodes, a novel statistical treatment of behaviour under bending based on a wireless fading model, and an experimental observation of TM drift due to self-heating effects together with a discussion of how to compensate for it. We next present practical techniques for imaging through MCFs, including alignment, how to parallelise TM characterisation measurements to improve speed, and how to use non-interferometric phase and polarisation recovery for improved stability. Finally, we present two recent applications of MCF imaging: polarimetric imaging using a robust Bayesian inference approach, and entropic imaging of early-stage tumours.

    Fault classification in dynamic processes using multiclass relevance vector machine and slow feature analysis

    This paper proposes a modified relevance vector machine combined with slow feature analysis for fault classification in industrial processes. Traditional support vector machine classification does not work well when there are insufficient training samples. A relevance vector machine, which is a Bayesian learning-based probabilistic sparse model, is developed to provide probabilistic predictions and sparse solutions for the fault category. This approach has the benefits of good generalization ability and robustness to small training samples. To maximize the dynamic separability between classes and reduce the computational complexity, slow feature analysis is used to extract the inner dynamic features and reduce the dimension. Experiments comparing the proposed method with relevance vector machine and support vector machine classification are performed using the Tennessee Eastman process. Across all faults, the relevance vector machine achieves a classification rate of 39%, while the proposed algorithm achieves an overall classification rate of 76.1%. This shows the efficiency and advantages of the proposed method.
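    The slow feature analysis step can be sketched in its linear form: whiten the data, then keep the directions along which the whitened signal's time difference has the smallest variance. On synthetic data with a slow latent driver mixed into fast disturbances, the slowest extracted feature should recover the driver. This is generic linear SFA on made-up data, not the paper's exact pipeline.

```python
import numpy as np

def slow_feature_analysis(X, n_features):
    """Linear SFA sketch: whiten X, then take the directions in which the
    time-difference of the whitened signal has the smallest variance."""
    Xc = X - X.mean(axis=0)
    # Whitening via eigendecomposition of the covariance
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W_white = evecs / np.sqrt(evals)        # scale columns to unit variance
    Z = Xc @ W_white
    # Slowness: eigenvectors of the derivative covariance, slowest first
    dZ = np.diff(Z, axis=0)
    devals, devecs = np.linalg.eigh(np.cov(dZ, rowvar=False))  # ascending
    return Z @ devecs[:, :n_features]

rng = np.random.default_rng(4)
t = np.linspace(0, 4 * np.pi, 500)
slow = np.sin(t)                            # slowly varying latent driver
fast = rng.normal(size=500)                 # rapidly varying disturbance
noise = 0.2 * rng.normal(size=(500, 3))     # independent sensor noise
# Three hypothetical measured variables mixing the slow and fast sources
X = np.column_stack([slow, fast, slow - fast]) + noise

S = slow_feature_analysis(X, 1)
recovery = abs(np.corrcoef(S[:, 0], slow)[0, 1])
```

    The slowest feature correlates strongly with the latent sinusoid, which is the property that makes SFA features a compact dynamic representation for the downstream classifier.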

    Statistical process monitoring of a multiphase flow facility

    Industrial needs are evolving fast towards more flexible manufacturing schemes. As a consequence, it is often required to adapt plant production to demand, which can be volatile depending on the application. This is why it is important to develop tools that can monitor the condition of a process working under varying operational conditions. Canonical Variate Analysis (CVA) is a multivariate data-driven methodology which has been demonstrated to be superior to other methods, particularly under dynamically changing operational conditions. Such comparative studies normally use computer-simulated data from benchmark case studies such as the Tennessee Eastman Process Plant (Ricker, N.L. Tennessee Eastman Challenge Archive, available at 〈http://depts.washington.edu/control/LARRY/TE/download.html〉, accessed 21.03.2014). The aim of this work is to provide a benchmark case demonstrating the ability of different monitoring techniques to detect and diagnose artificially seeded faults in an industrial-scale multiphase flow experimental rig. The changing operational conditions and the size and complexity of the test rig make this case study an ideal benchmark, providing a test bed for evaluating the performance of novel multivariate process monitoring techniques on real experimental data. In this paper, the capabilities of CVA to detect and diagnose faults in a real system working under changing operating conditions are assessed and compared with other methodologies. The results obtained demonstrate that CVA can be effectively applied for the detection and diagnosis of faults in real complex systems, and reinforce the idea that the performance of CVA is superior to that of other algorithms.
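    The canonical variate idea behind CVA can be sketched for a univariate series: canonical correlations between stacked past and future windows are large for a dynamically correlated process and near zero for white noise. This is a simplified illustration of the decomposition only, not the full CVA monitoring scheme with its T² and Q statistics; the AR(1) data and lag choice are hypothetical.

```python
import numpy as np

def canonical_variates(y, lag=2):
    """CVA sketch for a univariate series: canonical correlations between
    stacked past and future windows (SVD of the whitened cross-covariance)."""
    n = len(y) - 2 * lag
    past = np.column_stack([y[lag - 1 - i: lag - 1 - i + n] for i in range(lag)])
    future = np.column_stack([y[lag + i: lag + i + n] for i in range(lag)])
    past = past - past.mean(axis=0)
    future = future - future.mean(axis=0)
    Spp = past.T @ past / n
    Sff = future.T @ future / n
    Spf = past.T @ future / n
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1 / np.sqrt(w)) @ V.T
    sing = np.linalg.svd(inv_sqrt(Spp) @ Spf @ inv_sqrt(Sff), compute_uv=False)
    return sing                              # canonical correlations, descending

rng = np.random.default_rng(5)
e = rng.normal(size=2000)
ar = np.empty(2000)
ar[0] = e[0]
for t in range(1, 2000):
    ar[t] = 0.9 * ar[t - 1] + e[t]           # strongly autocorrelated process

corr_ar = canonical_variates(ar)[0]
corr_noise = canonical_variates(rng.normal(size=2000))[0]
```

    The leading canonical variates capture exactly the past-future dependence that changes when a fault alters the process dynamics, which is what CVA-based monitoring statistics are built on.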

    A new layer of protection through super-alarms with diagnostic capability

    An alarm management methodology can be formulated as a discrete-event sequence recognition problem in which time patterns are used to identify the safe condition of the process, especially during the start-up and shutdown stages. Industrial plants, particularly in the petrochemical, energy, and chemical sectors, require a combined treatment of all the events that can result in a catastrophic accident. This paper introduces a new layer of protection (the super-alarm) for industrial processes based on a diagnostic stage. Alarms and actions of the standard operating procedure are considered discrete events involved in sequences, and the diagnostic stage corresponds to the recognition of a special situation when these sequences occur. This provides operators with pertinent information regarding the normal or abnormal situations induced by the flow of alarms. Chronicle-Based Alarm Management (CBAM) is the methodology used to build the chronicles that allow the super-alarms to be generated. Furthermore, a case study from the petrochemical sector using CBAM is presented to build the chronicles of the normal start-up, abnormal start-up, and normal shutdown scenarios. Finally, scenario validation is performed for an abnormal start-up, showing how a super-alarm is generated.
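    The chronicle-recognition step of CBAM can be sketched as matching an alarm log against a set of time-constrained event pairs; when every constraint of the chronicle is satisfied, the super-alarm fires. The event names, time windows, and single-occurrence simplification below are hypothetical, not taken from the case study.

```python
# A chronicle sketched as event pairs with allowed delay windows (seconds);
# names and bounds are hypothetical.
ABNORMAL_STARTUP = [
    ("PUMP_START", "LOW_FLOW_ALARM", 0, 30),      # LOW_FLOW within 30 s of start
    ("LOW_FLOW_ALARM", "HIGH_TEMP_ALARM", 0, 60),
]

def matches_chronicle(log, chronicle):
    """log: list of (timestamp, event). Returns True if every constrained
    pair occurs with the required delay, i.e. the chronicle is recognised."""
    times = {event: t for t, event in log}        # one occurrence per event (simplified)
    for first, second, lo, hi in chronicle:
        if first not in times or second not in times:
            return False
        delay = times[second] - times[first]
        if not (lo <= delay <= hi):
            return False
    return True

abnormal_log = [(0, "PUMP_START"), (12, "LOW_FLOW_ALARM"), (45, "HIGH_TEMP_ALARM")]
normal_log = [(0, "PUMP_START"), (120, "LOW_FLOW_ALARM")]

super_alarm = matches_chronicle(abnormal_log, ABNORMAL_STARTUP)  # chronicle recognised
no_alarm = matches_chronicle(normal_log, ABNORMAL_STARTUP)       # constraints not met
```

    The temporal windows are what distinguish a chronicle from a plain event set: the same alarms arriving outside the allowed delays do not raise the super-alarm.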