Industrial Process Monitoring in the Big Data/Industry 4.0 Era: from Detection, to Diagnosis, to Prognosis
We provide a critical outlook on the evolution of Industrial Process Monitoring (IPM) since
its introduction almost 100 years ago. Several evolution trends that have structured IPM
developments over this extended period are briefly reviewed, with particular focus on data-driven
approaches. We also argue that, beyond these trends, the research focus itself has evolved. The initial
period was centred on optimizing IPM detection performance. More recently, root cause analysis and
diagnosis gained importance and a variety of approaches were proposed to expand IPM with this
new and important monitoring dimension. We believe that, in the future, the emphasis will be to
bring yet another dimension to IPM: prognosis. Some perspectives are put forward in this regard,
including a strong interplay between the Process and Maintenance departments, hitherto managed as
separate silos.
An extensive reference dataset for fault detection and identification in batch processes
Close process monitoring (i.e., detection and identification of disturbances) is important to achieve high process efficiency and safety. The Tennessee Eastman benchmark is an extensive dataset for fault detection and identification, but it is representative only of continuous processes because it does not exhibit the inherent non-stationarity that complicates the monitoring of batch processes. Nevertheless, batch processes also play an important role in many types of industry. This paper therefore presents an extensive reference dataset for benchmarking data-driven methodologies for fault detection and identification in batch processes.
The original Pensim model [10] is expanded with sensor noise. By changing the properties of the initial conditions and/or model parameters, four subsets of different complexity are generated, each containing 400 batches under normal operation. To properly assess fault detection and identification in batch processes, 15 faults are simulated with various amplitudes and onset times, yielding 22,200 faulty batches per subset; together with the normal batches, the dataset comprises 90,400 batches in total.
Analysis of the data indicates that the presented types of process faults and their various amplitudes in each of the four subsets present a suitable benchmark for fault detection and identification in batch processes. The dataset is freely available at http://cit.kuleuven.be/biotec/batchbenchmark.
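The fault injection scheme described above (a fault of a given amplitude superimposed from a chosen onset time, on top of sensor noise) can be sketched as follows. This is a toy stand-in, not the actual Pensim/RAYMOND implementation; the profile shape and the names `simulate_batch` and `add_sensor_bias` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_batch(n_samples=200, noise_sd=0.02):
    """Toy stand-in for a Pensim-like batch trajectory with sensor noise.
    (Illustrative only; the real dataset uses the full Pensim model.)"""
    t = np.linspace(0, 1, n_samples)
    clean = 1.0 - np.exp(-3.0 * t)          # smooth nominal trajectory
    return clean + rng.normal(0.0, noise_sd, n_samples)

def add_sensor_bias(profile, amplitude, onset):
    """Superimpose a step (bias) fault of given amplitude from `onset` onwards."""
    faulty = profile.copy()
    faulty[onset:] += amplitude
    return faulty

normal = simulate_batch()
faulty = add_sensor_bias(normal, amplitude=0.2, onset=120)
```

Varying `amplitude` and `onset` over a grid, as the dataset does per fault type, yields faulty batches of graded detection difficulty.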
Quality assessment of a variance estimator for Partial Least Squares prediction of batch-end quality
This paper studies batch-end quality prediction using Partial Least Squares (PLS). The applicability of the zeroth-order approximation of Faber and Kowalski (1997) for estimation of the PLS prediction variance is critically assessed. The estimator was originally developed for spectroscopy calibration and its derivation involves a local linearization under specific assumptions, followed by a further approximation. Although the assumptions do not hold for batch process monitoring in general, they are not violated for the selected case study. Based on extensive Monte Carlo simulations, the influence of noise variance, number of components and number of training batches on the bias and variability of the variance estimation is investigated. The results indicate that the zeroth-order approximation is too restrictive for batch process data. The development of a variance estimator based on a full local linearization is required to obtain more reliable variance estimations for the development of prediction intervals. © 2013.
Bioflocculation and Activated Sludge Separation: A PLS Case Study
Sedimentation and filtration are the most common techniques for activated sludge separation in wastewater treatment plants. Using partial least squares (PLS), the influence of bioflocculation-related variables on removal efficiency was assessed. Small particles and dissolved polysaccharides are deemed detrimental for filtration, while large hydrophobic flocs improve the filtration performance. Settling worsens when filaments are present and improves with the presence of large flocs. The potential of using PLS is demonstrated, although more measurements and a wider diversity of samples would improve the modeling performance. Such models can then pinpoint crucial measurements for bioflocculation monitoring in relation to separation performance in wastewater treatment plants. © 2016.
Hybrid derivative dynamic time warping for online industrial batch-end quality estimation
This paper discusses the design of an inferential sensor for the online prediction of the end-quality of an industrial batch polymerization process. Owing to unequal batch speeds, measurement profiles must be synchronized before modeling. This makes profile alignment an integral part of any inferential sensor. In this work, a novel online hybrid derivative dynamic time warping data alignment technique is presented. The proposed technique allows for automatic adjustment of the warping resolution to achieve optimal alignment results for both slowly and rapidly varying parts of the measurement profiles. The proposed online data alignment technique is combined with a multiway partial least-squares black box model to yield online predictions of the final quality of a running batch process. It is demonstrated that this inferential sensor is capable of accurately predicting the quality online for an industrial polymerization process, even when the production process is only halfway, that is, well before lab measurements become available. As a result of this early warning, batches violating the quality specifications can be corrected or even stopped. This leads to fewer off-spec batches, saves production time, lowers operational costs, and reduces waste material and energy. © 2012 American Chemical Society.
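Profile synchronization by time warping can be illustrated with classic dynamic time warping. This is a minimal sketch only: the paper's hybrid *derivative* DTW additionally adapts the warping resolution along the profile, which is not shown here, and the profiles below are invented for the example:

```python
import numpy as np

def dtw_path(ref, query):
    """Classic DTW alignment between two 1-D profiles: dynamic programming
    over the cumulative cost matrix, then backtracking the warping path."""
    n, m = len(ref), len(query)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - query[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path from the end of both profiles.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], D[n, m]

# A "slow" batch: the same profile shape sampled over more time points.
ref = np.sin(np.pi * np.linspace(0, 1, 50))
query = np.sin(np.pi * np.linspace(0, 1, 80))
path, dist = dtw_path(ref, query)
```

The returned `path` maps each sample of the slow batch onto the reference timeline, which is the prerequisite for feeding equal-length profiles into a multiway PLS model.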
The RAYMOND simulation package — Generating RAYpresentative MONitoring Data to design advanced process monitoring and control algorithms
This work presents the RAYMOND simulation package for generating RAYpresentative MONitoring Data. RAYMOND is a free MATLAB package and can simulate a wide range of processes; a number of widely-used benchmark processes are available, but user-defined processes can easily be added. Its modular design results in large flexibility with respect to the simulated processes: input fluctuations resulting from upstream variability can be introduced, sensor properties (measurement noise, resolution, range, etc.) can be freely specified, and various (custom) control strategies can be implemented. Furthermore, process variability (biological variability or non-ideal behavior) can be included, as can process-specific disturbances.
In two case studies, the importance of including non-ideal behavior for monitoring and control of batch processes is illustrated. Hence, it should be included in benchmarks to better assess the performance and robustness of advanced process monitoring and control algorithms.
Analysis of smearing-out in contribution plot based fault isolation for Statistical Process Control
This paper studies the smearing effect encountered in contribution plot based fault isolation, i.e., the influence of faulty variables on the contributions of non-faulty variables. Since the generation of contribution plots requires no a priori information about the detected disturbance (e.g., historical faulty data), it is a popular fault isolation technique in Statistical Process Control (SPC). However, Westerhuis et al. (2000) demonstrated that contributions suffer from fault smearing. As a consequence, variables unaffected by the fault may be highlighted and faulty variables obscured during the contribution analysis. This paper presents a thorough analysis of the smearing effect for three general contribution computation methods: complete decomposition, partial decomposition and reconstruction-based contributions. The analysis shows that (i) smearing is present in all three methods, (ii) smearing depends on the chosen number of principal components of the underlying PCA or PLS model and (iii) the extent of smearing increases for variables correlated in the training data for a well-chosen model order. The effect of smearing on the isolation performance of single and multiple sensor faults of various magnitudes is studied and illustrated using a simulation case study. The results indicate that correct isolation with contribution plots is not guaranteed for multiple sensor faults. Furthermore, contribution plots only outperform univariate fault isolation for single sensor faults with small magnitudes. For multiple sensor faults, univariate fault isolation exhibits a significantly larger correct fault isolation rate. Based on the smearing analysis and the specific results for sensor faults, the authors advise using contribution plots only if a sound physical interpretation of the principal components is available. Otherwise, multivariate detection followed by univariate fault isolation is recommended. © 2013.
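The smearing effect is easy to reproduce with complete-decomposition contributions to the squared prediction error (SPE) of a toy PCA model. The data and model below are illustrative assumptions, not the paper's simulation case study: two sensors are made strongly correlated in training, so a bias on one of them also produces a large contribution for the other, fault-free sensor:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data: two strongly correlated sensors plus one independent sensor.
n = 500
base = rng.normal(size=n)
X = np.column_stack([base,
                     base + 0.05 * rng.normal(size=n),
                     rng.normal(size=n)])
X = X - X.mean(axis=0)

# PCA model retaining one component (well chosen for this near-rank-1 structure).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:1].T                      # loadings, shape (3, 1)

def spe_contributions(x):
    """Complete-decomposition contributions of each variable to the SPE:
    c_j = e_j**2 with residual e = (I - P P^T) x; the c_j sum to the SPE."""
    e = x - P @ (P.T @ x)
    return e ** 2

# Sensor fault: a bias on variable 0 only.
x_fault = np.array([3.0, 0.0, 0.0])
c = spe_contributions(x_fault)
```

Here `c[1]` is of the same order as `c[0]` even though only sensor 0 is faulty, while the uncorrelated sensor 2 is barely affected, matching finding (iii) above.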
Improving classification-based diagnosis of batch processes through data selection and appropriate pretreatment
This work considers the application of classification algorithms for data-driven fault diagnosis of batch processes. A novel data selection methodology is proposed which enables online classification of detected disturbances without requiring the estimation of unknown (future) process behavior, as is the case in previously reported approaches.
The proposed method is benchmarked in two case studies using the Pensim process model of Birol et al. (2002) implemented in RAYMOND. Both a simple k-Nearest Neighbors (k-NN) classifier and a more complex Least Squares Support Vector Machine (LS-SVM) are employed to demonstrate the generic nature of the proposed approach. In addition, the influence of different data pretreatment methods on the classification performance is discussed, together with a motivation for selecting the correct pretreatment steps. Finally, the influence of the number of available training batches is studied.
The results demonstrate that a good classification performance can be achieved with the proposed data selection method, even with a low number of faulty training batches, by exploiting knowledge of the nature of the faults to be diagnosed during data pretreatment. This provides a proof of concept for classification-based batch diagnosis and demonstrates the importance of incorporating process insight in the construction of data-driven process monitoring and diagnosis tools.
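Classification-based diagnosis of the kind described above can be sketched minimally: unfold each batch profile into a feature vector, standardise, and classify with k-NN. The batch profiles, fault shapes, and pretreatment below are toy assumptions; the paper's data selection methodology for online use is not implemented here:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

def make_batch(fault):
    """Toy batch profile (100 samples, 1 variable): fault class 0 adds a
    slow drift, class 1 a step disturbance. Stand-in for Pensim trajectories."""
    t = np.linspace(0, 1, 100)
    x = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=100)
    if fault == 0:
        x += 0.8 * t          # slow drift
    else:
        x[50:] += 0.8         # step disturbance
    return x

# Unfolded training batches: each row is one batch's full profile.
X_train = np.array([make_batch(f) for f in [0, 1] * 20])
y_train = np.array([0, 1] * 20)

# Pretreatment: variable-wise standardisation of the unfolded profiles.
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X_train), y_train)

X_test = np.array([make_batch(f) for f in [0, 1] * 5])
y_test = np.array([0, 1] * 5)
acc = knn.score(scaler.transform(X_test), y_test)
```

Swapping the k-NN for an LS-SVM (or any other classifier) leaves the pipeline unchanged, which is the sense in which the approach is generic.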