
    A Review of Kernel Methods for Feature Extraction in Nonlinear Process Monitoring

    Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work identifies 12 major issues surrounding the use of kernel methods for nonlinear feature extraction. Each issue is discussed in terms of why it is important and how it has been addressed over the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook on the future of kernel-based process monitoring, which will hopefully instigate more advanced yet practical solutions in the process industries.
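
    Kernel principal component analysis (KPCA) is the workhorse feature extractor in this literature. As a minimal sketch of the idea, assuming an RBF kernel, scikit-learn's KernelPCA, and a simple empirical control limit (none of which are prescribed by the review itself):

```python
# Minimal kernel-PCA monitoring sketch. Assumptions (not from the review):
# an RBF kernel with gamma=0.1, 5 components, and an empirical 99% limit.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))                 # normal operating data (synthetic)
X_test = rng.normal(size=(20, 10)) + 2.0             # shifted "faulty" data

scaler = StandardScaler().fit(X_train)
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1).fit(scaler.transform(X_train))
score_var = kpca.transform(scaler.transform(X_train)).var(axis=0)

def t2(X):
    """Hotelling-style T^2 statistic on kernel PC scores."""
    s = kpca.transform(scaler.transform(X))
    return (s**2 / score_var).sum(axis=1)

limit = np.quantile(t2(X_train), 0.99)               # empirical control limit
print("fraction of test samples flagged:", (t2(X_test) > limit).mean())
```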

    Explainable AI for Machine Fault Diagnosis: Understanding Features' Contribution in Machine Learning Models for Industrial Condition Monitoring

    Although the effectiveness of machine learning (ML) for machine diagnosis has been widely established, the interpretation of diagnosis outcomes is still an open issue. Machine learning models behave as black boxes; therefore, the contribution of each selected feature to the diagnosis is not transparent to the user. This work investigates the capability of SHapley Additive exPlanations (SHAP) to identify the most important features for fault detection and classification in condition monitoring programs for rotating machinery. The authors analyse the case of medium-sized bearings of industrial interest. Namely, vibration data were collected for different health states from the test rig for industrial bearings available at the Mechanical Engineering Laboratory of Politecnico di Torino. The Support Vector Machine (SVM) and k-Nearest Neighbour (kNN) diagnosis models are explained by means of SHAP. Accuracies higher than 98.5% are achieved for both models using SHAP as a criterion for feature selection. It is found that the skewness and the shape factor of the vibration signal have the greatest impact on the models' outcomes.
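
    A minimal sketch of the SHAP-based feature-ranking step, assuming scikit-learn and the shap package; the synthetic data and the feature names (rms, skewness, shape factor, etc.) are placeholders rather than the authors' bearing dataset:

```python
# Sketch of SHAP feature ranking for an SVM diagnosis model. The features and
# labels are synthetic placeholders, not the authors' bearing vibration data.
import numpy as np
import shap
from sklearn.svm import SVC

rng = np.random.default_rng(1)
feature_names = ["rms", "kurtosis", "skewness", "crest_factor", "shape_factor"]
X = rng.normal(size=(300, 5))
y = (X[:, 2] + 0.5 * X[:, 4] > 0).astype(int)        # toy labels driven by two features

model = SVC(probability=True).fit(X, y)
f = lambda data: model.predict_proba(data)[:, 1]     # explain the class-1 probability

# KernelExplainer is model-agnostic; a small background sample keeps it tractable.
explainer = shap.KernelExplainer(f, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:20])          # shape (20, 5)

# Mean |SHAP| per feature serves as the feature-selection criterion.
for name, imp in sorted(zip(feature_names, np.abs(shap_values).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```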

    On Data Depth and the Application of Nonparametric Multivariate Statistical Process Control Charts

    The purpose of this article is to summarize recent research results on constructing nonparametric multivariate control charts, with a main focus on data-depth-based control charts. Data depth provides data reduction for large-variable problems in a completely nonparametric way. Several depth measures, including Tukey depth, are shown to be particularly effective for statistical process control when the data deviate from the normality assumption. For detecting slow or moderate shifts in the process target mean, the multivariate version of the EWMA chart is generally robust to non-normal data, so nonparametric alternatives may be less often required.
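
    A sketch of a depth-based "r chart" in the spirit described above; the random-direction approximation of Tukey (halfspace) depth and the 5% signalling rule are our illustrative choices, not taken from the article:

```python
# Depth-based r chart sketch: approximate Tukey depth by random projections,
# then signal when a new point's depth rank among reference depths is low.
import numpy as np

def tukey_depth(x, X, n_dirs=200, seed=2):
    """Approximate halfspace (Tukey) depth of x w.r.t. sample X."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # random unit directions
    proj_X, proj_x = X @ U.T, x @ U.T
    frac_ge = (proj_X >= proj_x).mean(axis=0)        # mass on one side of x
    frac_le = (proj_X <= proj_x).mean(axis=0)        # mass on the other side
    return float(np.minimum(frac_ge, frac_le).min())

rng = np.random.default_rng(3)
reference = rng.standard_t(df=3, size=(300, 4))      # heavy-tailed in-control data
ref_depths = np.array([tukey_depth(p, reference) for p in reference])

for shift in (0.0, 2.0):                             # in-control vs. shifted point
    x_new = rng.standard_t(df=3, size=4) + shift
    r = (ref_depths <= tukey_depth(x_new, reference)).mean()
    print(f"shift={shift}: r={r:.3f}{'  -> signal' if r < 0.05 else ''}")
```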

    Multimode Process Monitoring Based on Sparse Principal Component Selection and Bayesian Inference-Based Probability

    Owing to the demand for diversified products, modern industrial processes typically have multiple operating modes. At the same time, variables within the same mode often follow a mixture of Gaussian distributions. In this paper, a novel algorithm based on sparse principal component selection (SPCS) and Bayesian inference-based probability (BIP) is proposed for multimode process monitoring. SPCS can be formulated as a just-in-time regression between all PCs and each sample. SPCS selects PCs according to the nonzero regression coefficients, which indicate the compact expression of the sample. This expression is necessarily discriminative: amongst all subsets of PCs, SPCS selects those which most compactly express the sample and rejects all other possible but less compact expressions. BIP is utilized to compute the posterior probability of each monitored sample belonging to each of the multiple components and to derive an integrated global probabilistic index for fault detection in multimode processes. Finally, to verify its superiority, the SPCS-BIP algorithm is applied to the Tennessee Eastman (TE) benchmark process and a continuous stirred-tank reactor (CSTR) process.
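
    A hedged sketch of the SPCS step as the abstract describes it: each incoming sample is regressed on the PCA loading vectors with an L1 penalty, and only the PCs with nonzero coefficients are used to monitor that sample. The Lasso penalty value and the T2-style statistic are illustrative assumptions:

```python
# SPCS sketch (our reading of the abstract): a just-in-time Lasso of each sample
# on the PCA loadings selects the PCs that most compactly express that sample.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
X_train = rng.normal(size=(500, 12))                 # synthetic training data
pca = PCA(n_components=8).fit(X_train)
P = pca.components_.T                                # (12, 8): loading vectors as columns

def select_pcs(x, alpha=0.05):
    """Return indices of PCs with nonzero Lasso coefficients for sample x."""
    lasso = Lasso(alpha=alpha, fit_intercept=False).fit(P, x)
    return np.flatnonzero(lasso.coef_)

x_new = rng.normal(size=12)
idx = select_pcs(x_new)
scores = pca.transform(x_new.reshape(1, -1))[0, idx] # scores on the selected PCs only
t2 = np.sum(scores**2 / pca.explained_variance_[idx])
print("selected PCs:", idx, " T2 =", round(float(t2), 3))
```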

    Constructing bi-plots for random forest: Tutorial

    Current technological developments have allowed for a significant increase in the availability of data. Consequently, this has opened enormous opportunities for the machine learning and data science fields, translating into the development of new algorithms for a wide range of applications in medical, biomedical, daily-life, and national-security areas. Ensemble techniques are among the pillars of the machine learning field; they can be defined as approaches in which multiple complex, independent/uncorrelated predictive models are combined by either averaging or voting to yield higher model performance. Random forest (RF), a popular ensemble method, has been successfully applied in various domains due to its ability to build predictive models with high certainty and little need for model optimization. RF provides both a predictive model and an estimate of the variable importance. However, the estimate of the variable importance is based on thousands of trees and therefore does not specify which variable is important for which sample group. The present study demonstrates an approach based on the pseudo-sample principle that allows for the construction of bi-plots (i.e. spin plots) associated with RF models. The pseudo-sample principle for RF is explained and demonstrated using two simulated datasets and three types of real data, covering political science, food chemistry, and human microbiome data. The pseudo-sample bi-plots, associated with RF and its unsupervised version, allow for a versatile visualization of multivariate models, the variable importance, and the relations among variables.
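
    A simplified, one-dimensional reading of the pseudo-sample principle for RF (the paper constructs full bi-plots; here each variable's pseudo-samples are traced only through the predicted class-1 probability). Data and model settings are placeholders:

```python
# Simplified pseudo-sample trace for a random forest: pseudo-samples are zero in
# every (centered) column except one, which is swept over a grid; their predicted
# probabilities show how strongly that variable moves the model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 6))                        # assume column-centered data
y = (X[:, 0] - X[:, 3] > 0).astype(int)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

def pseudo_samples(j, n_vars=6, levels=np.linspace(-2, 2, 9)):
    """Pseudo-samples for variable j: zeros except a grid of values in column j."""
    Z = np.zeros((len(levels), n_vars))
    Z[:, j] = levels
    return Z

for j in range(6):
    traj = rf.predict_proba(pseudo_samples(j))[:, 1]
    # A wide trajectory span flags variable j as important for the class boundary.
    print(f"x{j}: probability range {traj.min():.2f} to {traj.max():.2f}")
```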

    Digital Image-Based Frameworks for Monitoring and Controlling of Particulate Systems

    Particulate processes are widely involved in various industries, and most products in the chemical industry today are manufactured as particulates. Previous research and practice illustrate that the final product quality can be influenced by particle properties such as size and shape, which are related to operating conditions. Online characterization of these particles is an important step for maintaining desired product quality in particulate processes. Image-based characterization for the purpose of monitoring and controlling particulate processes is very promising and attractive. The development of a digital image-based framework, in the context of this research, can be envisioned in two parts. One is performing image analysis and designing advanced algorithms for segmentation and texture analysis. The other is formulating and implementing modern predictive tools to establish correlations between texture features and particle characteristics. According to the extent of touching and overlapping between particles in images, two image analysis methods were developed and tested. For slight touching problems, image segmentation algorithms were developed by introducing Wavelet Transform de-noising and Fuzzy C-means Clustering to detect the touching regions, and by exploiting the intensity and geometry characteristics of touching areas. Since individual particles can be identified through image segmentation, particle number, particle equivalent diameter, and size distribution were used as the features. For severe touching and overlapping problems, texture analysis was carried out through the estimation of the wavelet energy signature and the fractal dimension based on wavelet decomposition of the objects. Predictive models for monitoring and control of particulate processes were formulated and implemented. Building on the feature extraction properties of the wavelet decomposition, a projection technique such as principal component analysis (PCA) was used to detect off-specification conditions in which the particle mean size deviates from the target value. Furthermore, linear and nonlinear predictive models based on partial least squares (PLS) and artificial neural networks (ANN) were formulated, implemented, and tested on an experimental facility to predict particle characteristics (mean size and standard deviation) from the image texture analysis.
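
    A sketch of the wavelet energy signature described above, using PyWavelets; the wavelet family (db4) and decomposition level are placeholder choices:

```python
# Wavelet energy signature sketch with PyWavelets; "db4" and 3 levels are
# placeholder choices, and the image is synthetic rather than a particle image.
import numpy as np
import pywt

rng = np.random.default_rng(6)
image = rng.random((128, 128))                       # stand-in for a particle image

coeffs = pywt.wavedec2(image, wavelet="db4", level=3)
subbands = [coeffs[0]] + [d for level in coeffs[1:] for d in level]
energies = np.array([float(np.sum(c**2)) for c in subbands])

# Energy fraction per subband (approximation + H/V/D details at each level)
signature = energies / energies.sum()
print(np.round(signature, 4))                        # texture features for PCA/PLS/ANN
```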

    Hyperspectral Imaging and Its Applications in the Nondestructive Quality Assessment of Fruits and Vegetables

    Over the past decade, hyperspectral imaging has been rapidly developing and widely used as an emerging scientific tool for nondestructive fruit and vegetable quality assessment. The hyperspectral imaging technique integrates both imaging and spectroscopic techniques into one system, acquiring a set of monochromatic images at hundreds or thousands of nearly continuous wavelengths. Many studies based on spatial and/or spectral image processing and analysis have been published proposing the use of hyperspectral imaging for quality assessment of fruits and vegetables. This chapter presents a detailed overview of the fundamentals, latest developments, and applications of hyperspectral imaging in the nondestructive assessment of fruits and vegetables. Additionally, the principal components, basic theories, and corresponding processing and analytical methods are also reported.
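
    As a minimal illustration of how such hypercubes are typically handled before chemometric analysis (not code from the chapter), the (rows x cols x bands) cube can be unfolded into a pixel-by-band matrix and projected with PCA:

```python
# Unfold a (rows x cols x bands) hypercube into a pixel-by-band matrix and
# compute PC score images; the cube and band count are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
cube = rng.random((64, 64, 200))                     # e.g. 200 contiguous wavelengths

pixels = cube.reshape(-1, cube.shape[2])             # (64*64, 200) pixel spectra
scores = PCA(n_components=3).fit_transform(pixels)
score_images = scores.reshape(64, 64, 3)             # one score image per PC
print(score_images.shape)
```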

    A Review of Bayesian Methods in Electronic Design Automation

    The utilization of Bayesian methods has been widely acknowledged as a viable solution for tackling various challenges in electronic integrated circuit (IC) design under stochastic process variation, including circuit performance modeling, yield/failure rate estimation, and circuit optimization. As the post-Moore era brings about new technologies (such as silicon photonics and quantum circuits), many of the associated issues are similar to those encountered in electronic IC design and can be addressed using Bayesian methods. Motivated by this observation, we present a comprehensive review of Bayesian methods in electronic design automation (EDA). By doing so, we hope to equip researchers and designers with the ability to apply Bayesian methods to solving stochastic problems in electronic circuits and beyond.
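
    As a toy illustration of one task the review covers, circuit performance modeling under process variation, here is a small Bayesian linear regression with Monte Carlo yield estimation; all priors, specs, and numbers are invented for the sketch:

```python
# Toy Bayesian performance model: delay ~ 1.0 + w.v with Gaussian prior on w and
# known noise, followed by Monte Carlo yield estimation against a delay spec.
# Every number here (prior width, noise, spec) is invented for illustration.
import numpy as np

rng = np.random.default_rng(8)
v = rng.normal(size=(50, 3))                         # process-variation samples
w_true = np.array([0.2, -0.1, 0.05])
delay = 1.0 + v @ w_true + rng.normal(scale=0.02, size=50)

tau2, sigma2 = 1.0, 0.02**2                          # prior variance, noise variance
A = v.T @ v / sigma2 + np.eye(3) / tau2              # posterior precision
mean_w = np.linalg.solve(A, v.T @ (delay - 1.0) / sigma2)
cov_w = np.linalg.inv(A)

# Draw weights from the posterior and fresh variations; estimate spec yield.
W = rng.multivariate_normal(mean_w, cov_w, size=2000)
V = rng.normal(size=(2000, 3))
pred_delay = 1.0 + np.einsum("ij,ij->i", V, W)
print("estimated yield (delay < 1.3):", (pred_delay < 1.3).mean())
```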