Maximizing Model Generalization for Machine Condition Monitoring with Self-Supervised Learning and Federated Learning
Deep Learning (DL) can diagnose faults and assess machine health from raw
condition monitoring data without manually designed statistical features.
However, practical manufacturing applications remain extremely difficult for
existing DL methods. Machine data is often unlabeled and from very few health
conditions (e.g., only normal operating data). Furthermore, models often
encounter shifts in domain as process parameters change and new categories of
faults emerge. Traditional supervised learning may struggle to learn compact,
discriminative representations that generalize to these unseen target domains
since it depends on having plentiful classes to partition the feature space
with decision boundaries. Transfer Learning (TL) with domain adaptation
attempts to adapt these models to unlabeled target domains but assumes similar
underlying structure that may not be present if new faults emerge. This study
proposes focusing on maximizing the feature generality on the source domain and
applying TL via weight transfer to copy the model to the target domain.
Specifically, Self-Supervised Learning (SSL) with Barlow Twins may produce more
discriminative features for monitoring health conditions than supervised
learning by focusing on semantic properties of the data. Furthermore, Federated
Learning (FL) for distributed training may also improve generalization by
efficiently expanding the effective size and diversity of training data by
sharing information across multiple client machines. Results show that Barlow
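The abstract does not name the FL aggregation algorithm; a common choice is Federated Averaging (FedAvg), sketched below as an illustration only, with model parameters represented as flat vectors and clients weighted by local dataset size:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging sketch: aggregate client model parameters as a
    mean weighted by each client's local dataset size. Parameters are
    represented as flat arrays for simplicity (an assumption, not the
    paper's implementation)."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # normalize to a convex combination
    return sum(c * np.asarray(w) for c, w in zip(coeffs, client_weights))
```

In practice each client would train locally for a few epochs before its parameters are aggregated, and the averaged model is then broadcast back for the next round.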
Twins outperforms supervised learning in an unlabeled target domain with
emerging motor faults when the source training data contains very few distinct
categories. Incorporating FL may also provide a slight advantage by diffusing
knowledge of health conditions between machines
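The Barlow Twins objective mentioned above can be sketched as a redundancy-reduction loss on the cross-correlation matrix of two augmented views' embeddings. This is a minimal NumPy illustration, not the study's implementation; the λ weight is a hypothetical default:

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss on two embedding views of shape (batch, dim):
    push the cross-correlation matrix toward the identity, so matching
    dimensions correlate (invariance) while distinct dimensions
    decorrelate (redundancy reduction)."""
    # standardize each embedding dimension over the batch
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    n, _ = z_a.shape
    c = (z_a.T @ z_b) / n  # cross-correlation matrix, shape (dim, dim)
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum((c - np.diag(np.diag(c))) ** 2)
    return on_diag + lam * off_diag
```

Because the loss only asks the two views to agree, it needs no fault labels, which is why it suits source domains with very few distinct health categories.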
Enhanced Industrial Machinery Condition Monitoring Methodology based on Novelty Detection and Multi-Modal Analysis
This paper presents a condition-based monitoring methodology based on novelty detection applied to industrial machinery. The proposed approach includes both the classical classification of multiple a priori known scenarios and the novel capability of detecting operating modes not previously available. Developing condition-based monitoring methodologies that can isolate unexpected scenarios is currently a trending topic, answering the demanding requirements of future industrial process monitoring systems. First, the method temporally segments the available physical magnitudes and estimates a set of time-based statistical features. Then, a double feature-reduction stage based on Principal Component Analysis and Linear Discriminant Analysis is applied to optimize the classification and novelty detection performance. The subsequent combination of a Feed-forward Neural Network and a One-Class Support Vector Machine allows the proper interpretation of known and unknown operating conditions. The effectiveness of this novel condition monitoring scheme has been verified by experimental results obtained from an automotive industry machine.
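The pipeline described above (time-based statistical features → PCA → LDA → feed-forward classifier plus One-Class SVM novelty detector) can be sketched with scikit-learn on synthetic data. All dimensions, class counts, and hyperparameters below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic time-based statistical features for two a-priori-known
# operating modes (12 features per segment; values are illustrative).
X_known = np.vstack([rng.normal(0.0, 1.0, (100, 12)),
                     rng.normal(4.0, 1.0, (100, 12))])
y_known = np.repeat([0, 1], 100)

# Double feature-reduction stage: PCA followed by LDA.
pca = PCA(n_components=8).fit(X_known)
lda = LinearDiscriminantAnalysis(n_components=1).fit(
    pca.transform(X_known), y_known)
reduce_feats = lambda X: lda.transform(pca.transform(X))

# Feed-forward network for the known modes, One-Class SVM for novelty.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(reduce_feats(X_known), y_known)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(reduce_feats(X_known))

def interpret(x):
    """Flag segments from unseen operating modes; otherwise classify
    among the known ones."""
    x_r = reduce_feats(x)
    if ocsvm.predict(x_r)[0] == -1:
        return "novel operating mode"
    return f"known mode {clf.predict(x_r)[0]}"
```

A segment near a known mode's feature centroid is classified normally, while a segment far from both training clusters is rejected by the One-Class SVM as a new operating mode.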
Scalable and reliable deep transfer learning for intelligent fault detection via multi-scale neural processes embedded with knowledge
Deep transfer learning (DTL) is a fundamental method in the field of
Intelligent Fault Detection (IFD). It aims to mitigate the degradation of
method performance that arises from the discrepancies in data distribution
between the training set (source domain) and the testing set (target domain).
Because fault data collection is challenging and certain faults are scarce,
DTL-based methods face a shortage of observable data, which reduces their
detection performance in the target domain. Furthermore, DTL-based methods lack
the comprehensive uncertainty analysis that is essential for building reliable
IFD systems. To address the
aforementioned problems, this paper proposes a novel DTL-based method known as
Neural Processes-based deep transfer learning with graph convolution network
(GTNP). The feature-based transfer strategy of GTNP bridges the data
distribution discrepancies between the source and target domains in a
high-dimensional space. Joint modeling based on global and local latent
variables, together with a sparse sampling strategy, reduces the demand for
observable data in the target domain. Multi-scale uncertainty analysis is
obtained from the distribution characteristics of the global and local latent
variables: global uncertainty analysis enables GTNP to provide quantitative
values reflecting the complexity of the method and the difficulty of the task,
while local uncertainty analysis allows GTNP to model the uncertainty
(confidence of the fault detection result) of each sample affected by noise
and bias. The validation of the
proposed method is conducted across 3 IFD tasks, consistently showing the
superior detection performance of GTNP compared to other DTL-based methods.
…
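The abstract does not specify how GTNP's feature-based strategy measures the distribution discrepancy it bridges; a common criterion in feature-based deep transfer learning is the Maximum Mean Discrepancy (MMD), sketched here purely as an illustration with an RBF kernel:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between samples X and Y under an
    RBF kernel — a standard measure of distance between source and target
    feature distributions (illustrative; sigma is a hypothetical default)."""
    def k(A, B):
        # pairwise squared Euclidean distances via broadcasting
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

Minimizing such a discrepancy between source- and target-domain features, while training on the labeled source data, is one way a feature-based strategy can align the two domains in a high-dimensional space.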