
    Novel deep cross-domain framework for fault diagnosis of rotary machinery in prognostics and health management

    Improving the reliability of engineered systems is a crucial problem in many engineering fields, such as aerospace, nuclear energy, and water desalination. This requires efficient and effective system health monitoring methods, including processing and analyzing massive machinery data to detect anomalies and performing diagnosis and prognosis. In recent years, deep learning has been a fast-growing field and has shown promising results for Prognostics and Health Management (PHM) in interpreting condition monitoring signals such as vibration, acoustic emission, and pressure, owing to its capacity to mine complex representations from raw data. This doctoral research provides a systematic review of state-of-the-art deep learning-based PHM frameworks, an empirical analysis of bearing fault diagnosis benchmarks, and a novel multi-source domain adaptation framework. It emphasizes the most recent trends in the field and presents the benefits and potential of state-of-the-art deep neural networks for system health management. In addition, the limitations and challenges of existing technologies are discussed, pointing to opportunities for future research. The empirical study evaluates existing models on bearing fault diagnosis benchmark datasets in terms of performance metrics such as accuracy and training time, providing a reference point for comparing and testing new models. A novel multi-source domain adaptation framework for fault diagnosis of rotary machinery is also proposed, which aligns the domains at both the feature level and the task level. The proposed framework transfers knowledge from multiple labeled source domains to a single unlabeled target domain by reducing the feature distribution discrepancy between the target domain and each source domain. The framework also reduces naturally to the single-source domain adaptation setting.
It can likewise be adapted to unsupervised domain adaptation problems in other fields, such as image classification and image segmentation. Further, the proposed model is extended with a novel conditional weighting mechanism that aligns the class-conditional probabilities of the domains and reduces the effect of irrelevant source domains, a critical issue in multi-source domain adaptation algorithms. Experimental verification shows the superiority of the proposed framework over state-of-the-art multi-source domain adaptation models.
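The feature-level alignment described in this abstract can be sketched as follows. The abstract does not name a specific discrepancy measure, so this minimal sketch assumes a common choice, the (biased) squared Maximum Mean Discrepancy with an RBF kernel, summed over source domains; all function names and parameters are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.1):
    """Biased estimate of squared Maximum Mean Discrepancy between
    sample sets X and Y under an RBF kernel -- one common choice of
    feature-level distribution-discrepancy measure."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then RBF kernel values.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def multi_source_discrepancy(sources, target, gamma=0.1):
    # Sum of discrepancies between the target features and each source
    # domain's features -- the quantity a feature-level alignment loss
    # would drive toward zero during training.
    return sum(rbf_mmd2(S, target, gamma) for S in sources)

rng = np.random.default_rng(0)
src1 = rng.normal(0.0, 1.0, (64, 8))   # source domain close to the target
src2 = rng.normal(2.0, 1.0, (64, 8))   # shifted source domain
tgt  = rng.normal(0.0, 1.0, (64, 8))
```

In a full framework this quantity would be minimized jointly with the source-domain classification losses; the conditional weighting mechanism mentioned above would additionally down-weight the terms coming from irrelevant sources.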

    Analysis of explainable artificial intelligence on time series data

    In recent years, interest in Artificial Intelligence (AI) has grown significantly, contributing to the emergence of new research directions such as Explainable Artificial Intelligence (XAI). The ability to apply AI approaches to various problems in many industrial areas has been achieved mainly by increasing model complexity and using black-box models that lack transparency. In particular, deep neural networks excel at problems that are too difficult for classic machine learning methods, but it is often a major challenge to answer why a neural network made one decision and not another. The answer to this question is essential to ensure that ML models are reliable and can be held accountable in the decision-making process. Over a relatively short period, a plethora of methods tackling this problem have been proposed, but mainly in computer vision and natural language processing; few publications so far address explainability for time series. This thesis provides a comprehensive literature review of research on XAI for time series data, and achieves and evaluates local explainability for a model in a time series forecasting problem. The solution frames the forecasting task as Remaining Useful Life (RUL) prognosis for turbofan engines. We trained two Bi-LSTM models, with and without an attention layer, on the C-MAPSS data set. Local explainability was achieved using two post-hoc explainability techniques, SHAP and LIME, as well as by extracting and interpreting the attention weights. The resulting explanations were compared and evaluated using an evaluation metric that incorporates the temporal dimension of the data. The obtained results indicate that the LIME technique outperforms the other methods in terms of the fidelity of local explanations.
Moreover, we demonstrated the potential of attention mechanisms to make a deep learning model for time series forecasting more interpretable. The approach presented in this work can be readily applied to any time series forecasting or classification scenario in which model interpretability and evaluation of the generated explanations are desired.
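The post-hoc, perturbation-based idea behind LIME on time series can be sketched as follows. This is a generic illustration, not the thesis's setup: the segmentation scheme, zero-baseline masking, proximity weighting, and the toy stand-in model are all assumptions, whereas the thesis applies LIME to trained Bi-LSTM RUL models on C-MAPSS.

```python
import numpy as np

def lime_timeseries(model, x, n_segments=10, n_samples=500, seed=0):
    """LIME-style local explanation for a univariate time-series model:
    mask contiguous segments to a zero baseline, observe how the
    prediction changes, and fit a weighted linear surrogate whose
    coefficients score each segment's importance."""
    rng = np.random.default_rng(seed)
    T = len(x)
    edges = np.linspace(0, T, n_segments + 1).astype(int)
    Z = rng.integers(0, 2, (n_samples, n_segments))  # 1 = keep segment
    preds = np.empty(n_samples)
    for i, z in enumerate(Z):
        xp = x.copy()
        for s in range(n_segments):
            if z[s] == 0:
                xp[edges[s]:edges[s + 1]] = 0.0  # masked-out segment
        preds[i] = model(xp)
    # Proximity weights: perturbations closer to the original count more.
    sw = np.sqrt(np.exp(-(n_segments - Z.sum(1)) / n_segments))
    A = np.c_[Z, np.ones(n_samples)] * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, preds * sw, rcond=None)
    return coef[:-1]  # per-segment importance scores

# Toy stand-in for the trained forecaster: only the last 25 steps matter.
model = lambda x: x[-25:].mean()
x = np.sin(np.linspace(0, 6, 100)) + 1.5
scores = lime_timeseries(model, x)
```

On this toy model the surrogate correctly assigns near-zero importance to the early segments and positive importance to the final ones, which is the kind of temporal attribution the thesis evaluates for fidelity.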

    Predictive Maintenance of Wind Generators based on AI Techniques

    As global warming slowly becomes a dangerous reality, governments and private institutions are introducing policies to mitigate it. These policies have led to the development and deployment of Renewable Energy Sources (RESs), which introduce new challenges, among them minimizing downtime and the Levelised Cost of Energy (LCOE) by optimizing the maintenance strategy, where early detection of incipient faults is of significant importance. This is the focus of this thesis. While there are several maintenance approaches, predictive maintenance can use SCADA readings from large-scale power plants to detect early signs of failure, which are characterized by abnormal patterns in the measurements. Several approaches exist to detect these patterns, such as model-based or hybrid techniques, but they require detailed knowledge of the analyzed system. As SCADA systems collect large amounts of data, machine learning techniques can instead be used to detect the underlying failure patterns and notify customers of the abnormal behaviour. In this work, a novel framework based on machine learning techniques for fault prediction of wind farm generators is developed for an actual customer. The proposed fault prognosis methodology addresses data limitations such as class imbalance and missing data, performs statistical stationarity tests on the time series, selects the features with the most predictive power, and applies machine learning models to predict a fault with a 1-hour horizon. The proposed techniques are tested and validated using historical data from a wind farm in Summerside, Prince Edward Island (PEI), Canada, and the models are evaluated with appropriate metrics. The results demonstrate the ability of the proposed methodology to predict wind generator failures and its viability for optimizing preventive maintenance strategies.
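Two of the preprocessing steps described above, framing the 1-hour-ahead prediction target and handling class imbalance, can be sketched as follows. The 10-minute SCADA sampling interval and the use of naive random oversampling are assumptions for illustration; the thesis does not specify these details.

```python
import numpy as np

def make_horizon_labels(fault_flags, horizon_steps):
    """Turn a per-timestep fault indicator into a 'fault occurs within
    the next `horizon_steps` steps' target -- one way to frame an
    N-step-ahead fault prediction task."""
    T = len(fault_flags)
    y = np.zeros(T, dtype=int)
    for t in range(T):
        # initial=0 handles the empty slice at the end of the series.
        y[t] = fault_flags[t + 1 : t + 1 + horizon_steps].max(initial=0)
    return y

def oversample_minority(X, y, seed=0):
    """Naive random oversampling of the minority class, a simple remedy
    for the class imbalance typical of fault-prediction data."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = rng.choice(minority, len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

# With 10-minute SCADA samples, a 1-hour horizon is 6 steps.
flags = np.zeros(100, dtype=int)
flags[50] = 1                      # a single recorded fault event
y = make_horizon_labels(flags, horizon_steps=6)
X = np.random.default_rng(0).normal(size=(100, 3))
Xb, yb = oversample_minority(X, y)
```

After these steps a standard classifier can be trained on the balanced set; stationarity testing and feature selection would precede this in the described pipeline.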

    An improved bidirectional gate recurrent unit combined with smoothing filter algorithm for state of energy estimation of lithium-ion batteries.

    Accurate estimation of the state of energy (SOE) is key to the rational energy distribution of lithium-ion battery based energy storage equipment. This paper proposes an improved bidirectional gated recurrent unit (BiGRU) combined with a smooth variable structure filtering algorithm based on a time-varying bounded layer. First, given the strongly temporal nature of the estimated parameters, a BiGRU neural network structure is constructed to further strengthen the influence of past and future information on the current estimates. Then, building on traditional variable structure filtering, a time-varying bounded layer smoothing mechanism with saturation restriction (TS-VBL) is proposed to smooth the output of the BiGRU and obtain a more accurate estimate. Finally, tests were conducted under 15 °C hybrid pulse power characterization (HPPC) and 35 °C Beijing bus dynamic stress test (BBDST) conditions. Compared with other algorithms, the BiGRU-TSVSF algorithm achieves the smallest maximum estimation errors, 0.00495 and 0.00722, respectively. The experimental results show that the algorithm has high precision and robustness and is of great value to research on energy storage equipment.
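The clipped-correction idea behind a saturation-restricted, time-varying bounded layer can be illustrated with a minimal sketch. The actual TS-VBL mechanism is more elaborate than this; the first-order update, the exponential bound schedule, and all parameter values below are assumptions chosen only to show how a shrinking saturation bound smooths a noisy network output.

```python
import numpy as np

def saturated_smooth(raw, alpha=0.3, b0=0.05, decay=0.99):
    """Sketch of saturation-bounded smoothing: each corrected estimate
    moves toward the raw network output, but the per-step correction is
    clipped to a time-varying bound b_t = b0 * decay**t."""
    out = np.empty_like(raw)
    est = raw[0]
    for t, r in enumerate(raw):
        bound = b0 * decay**t                             # shrinking boundary layer
        corr = np.clip(alpha * (r - est), -bound, bound)  # saturated update
        est = est + corr
        out[t] = est
    return out

rng = np.random.default_rng(1)
true_soe = np.linspace(1.0, 0.8, 200)        # slowly discharging battery
raw = true_soe + rng.normal(0, 0.01, 200)    # noisy BiGRU-like SOE output
smoothed = saturated_smooth(raw)
```

The saturation bound guarantees no single step can jump by more than the boundary-layer width, while the filter still tracks the slow discharge trend.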

    Contribution to intelligent monitoring and failure prognostics of industrial systems.

    This thesis was conducted within the framework of the SMART project, funded by the European program Interreg POCTEFA. The project aims to help small and medium-sized companies increase their competitiveness in the context of Industry 4.0 by developing intelligent monitoring tools for autonomous system health management. To this end, we propose efficient data-driven algorithms for prognostics and health management of industrial systems. The first contribution is the construction of a new robust health indicator that clearly separates different fault states of a wide range of systems' critical components. This health indicator remains effective when considering multiple monitoring parameters under various operating conditions. The second contribution addresses the challenges posed by online diagnostics of unknown fault types in dynamic systems, particularly detecting, localizing, and identifying the origin of robot axis drifts that have not been learned before. For this purpose, a new online diagnostics methodology based on fusing information from direct and indirect monitoring techniques is proposed: direct monitoring is used to instantaneously update the indirect monitoring model and diagnose the origin of new faults online. Finally, the last contribution deals with failure prognostics in a controlled industrial process, where controller actions can degrade long-term predictions. To remedy this problem, we developed a new adaptive prognostics approach based on combining multiple machine learning predictions over different time horizons. The proposed approach captures the long-term degradation trend while accounting for the short-term state changes caused by controller activity, improving the accuracy of the prognostics results.
The performance of the approaches proposed in this thesis was investigated on different real case studies representing the demonstrators of the thesis partners.
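One simple way to combine predictions over different time horizons, as the last contribution describes, is a horizon-dependent blend. The logistic switching rule, the switch point, and the toy values below are illustrative assumptions only; the thesis's actual combination scheme is not specified in this abstract.

```python
import numpy as np

def blended_forecast(short_pred, long_pred, horizons, h_switch=10.0):
    """Horizon-dependent blending: short-horizon predictions lean on a
    model that reacts to controller-induced state changes, long-horizon
    predictions on a model that captures the slow degradation trend."""
    h = np.asarray(horizons, dtype=float)
    w = 1.0 / (1.0 + np.exp((h - h_switch) / 2.0))  # weight of short-term model
    return w * short_pred + (1.0 - w) * long_pred

horizons = np.array([1.0, 5.0, 20.0, 50.0])           # prediction horizons (steps)
short_pred = np.array([0.95, 0.90, 0.60, 0.20])       # tracks recent state changes
long_pred  = np.array([0.97, 0.93, 0.70, 0.35])       # tracks the degradation trend
blend = blended_forecast(short_pred, long_pred, horizons)
```

Near-term outputs follow the short-term model almost exactly, while far-horizon outputs converge to the trend model, which is the qualitative behaviour the adaptive approach aims for.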

    Semantics for vision-and-language understanding

    Recent advancements in Artificial Intelligence have led to several breakthroughs in many heterogeneous scientific fields, such as the prediction of protein structures or self-driving cars. These results are obtained by means of Machine Learning techniques, which make it possible to automatically learn, from the available annotated examples, a mathematical model capable of solving the task. One of its sub-fields, Deep Learning, brought further improvements by also computing an informative and non-redundant representation for each example through the same learning process. To successfully solve the task under analysis, the model needs to overcome the generalization gap: it must work well both on the training data and on examples drawn from the same distribution but never observed at training time. Several heuristics are often used to close this gap, such as introducing inductive biases when modeling the data or using regularization techniques; a popular way, however, is to collect and annotate more examples in the hope that they cover previously unobserved cases. Indeed, recent state-of-the-art solutions use hundreds of millions or even billions of annotated examples, and the underlying trend seems to imply that collecting and annotating ever more examples is the prominent way to overcome the generalization gap. However, in many fields, e.g. medicine, it is difficult to collect such a large number of examples, and producing high-quality annotations is even more arduous and costly. During my Ph.D. and in this thesis, I designed and proposed several solutions which address the generalization gap in three different domains by leveraging semantic aspects of the available data.
In particular, the first part of the thesis presents techniques which create new annotations for the data under analysis: these include data augmentation techniques, used to compute variations of the annotations by means of semantics-preserving transformations, and transfer learning, used in this thesis to automatically generate textual descriptions for a set of images. In the second part of the thesis, the gap is reduced by customizing the training objective based on the semantics of the annotations. Through these customizations, a problem is shifted from the commonly used single-task setting to a multi-task learning setting by designing an additional task, and two variations of a standard loss function are proposed by introducing semantic knowledge into the training process.
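The general shape of the training-objective customization described above, a standard loss on the main task plus a weighted loss on a semantics-derived auxiliary task, can be sketched as follows. The auxiliary task, the cross-entropy choice, and the weight lam are illustrative placeholders; the thesis's actual loss variations are not detailed in this abstract.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def multi_task_loss(main_logits, main_labels, aux_logits, aux_labels, lam=0.5):
    """Multi-task objective: the usual loss on the main task plus a
    weighted loss on an auxiliary task derived from the semantics of
    the annotations."""
    return (cross_entropy(main_logits, main_labels)
            + lam * cross_entropy(aux_logits, aux_labels))

# Toy batch: two examples, both tasks predicted confidently and correctly.
main_logits = np.array([[5.0, 0.0], [0.0, 5.0]])
aux_logits = np.array([[4.0, 0.0], [0.0, 4.0]])
labels = np.array([0, 1])
loss = multi_task_loss(main_logits, labels, aux_logits, labels)
```

Both heads would share the backbone representation, so gradients from the semantic auxiliary task regularize the features the main task learns.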