A Novel Data-Driven Fault Tree Methodology for Fault Diagnosis and Prognosis
ABSTRACT: The thesis develops a new methodology for the diagnosis and prognosis of faults in complex systems, called Interpretable Logic Tree Analysis (ILTA), which combines knowledge extraction techniques from knowledge discovery in databases (KDD) with fault tree analysis (FTA). The methodology combines the advantages of both techniques to address the problem of fault diagnosis and prognosis. Although fault trees provide interpretable models for determining the possible causes of a fault, their use for fault diagnosis in industrial systems is limited by the need for expert knowledge to describe the cause-and-effect relationships between internal system processes. It is therefore worthwhile to exploit the analytical power of fault trees while building them from explicit, unbiased knowledge of fault causality extracted directly from databases. The ILTA methodology thus works analogously to the logic of the fault tree analysis (FTA) model, but with minimal expert involvement. This modeling approach matches the logic experts use to represent the hierarchical structure of faults in a complex system.
The ILTA methodology is applied to failure risk management through two interpretable advanced tree models: a multi-level tree (MILTA) and an interpretable tree over time (ITCA). The MILTA model is designed for fault diagnosis in complex systems. It can decompose a complex fault and graphically model its causal structure as a multi-level tree. As a result, an expert can visualize the influence of the hierarchical cause-and-effect relationships leading to the main failure. In addition, quantifying these causes by assigning probabilities helps to understand their contribution to the occurrence of system failure. The ITCA model is designed for failure prognosis in complex systems. Based on a partitioning of the data over time, the ITCA model captures the effect of system aging through the evolution of the fault causality structure. Thus, it describes the causal changes resulting from deterioration and aging over the life of the system.
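The quantification step described in the abstract, assigning probabilities to causes and propagating them up the tree, follows classical fault tree probability propagation. A minimal sketch of that propagation, assuming independent basic events; the gate structure and probability values are illustrative, not taken from the thesis:

```python
# Minimal fault tree probability propagation, assuming independent basic events.
# Gate semantics: OR -> 1 - prod(1 - p_i), AND -> prod(p_i).

def or_gate(probs):
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    p = 1.0
    for q in probs:
        p *= q
    return p

def evaluate(node, basic):
    """Recursively evaluate a tree of ('OR'|'AND', [children]) or leaf names."""
    if isinstance(node, str):          # basic event leaf
        return basic[node]
    gate, children = node
    child_probs = [evaluate(c, basic) for c in children]
    return or_gate(child_probs) if gate == "OR" else and_gate(child_probs)

# Illustrative two-level tree: the top failure occurs if (A AND B) OR C.
basic = {"A": 0.1, "B": 0.2, "C": 0.05}
tree = ("OR", [("AND", ["A", "B"]), "C"])
print(evaluate(tree, basic))  # 1 - (1 - 0.02) * (1 - 0.05) = 0.069
```

The recursive evaluation mirrors the multi-level structure the MILTA model exposes: each intermediate node's probability summarizes the contribution of the causes below it.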
Recent advances in the theory and practice of logical analysis of data
Logical Analysis of Data (LAD) is a data analysis methodology introduced by Peter L. Hammer in 1986. LAD distinguishes itself from other classification and machine learning methods in that it analyzes a significant subset of combinations of variables to describe the positive or negative nature of an observation, and uses combinatorial techniques to extract models defined in terms of patterns. In recent years, the methodology has advanced tremendously through numerous theoretical developments and practical applications. In the present paper, we review the methodology and its recent advances, describe novel applications in engineering, finance, health care, and algorithmic techniques for some stochastic optimization problems, and provide a comparative description of LAD against well-known classification methods.
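The core LAD notion of a pattern, a conjunction of binary conditions that covers positive observations but no negative ones, can be sketched as follows; the data and feature names are illustrative, not from the paper:

```python
# Sketch of pattern checking in Logical Analysis of Data (LAD) on binarized data.
# A "positive pattern" is a conjunction of literals that covers at least one
# positive observation and no negative observation.

def covers(pattern, obs):
    """pattern: dict feature -> required 0/1 value; obs: dict feature -> 0/1."""
    return all(obs[f] == v for f, v in pattern.items())

def is_positive_pattern(pattern, positives, negatives):
    return (any(covers(pattern, o) for o in positives)
            and not any(covers(pattern, o) for o in negatives))

positives = [{"x1": 1, "x2": 0, "x3": 1}, {"x1": 1, "x2": 1, "x3": 1}]
negatives = [{"x1": 0, "x2": 0, "x3": 1}, {"x1": 1, "x2": 0, "x3": 0}]

# The conjunction (x1 = 1 AND x3 = 1) covers both positives and no negative,
# so it is a positive pattern; the single literal (x3 = 1) is not, because it
# also covers the first negative observation.
print(is_positive_pattern({"x1": 1, "x3": 1}, positives, negatives))  # True
print(is_positive_pattern({"x3": 1}, positives, negatives))           # False
```

A full LAD model combines many such patterns into a discriminant; the combinatorial search for short patterns with good coverage is where the methodology's optimization machinery comes in.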
Fault Detection and Isolation in Industrial Processes Using Deep Learning Approaches
Automated fault detection is an important part of a quality control system. It has the potential to increase the overall quality of monitored products and processes. The fault detection of automotive instrument cluster systems in computer-based manufacturing assembly lines is currently limited to simple boundary checking. The analysis of more complex non-linear signals is performed manually by trained operators, whose knowledge is used to supervise quality checking and manual detection of faults. In this paper, a novel approach for automated fault detection and isolation based on deep machine learning techniques is presented. The approach was tested on data generated by computer-based manufacturing systems equipped with local and remote sensing devices. The results show that the proposed approach models the different spatial/temporal patterns found in the data. The approach is also able to successfully diagnose and locate multiple classes of faults under real-time working conditions. The proposed method is shown to outperform other established fault detection and isolation methods.
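The "simple boundary checking" baseline that the deep learning approach is compared against can be sketched in a few lines; the band limits and signal values below are illustrative assumptions:

```python
# Sketch of the boundary-checking baseline: flag a fault whenever a signal
# sample leaves its allowed [low, high] band. This catches gross excursions
# but misses the complex non-linear patterns the paper targets.

def boundary_check(signal, low, high):
    """Return indices where the signal violates its [low, high] band."""
    return [i for i, v in enumerate(signal) if v < low or v > high]

signal = [4.9, 5.1, 5.0, 7.3, 5.2, 2.1]        # illustrative sensor readings
faults = boundary_check(signal, low=3.0, high=6.0)
print(faults)  # [3, 5]
```

The limitation is visible immediately: a signal can stay entirely inside its band yet still exhibit a faulty temporal shape, which is exactly the case the learned spatial/temporal models are meant to handle.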
Cloud computing based unsupervised fault diagnosis system in the context of Industry 4.0
ABSTRACT: New online fault monitoring and alarm systems, with the aid of Cyber-Physical Systems (CPS) and Cloud Technology (CT), are examined in this article within the context of Industry 4.0. The data collected from machines is used to implement maintenance strategies based on the diagnosis and prognosis of the machines' performance. As such, the purpose of this paper is to propose a Cloud Computing Platform containing three layers of technologies forming a Cyber-Physical System, which receives unlabelled data to generate an interpreted online decision for the local team, as well as collecting historical data to improve the analyzer. The proposed troubleshooter is tested using unlabelled experimental data sets of rolling element bearings. Finally, current and future applications of Fault Diagnosis Systems and Cloud Technologies in the maintenance field are discussed.
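The unsupervised step, grouping unlabelled condition data into healthy and faulty regimes, can be sketched with a simple one-dimensional two-means clustering of a vibration feature; the feature choice (per-window RMS) and its values are illustrative assumptions, not the paper's actual pipeline:

```python
# Sketch of unsupervised fault grouping: cluster unlabelled per-window RMS
# vibration amplitudes into two groups with a 1-D 2-means procedure, so that
# no fault labels are needed up front.

def two_means(values, iters=20):
    c0, c1 = min(values), max(values)            # extreme points as initial centroids
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(g0) / len(g0)                   # recompute centroids
        c1 = sum(g1) / len(g1)
    return c0, c1

rms = [0.9, 1.1, 1.0, 3.8, 4.1, 1.05, 3.9]       # illustrative RMS amplitudes
low_c, high_c = two_means(rms)
print(round(low_c, 2), round(high_c, 2))  # 1.01 3.93
```

The lower-centroid cluster would be labelled "healthy" and the higher one "suspect" by the analyzer; a deployed system would of course use richer features and a proper clustering library.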
Developing of Ultrasound Experimental Methods using Machine Learning Algorithms for Application of Temperature Monitoring of Nano-Bio-Composites Extrusion
In industry, fiber degradation during biocomposite processing in the extruder is a problem that requires a reliable solution to save the time and money wasted on producing damaged material. This thesis focuses on a practical solution that monitors the temperature change that causes fiber degradation and material damage, so that processing can be stopped when damage occurs. Ultrasound can be used to detect the temperature change inside the material during extrusion. A monitoring approach for the extrusion process was developed using an ultrasound system and machine learning algorithms. A measurement cell was built to form a dataset of ultrasound signals at different temperatures for analysis. Machine learning algorithms were applied through a machine-learning platform to classify the dataset based on temperature. The dataset was classified with 97% accuracy into two categories, representing ultrasound signals above and below the damage temperature (190 °C). This approach could be used in industry to send an alarm or a temperature control signal when material damage is detected. Biocomposites are at the core of automotive industry material research and development. A melt mixing process was used to combine the biocomposite material with multi-walled carbon nanotubes (MWCNTs) to enhance its mechanical and thermal properties. The resulting nano-bio-composite was evaluated with different thermal and mechanical tests relative to the base biocomposite. The developed material showed improved mechanical and thermal properties, indicating high potential for future applications.
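The two-class temperature classification described above can be sketched as a scalar-feature threshold learned from labelled examples; the feature (a time-of-flight-like quantity that shifts with temperature), the threshold rule, and all numbers are illustrative assumptions, not the thesis's actual models:

```python
# Sketch of two-class classification of ultrasound signals around the damage
# temperature: compare a scalar signal feature against a threshold placed at
# the midpoint between the two class means.

def learn_threshold(below, above):
    """Midpoint between the class means of a scalar feature."""
    return (sum(below) / len(below) + sum(above) / len(above)) / 2.0

def classify(feature, threshold):
    return "above_190C" if feature > threshold else "below_190C"

below_190 = [10.1, 10.2, 10.15]   # feature values for safe-temperature signals
above_190 = [10.8, 10.9, 10.85]   # feature values past the damage temperature
t = learn_threshold(below_190, above_190)
print(classify(10.17, t), classify(10.88, t))  # below_190C above_190C
```

An "above_190C" decision is what would trigger the alarm or temperature-control signal mentioned in the abstract; the thesis's 97% accuracy comes from full machine learning classifiers rather than a single-feature rule like this.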
Roadmap on signal processing for next generation measurement systems
Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analysed for information extraction and quantification. The recent advances in artificial intelligence and machine learning are shifting the research attention towards intelligent, data-driven, signal processing. This roadmap presents a critical overview of the state-of-the-art methods and applications aiming to highlight future challenges and research opportunities towards next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impacts of current and future developments per research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.
Alarm flood reduction using multiple data sources
The introduction of distributed control systems in the process industry has increased the number of alarms per operator exponentially. Modern plants present a high level of interconnectivity due to steam recirculation, heat integration and the complex control systems installed in the plant. When there is a disturbance in the plant, it spreads through its material, energy and information connections, affecting the process variables on the path. The alarms associated with these process variables are triggered. The alarm messages may overload the operator in the control room, who will not be able to properly investigate each one of these alarms. This undesired situation is called an "alarm flood". In such situations the operator might not be able to keep the plant within safe operation. The aim of this thesis is to reduce alarm flood periods in process plants. Consequential alarms coming from the same process abnormality are isolated and a causal alarm suggestion is given. The causal alarm in an alarm flood is the alarm associated with the asset originating the disturbance that caused the flood. Multiple information sources are used: an alarm log containing all past alarm messages, process data and a topology model of the plant. The alarm flood reduction is achieved with a combination of alarm log analysis, process data root-cause analysis and connectivity analysis. The research findings are implemented in a software tool that guides the user through the different steps of the method. Finally, the applicability of the method is demonstrated with an industrial case study.
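Detecting the flood periods in the alarm log is the first step of such an analysis. A minimal sketch using the common ISA-18.2 rule of thumb that more than 10 alarms in a 10-minute window constitutes a flood; the log timestamps are illustrative and assumed sorted ascending:

```python
# Sketch of flood-period detection on an alarm log: a flood starts when more
# than `limit` alarms fall inside a `window`-second span (ISA-18.2 rule of
# thumb: >10 alarms per 10 minutes). Timestamps in seconds, sorted ascending.

def flood_windows(timestamps, window=600, limit=10):
    """Return start times of flood periods found in the alarm log."""
    floods = []
    for t in timestamps:
        count = sum(1 for s in timestamps if t <= s < t + window)
        # Record a new flood only if it does not overlap the previous one.
        if count > limit and (not floods or t - floods[-1] >= window):
            floods.append(t)
    return floods

# 12 alarms in a burst starting at t = 1000 s, plus sparse alarms elsewhere.
log = [0, 300] + [1000 + 5 * k for k in range(12)] + [3000]
print(flood_windows(log))  # [1000]
```

Once a flood period is isolated this way, the consequential alarms inside it can be analysed jointly with process data and plant topology to suggest the causal alarm, as the thesis describes.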
Machine Learning for Cyber Physical Systems
This open-access proceedings volume presents new approaches to Machine Learning for Cyber Physical Systems, along with experiences and visions. It contains selected papers from the fifth International Conference ML4CPS – Machine Learning for Cyber Physical Systems, which was held in Berlin on March 12-13, 2020. Cyber-physical systems are characterized by their ability to adapt and to learn: they analyze their environment and, based on observations, learn patterns, correlations and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing and diagnosis. Machine learning is the key technology for these developments.
- …