Explainable Predictive Maintenance
Explainable Artificial Intelligence (XAI) fills the role of a critical
interface fostering interactions between sophisticated intelligent systems and
diverse individuals, including data scientists, domain experts, end-users, and
more. It aids in deciphering the intricate internal mechanisms of ``black box''
Machine Learning (ML), rendering the reasons behind their decisions more
understandable. However, current research in XAI primarily focuses on two
aspects: ways to facilitate user trust, or ways to debug and refine the ML model.
The majority of it falls short of recognising the diverse types of explanations
needed in broader contexts, as different users and varied application areas
necessitate solutions tailored to their specific needs.
One such domain is Predictive Maintenance (PdM), an exploding area of
research under the Industry 4.0 \& 5.0 umbrella. This position paper highlights
the gap between existing XAI methodologies and the specific requirements for
explanations within industrial applications, particularly the Predictive
Maintenance field. Despite explainability's crucial role, this subject remains
a relatively under-explored area, making this paper a pioneering attempt to
bring relevant challenges to the research community's attention. We provide an
overview of predictive maintenance tasks and accentuate the need and varying
purposes for corresponding explanations. We then list and describe XAI
techniques commonly employed in the literature, discussing their suitability
for PdM tasks. Finally, to make the ideas and claims more concrete, we
demonstrate XAI applied in four specific industrial use cases: commercial
vehicles, metro trains, steel plants, and wind farms, spotlighting areas
requiring further research.
Comment: 51 pages, 9 figures
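Model-agnostic attribution is among the XAI techniques commonly applied in such settings. As a minimal illustration (synthetic data and a stand-in model, not tied to any of the paper's use cases), permutation importance measures how much a model's accuracy drops when the link between one feature and the label is broken:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictive-maintenance data: two sensors, only the first drives failures.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 1.0).astype(int)          # "failure" label depends on sensor 0 only

def model(X):
    # Stand-in for a trained black-box classifier.
    return (X[:, 0] > 1.0).astype(int)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base = np.mean(model(X) == y)        # baseline accuracy
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])        # break the link between feature j and y
            drops[j] += base - np.mean(model(Xp) == y)
    return drops / n_repeats

imp = permutation_importance(model, X, y)
print(imp)  # sensor 0 gets a large importance, sensor 1 gets zero
```

Because the stand-in model ignores the second sensor, shuffling it changes nothing; shuffling the first sensor destroys the model's accuracy, which is exactly the signal a maintenance engineer would use to trace a prediction back to a sensor.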
A Review of Kernel Methods for Feature Extraction in Nonlinear Process Monitoring
Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work has identified 12 major issues surrounding the use of kernel methods for nonlinear feature extraction. Each issue is discussed as to why it is important and how it was addressed through the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook into the future of kernel-based process monitoring, which can hopefully instigate more advanced yet practical solutions in the process industries.
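As a concrete illustration of the kind of method the review covers, here is a minimal kernel PCA sketch in plain NumPy (RBF kernel, feature-space centring, eigendecomposition); the kernel choice and parameter values are illustrative only:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared Euclidean distances -> RBF Gram matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]  # keep the largest components
    vals, vecs = vals[idx], vecs[:, idx]
    # Scores: projections of the training points onto the principal axes.
    return vecs * np.sqrt(np.maximum(vals, 0))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (100, 2)
```

The nonlinear feature extraction the review surveys follows this pattern: all computation happens on the Gram matrix, so the feature map itself is never constructed explicitly.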
Data mining for fault diagnosis in steel making process under industry 4.0
The concept of Industry 4.0 (I4.0) refers to the intelligent networking of machines and
processes in the industry, which is enabled by cyber-physical systems (CPS) - a
technology that utilises embedded networked systems to achieve intelligent control.
CPS enable full traceability of production processes as well as comprehensive data
assignments in real-time. Through real-time communication and coordination between
"manufacturing things", production systems, in the form of Cyber-Physical Production
Systems (CPPS), can make intelligent decisions. Meanwhile, with the advent of I4.0,
it is possible to collect heterogeneous manufacturing data across various facets for
fault diagnosis using Industrial Internet of Things (IIoT) techniques. Under this
data-rich environment, the ability to diagnose and predict production failures provides
manufacturing companies with a strategic advantage by reducing the number of
unplanned production outages. This advantage is particularly desired for steel-making
industries. Because steel-making is a consecutive and compact manufacturing
process, downtime is a major concern for steel-making companies, since most
operations must be conducted within a certain temperature range. In addition, steel-making consists of
complex processes that involve physical, chemical, and mechanical elements,
emphasising the necessity for data-driven approaches to handle high-dimensionality
problems.
For a modern steel-making plant, various measurement devices are deployed
throughout this manufacturing process with the advancement of I4.0 technologies,
which facilitate data acquisition and storage. However, even though data-driven
approaches are showing merits and being widely applied in the manufacturing context,
how to build a deep learning model for fault prediction in the steel-making process
considering multiple contributing facets and its temporal characteristic has not been
investigated. Additionally, apart from the multitudinous data, it is also worthwhile to
study how to represent and utilise the vast and scattered distributed domain knowledge
along the steel-making process for fault modelling. Moreover, the state of the art does not
address how such accumulated domain knowledge and its semantics can be harnessed
to facilitate the fusion of multi-sourced data in steel manufacturing. In this case, the
purpose of this thesis is to pave the way for fault diagnosis in steel-making processes
using data mining under I4.0.
This research is structured according to four themes. Firstly, in contrast to
conventional data-driven research, which focuses only on modelling numerical
production data, a framework for data mining for fault diagnosis in steel-making
based on multi-sourced data and knowledge is proposed. The framework comprises
five layers: multi-sourced data and knowledge acquisition; data and knowledge
processing; Knowledge Graph (KG) construction and graphical data transformation;
KG-aided modelling for fault diagnosis; and decision support for steel manufacturing.
Secondly, another purpose of this thesis is to propose a predictive, data-driven
approach to model severe faults in the steel-making process, where the faults
usually have multi-faceted causes. Specifically, strip breakage in cold rolling is
selected as the modelling target, since it is a typical production failure with serious
consequences and multitudinous contributing factors. In actual steel-making
practice, if such a failure can be modelled at a micro level with an adequate
prediction window, a planned stop can be taken in advance instead of a passive
fast stop, which often results in severe damage to equipment. In this case, a
multi-faceted modelling approach with a sliding window strategy is proposed. First,
historical multivariate time-series data of a cold rolling process were extracted in a
run-to-failure manner, and a sliding window strategy was adopted for data annotation.
Second, breakage-centric features were identified from physics-based approaches,
empirical knowledge and data-driven features. Finally, these features were used as
inputs for strip breakage modelling using a Recurrent Neural Network (RNN).
Experimental results have demonstrated the merits of the proposed approach.
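The run-to-failure windowing described above can be sketched as follows; the window length, horizon, and labelling rule here are illustrative assumptions, not the thesis's exact annotation scheme:

```python
import numpy as np

def sliding_windows(series, window, horizon, failure_idx):
    """Cut a run-to-failure multivariate series into fixed-length windows.

    A window is labelled 1 ("breakage imminent") when the failure occurs
    within `horizon` steps of the window's end, else 0. Names and the
    labelling rule are illustrative, not the thesis's exact scheme.
    """
    X, y = [], []
    for start in range(len(series) - window + 1):
        end = start + window
        X.append(series[start:end])
        y.append(1 if failure_idx - end <= horizon else 0)
    return np.array(X), np.array(y)

# 200 time steps, 4 sensors, failure at the last step (run-to-failure).
series = np.random.default_rng(2).normal(size=(200, 4))
X, y = sliding_windows(series, window=20, horizon=10, failure_idx=199)
print(X.shape, int(y.sum()))  # (181, 20, 4) 12
```

Each labelled window (here 20 steps by 4 sensors) is the kind of input an RNN consumes directly, with only the windows closest to the failure marked positive.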
Thirdly, among the heterogeneous data surrounding multi-faceted concepts in steel-making, a significant amount of data consists of rich semantic information, such as
technical documents and production logs generated through the process. Also, there
exists vast domain knowledge regarding the production failures in steel-making, which
has a long history. In this context, proper semantic technologies are desired for the
utilisation of semantic data and domain knowledge in steel-making. In recent studies,
a Knowledge Graph (KG) displays a powerful expressive ability and a high degree of
modelling flexibility, making it a promising semantic network. However, building a
reliable KG is usually time-consuming and labour-intensive, and the KG commonly
needs to be refined or completed before use in industrial scenarios. In this case, a
fault-centric KG construction approach is proposed, based on hierarchy structure
refinement and relation completion. Firstly, ontology design based on hierarchy
structure refinement is conducted to improve reliability. Then, the missing relations
between each pair of entities are inferred from existing knowledge in the KG,
with the aim of increasing the number of edges to complete and refine the KG. Lastly,
the KG is constructed by importing data into the ontology. An illustrative case study on
strip breakage is conducted for validation.
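A toy sketch of the idea, with triples and a deliberately simple transitivity rule standing in for the thesis's relation-completion step (the entity and relation names are invented for illustration):

```python
# Toy fault-centric knowledge graph as (head, relation, tail) triples.
triples = {
    ("strip_breakage", "occurs_in", "cold_rolling"),
    ("cold_rolling", "part_of", "steel_making"),
    ("high_tension", "causes", "strip_breakage"),
    ("roll_wear", "causes", "high_tension"),
}

def complete_causes(triples):
    """Add missing edges via a transitivity rule on 'causes'.

    This toy rule (A causes B, B causes C => A causes C) stands in for the
    relation-completion step; the real approach infers missing relations
    from existing knowledge in the KG.
    """
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        causes = {(h, t) for h, r, t in inferred if r == "causes"}
        for a, b in causes:
            for b2, c in causes:
                if b == b2 and (a, "causes", c) not in inferred:
                    inferred.add((a, "causes", c))
                    changed = True
    return inferred

kg = complete_causes(triples)
print(("roll_wear", "causes", "strip_breakage") in kg)  # True
```

The inferred edge connects a root cause to the fault directly, which is precisely the kind of added edge that makes a fault-centric KG more useful for diagnosis.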
Finally, multi-faceted modelling is often conducted based on multi-sourced data
covering indispensable aspects, and information fusion is typically applied to cope
with the high dimensionality and data heterogeneity. Besides its ability for knowledge
management and sharing, a KG can aggregate the relationships of features from multiple
aspects through semantic associations, which can be exploited to facilitate information
fusion for multi-faceted modelling with consideration of intra-facet relationships.
In this case, process data are transformed into a stack of temporal graphs under the
fault-centric KG backbone. Then, a Graph Convolutional Network (GCN) model is applied
to extract temporal and attribute-correlation features from the graphs, with a Temporal
Convolution Network (TCN) used to conduct conceptual modelling on these features.
Experimental results derived using the proposed GCN-TCN approach reveal the
impact of the proposed KG-aided fusion approach.
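The GCN-plus-TCN pipeline can be sketched at toy scale: one graph-convolution layer per time step, followed by a causal temporal convolution over the resulting embeddings. The graph, sizes, and kernel below are illustrative only, and real GCN/TCN layers are trained rather than fixed:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def temporal_conv(H_seq, kernel):
    """Causal 1-D convolution over the time axis of per-step graph embeddings."""
    T, k = len(H_seq), len(kernel)
    return [sum(kernel[i] * H_seq[t - i] for i in range(k))
            for t in range(k - 1, T)]

rng = np.random.default_rng(3)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-sensor graph
W = rng.normal(size=(4, 8))                                   # untrained weights
# 10 time steps: one feature matrix (3 nodes x 4 attributes) per step.
H_seq = [gcn_layer(A, rng.normal(size=(3, 4)), W) for _ in range(10)]
Z = temporal_conv(H_seq, kernel=[0.5, 0.3, 0.2])
print(len(Z), Z[0].shape)  # 8 outputs of shape (3, 8)
```

The GCN mixes information along the KG-derived edges at each time step; the temporal convolution then mixes consecutive steps, which is the division of labour between the two networks described above.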
This thesis aims to research data mining in steel-making processes based on
multi-sourced data and scattered distributed domain knowledge, providing a
feasibility study for achieving Industry 4.0 in steel-making, specifically in support
of improving quality and reducing costs due to production failures.
Recent Advances and Applications of Machine Learning in Metal Forming Processes
Machine learning (ML) technologies are emerging in Mechanical Engineering, driven by the increasing availability of datasets coupled with the exponential growth in computer performance. In fact, there has been growing interest in evaluating the capabilities of ML algorithms to approach topics related to metal forming processes, such as: classification, detection and prediction of forming defects; material parameter identification; material modelling; process classification and selection; and process design and optimization. The purpose of this Special Issue is to disseminate state-of-the-art ML applications in metal forming processes, covering 10 papers about the above-mentioned and related topics.
NASA SBIR abstracts of 1991 phase 1 projects
The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes provide additional information about the SBIR program and permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.
Key Performance Monitoring and Diagnosis in Industrial Automation Processes
With ever-increasing global competition, monitoring and diagnosis methods based on key performance indicators (KPIs) are increasingly receiving attention in the process industry. Primarily due to the scale and complexity of modern automation processes, the application of signal processing and model-based monitoring methods is too costly and time-consuming. On the other hand, due to the availability of cheap measurement and storage systems, a large amount of process and KPI data is obtained. As a result, developing data-driven KPI monitoring methods has become an area of great interest in both academia and industry. Therefore, this thesis is focused on the data-driven design of systematic KPI monitoring and diagnosis systems for industrial automation processes.
Depending on the relationship between the low-level process variables and the high-level KPIs, industrial processes can be classified into three groups:
1. Static processes (SPs) are those described by algebraic equations;
2. Lumped-parameter processes (LPPs) are those described by ordinary differential equations; and
3. Distributed-parameter processes (DPPs) are those described by partial differential equations.
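Schematically, with generic symbols u (inputs), x (states) and y (KPIs) rather than the thesis's own notation, the three classes correspond to:

```latex
% Static process (SP): algebraic map from inputs to KPIs
y = f(u)

% Lumped-parameter process (LPP): ordinary differential equations
\dot{x}(t) = f\big(x(t), u(t)\big), \qquad y(t) = g\big(x(t), u(t)\big)

% Distributed-parameter process (DPP): partial differential equations
% in space z and time t
\frac{\partial x(z,t)}{\partial t}
  = f\!\left(x,\; \frac{\partial x}{\partial z},\;
             \frac{\partial^2 x}{\partial z^2},\; u\right)
```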
For each of these groups of processes, analytical redundancy plays a very important role when developing efficient process monitoring tools. For SPs, multivariate-statistics-based methods have been used. However, their applicability is restricted by high mathematical complexity, high design costs and low diagnostic performance. For this reason, an alternative, improved method has been proposed in this thesis. For LPPs, complex model-based methods have been implemented. Therefore, to reduce the design costs required for monitoring LPPs, efficient subspace-identification-based approaches are presented. Finally, since there are very few available model-based methods for DPPs, this thesis presents novel approaches for KPI monitoring in DPPs. For all these methods, the design procedures are based on process I/O data and do not require advanced mathematical knowledge.
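A textbook sketch of the multivariate-statistics route for SPs (not the thesis's improved method): fit a PCA model on normal operating data, then monitor Hotelling's T² in the model subspace and the squared prediction error (SPE) in the residual subspace:

```python
import numpy as np

def fit_pca_monitor(X, n_comp):
    """Fit a PCA fault-detection model on normal operating data."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_comp].T                        # loading vectors
    lam = (S[:n_comp] ** 2) / (len(X) - 1)   # retained component variances
    return mu, P, lam

def t2_spe(x, mu, P, lam):
    """Hotelling's T^2 (within the model) and SPE (residual) statistics."""
    xc = x - mu
    t = P.T @ xc
    t2 = np.sum(t**2 / lam)
    resid = xc - P @ t
    return t2, resid @ resid

rng = np.random.default_rng(4)
base = rng.normal(size=(300, 1))
# Two correlated process variables plus one small-noise variable.
X = np.hstack([base, 0.8 * base, rng.normal(scale=0.1, size=(300, 1))])
mu, P, lam = fit_pca_monitor(X, n_comp=1)

faulty = X[0] + np.array([0.0, 0.0, 3.0])    # fault in a residual direction
t2_n, spe_n = t2_spe(X[0], mu, P, lam)
t2_f, spe_f = t2_spe(faulty, mu, P, lam)
print(spe_f > spe_n)  # True: the fault inflates the residual statistic
```

In practice both statistics are compared against control limits estimated from the normal data; a fault that breaks the learned correlation structure shows up in SPE, as here, while a fault within the model subspace shows up in T².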
After performance degradation has been detected, it is important to identify the root causes to prevent further losses. In industrial processes, performance degradation is more often caused by multiplicative faults. In this work, a new data-driven multiplicative fault diagnosis approach is proposed. This approach aims at assisting the maintenance personnel by narrowing down the investigation scope. As a result, overall equipment effectiveness (OEE) can be significantly improved.
To show the effectiveness of the proposed approaches, case studies on the Tennessee Eastman benchmark process, the continuous stirred tank heater benchmark and the simulated drying section of a paper machine have been performed. The proposed methods worked successfully with these processes.
Cognitive Control Systems in Steel Processing Lines for Minimised Energy Consumption and Higher Product Quality (Cognitive Control) : Final Report
The aim of Cognitive Control was to create cognitive automation systems with the capabilities of automatic control performance monitoring (CPM), self-detection and automatic diagnosis of faults (sensors, actuators, controllers), and self-adaptation in control system environments, in order to optimise product quality and minimise energy consumption in steel processing during the whole life cycle. In this project, several software tools for online CPM, monitoring of energy efficiency, diagnosis of poor-performance root causes, and control re-tuning for univariable and multivariable, linear and nonlinear processes were developed. The software tools provided a Graphical User Interface (GUI) for accessing process data. The implemented methodologies were subsequently published as conference and journal papers. The methods were tested at hot strip mills, annealing furnaces and galvanizing lines.