Recent Advances and Applications of Machine Learning in Metal Forming Processes
Machine learning (ML) technologies are emerging in Mechanical Engineering, driven by the increasing availability of datasets coupled with the exponential growth in computer performance. In fact, there has been growing interest in evaluating the capabilities of ML algorithms to approach topics related to metal forming processes, such as: classification, detection and prediction of forming defects; material parameter identification; material modelling; process classification and selection; and process design and optimization. The purpose of this Special Issue is to disseminate state-of-the-art ML applications in metal forming processes, covering ten papers on the aforementioned and related topics.
Depth Data Denoising in Optical Laser Based Sensors for Metal Sheet Flatness Measurement: A Deep Learning Approach
Surface flatness assessment is necessary for quality control of metal sheets manufactured from steel coils by roll leveling and cutting. Mechanical-contact-based flatness sensors are being replaced by modern laser-based optical sensors that deliver accurate and dense reconstruction of metal sheet surfaces for flatness index computation. However, the surface range images captured by these optical sensors are corrupted by very specific kinds of noise due to vibrations caused by mechanical processes like degreasing, cleaning, polishing, shearing, and transporting roll systems. Therefore, high-quality flatness optical measurement systems strongly depend on the quality of the image denoising methods applied to extract the true surface height image. This paper presents a deep learning architecture for removing these specific kinds of noise from the range images obtained by a laser-based range sensor installed in a rolling and shearing line, in order to allow accurate flatness measurements from the clean range images. The proposed convolutional blind residual denoising network (CBRDNet) is composed of a noise estimation module and a noise removal module implemented by specific adaptation of semantic convolutional neural networks. The CBRDNet is validated on both synthetic and real noisy range image data that exhibit the most critical kinds of noise that arise throughout the metal sheet production process. Real data were obtained from a single laser line triangulation flatness sensor installed in a roll leveling and cut-to-length line. Computational experiments over both synthetic and real datasets clearly demonstrate that CBRDNet achieves superior performance in comparison to traditional 1D and 2D filtering methods, and state-of-the-art CNN-based denoising techniques.
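The two-stage structure described above, a noise estimation module followed by a noise removal module that subtracts the estimate, can be illustrated with a deliberately simplified pure-Python stand-in. Here a moving-average filter plays the role of the learned noise estimator; the actual CBRDNet uses convolutional neural networks for both modules, so every function below is a hypothetical placeholder, not the paper's implementation:

```python
def estimate_noise(noisy, k=3):
    """Crude 1D noise estimator: noise ~ signal minus its local moving
    average. Stands in for CBRDNet's CNN noise-estimation module."""
    n = len(noisy)
    est = []
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        smooth = sum(noisy[lo:hi]) / (hi - lo)
        est.append(noisy[i] - smooth)
    return est

def remove_noise(noisy, noise_est):
    """Residual removal module: subtract the estimated noise."""
    return [x - e for x, e in zip(noisy, noise_est)]

def denoise(noisy):
    """Blind residual denoising: estimate the noise, then remove it."""
    return remove_noise(noisy, estimate_noise(noisy))

# A flat sheet-height profile corrupted by a single vibration spike
profile = [1.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 1.0]
cleaned = denoise(profile)
```

The point of the sketch is the decomposition: the removal step is trivial once the noise estimate exists, which is why the estimation module carries most of the modelling burden in the paper.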
The experimental validation results show a reduction in error that can be up to 15% relative to solutions based on traditional 1D and 2D filtering methods, and between 3% and 10% relative to other deep learning denoising architectures recently reported in the literature.

This work was partially supported by FEDER funds through MINECO project TIN2017-85827-P, and by the ELKARTEK-funded projects ENSOL2 and CODISAVA2 (KK-202000077 and KK-202000044) supported by the Basque Government.
Data modelling and Remaining Useful Life estimation of rolls in a steel making cold rolling process
The economic cost of roll refurbishment in the steel-making industry is considerable. In a cold rolling mill, wear and damage of rolls disrupt the industrial environment, so it is critical to predict the remaining useful life early and change the roll without disrupting the manufacturing process. However, since cold rolling is a complex process affected by multiple variables and operated in adverse conditions, it is very challenging to analyse roll wear and failure mathematically. For this reason, the present paper proposes a data-driven solution to predict the correct time for changing individual rolls. To develop an accurate predictive model, several datasets containing high-resolution production data and roll refurbishment data collected from a UK-based steel plant were acquired and processed such that roll wear is modelled as a Remaining Useful Life (RUL) problem, where the number of coils that a roll is able to process is viewed as its remaining cycles. Hybrid deep learning models are then used to predict the RUL of rolls used in steel making. This novel data-driven approach achieves high prediction accuracy and has been validated on a real-world dataset. The proposed approach not only helps avoid early failures but can also serve as a critical step towards the design of an optimal, automated maintenance schedule for roll management.
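The RUL framing described above, where the number of coils a roll can still process is treated as its remaining cycles, can be sketched in a few lines of plain Python. The function and identifiers are illustrative, not taken from the paper:

```python
def label_rul(coil_ids):
    """Given the ordered list of coils a roll processed before its
    refurbishment, label each coil with the roll's Remaining Useful Life,
    measured in coils. The last coil before the roll change has RUL 0."""
    total = len(coil_ids)
    return [(cid, total - 1 - i) for i, cid in enumerate(coil_ids)]

# A roll that processed three coils before being refurbished
pairs = label_rul(["coil_1", "coil_2", "coil_3"])
# → [("coil_1", 2), ("coil_2", 1), ("coil_3", 0)]
```

In the paper, production features observed at each coil would be paired with these labels to train the hybrid deep learning regressors; the labelling step itself is this simple counting transform applied to each run-to-refurbishment history.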
Multi-sourced modelling for strip breakage using knowledge graph embeddings
Strip breakage is an undesired production failure in cold rolling. Conventional studies have typically focused on cause analyses, and existing data-driven approaches rely on a single data source, resulting in a limited amount of information. Hence, we propose an approach for modelling breakage using multiple data sources. Breakage-relevant features from multiple sources are identified and integrated using a breakage-centric ontology, which is then used to create knowledge graphs. Through ontology construction and knowledge embedding, a real-world case study using data from a cold-rolled strip manufacturer was conducted to validate the proposed approach.
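Knowledge graph embeddings of the kind mentioned above score candidate facts by geometric plausibility. The abstract does not say which embedding model the authors used, so the following is a generic TransE-style sketch with hand-picked toy vectors, chosen so the true triple satisfies h + r ≈ t; all entity and relation names are hypothetical:

```python
def transe_score(h, r, t):
    """TransE plausibility: negative L2 distance of h + r from t.
    Higher (closer to 0) means the triple (h, r, t) is more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy 2D embeddings, hand-crafted so that h + r == t for the true fact
emb = {
    "high_tension":   [1.0, 0.0],
    "strip_breakage": [1.0, 1.0],
    "normal_speed":   [0.0, 3.0],
    "causes":         [0.0, 1.0],
}

true_fact = transe_score(emb["high_tension"], emb["causes"], emb["strip_breakage"])
corrupted = transe_score(emb["normal_speed"], emb["causes"], emb["strip_breakage"])
```

In a trained model the vectors are learned from the graph's triples rather than hand-set; the scoring rule, however, is exactly this translation test, and it is what lets the embedding rank plausible breakage-related facts above corrupted ones.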
Explainable Predictive Maintenance
Explainable Artificial Intelligence (XAI) fills the role of a critical interface fostering interactions between sophisticated intelligent systems and diverse individuals, including data scientists, domain experts, end-users, and more. It aids in deciphering the intricate internal mechanisms of "black box" Machine Learning (ML) models, rendering the reasons behind their decisions more understandable. However, current research in XAI primarily focuses on two aspects: ways to facilitate user trust, and ways to debug and refine the ML model. The majority of it falls short of recognising the diverse types of explanations needed in broader contexts, as different users and varied application areas necessitate solutions tailored to their specific needs.

One such domain is Predictive Maintenance (PdM), a rapidly expanding area of research under the Industry 4.0 & 5.0 umbrella. This position paper highlights the gap between existing XAI methodologies and the specific requirements for explanations within industrial applications, particularly in the Predictive Maintenance field. Despite explainability's crucial role, this subject remains a relatively under-explored area, making this paper a pioneering attempt to bring relevant challenges to the research community's attention. We provide an overview of predictive maintenance tasks and accentuate the need and varying purposes for corresponding explanations. We then list and describe XAI techniques commonly employed in the literature, discussing their suitability for PdM tasks. Finally, to make the ideas and claims more concrete, we demonstrate XAI applied in four specific industrial use cases: commercial vehicles, metro trains, steel plants, and wind farms, spotlighting areas requiring further research.

Comment: 51 pages, 9 figures
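One family of XAI techniques commonly surveyed in this setting is post-hoc, model-agnostic feature attribution. A minimal sketch of permutation importance in pure Python, with a trivial hand-written function standing in for a trained PdM model; the sensor names, the stand-in model, and the dataset are all illustrative assumptions, not material from the paper:

```python
import random

def model(sample):
    """Stand-in for a trained PdM model: predicts a failure-risk score from
    (vibration, temperature, humidity). By construction, humidity is ignored."""
    vibration, temperature, humidity = sample
    return 0.7 * vibration + 0.3 * temperature

def permutation_importance(predict, data, n_repeats=10, seed=0):
    """Importance of feature j = mean absolute change in predictions when
    column j is shuffled across the dataset (rows are tuples)."""
    rng = random.Random(seed)
    base = [predict(row) for row in data]
    importances = []
    for j in range(len(data[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in data]
            rng.shuffle(col)
            shuffled = [row[:j] + (col[i],) + row[j + 1:]
                        for i, row in enumerate(data)]
            total += sum(abs(predict(r) - b)
                         for r, b in zip(shuffled, base)) / len(data)
        importances.append(total / n_repeats)
    return importances

data = [(0.1, 0.5, 0.9), (0.8, 0.2, 0.4), (0.3, 0.9, 0.1), (0.6, 0.4, 0.7)]
imp = permutation_importance(model, data)
```

Because the stand-in model ignores humidity, its importance comes out exactly zero, which is the kind of sanity check a maintenance engineer might use to verify an explanation method before trusting it on a real asset.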
Towards A Computational Intelligence Framework in Steel Product Quality and Cost Control
Steel is a fundamental raw material for all industries. It is widely used in various fields, including construction, bridges, ships, containers, medical devices and cars. However, the production process of iron and steel is highly complex, consisting of four stages: ironmaking, steelmaking, continuous casting and rolling. It is also extremely complicated to control the quality of steel across the full manufacturing process. Therefore, the quality control of steel is considered a major challenge for the whole steel industry. This thesis studies quality control, taking the case of Nanjing Iron and Steel Group, and provides new approaches for quality analysis, management and control in the industry.
At present, Nanjing Iron and Steel Group has established a quality management and control system which oversees many of the systems involved in steel manufacturing. It places high statistical demands on business professionals, resulting in limited use of the system. A large amount of quality data has been collected in each system. Currently, all systems mainly focus on processing and analysing data after the manufacturing process, and product quality problems are mainly detected by sampling-based experimental methods. This approach cannot detect product quality issues, or predict hidden ones, in a timely manner. In the quality control system, the responsibilities and functions of the different information systems involved are intricate. Each information system is merely responsible for storing the data of its corresponding functions. Hence, the data in each information system are relatively isolated, forming data islands. Iron and steel production belongs to the process industry: data from multiple information systems can be combined to analyse and predict product quality in depth and to provide early warning alerts. Therefore, it is necessary to introduce new product quality control methods in the steel industry. With the waves of Industry 4.0 and intelligent manufacturing, intelligent technology has also been introduced in the field of quality control to improve the competitiveness of iron and steel enterprises. Applying intelligent technology can generate accurate quality analyses and optimal prediction results based on the data distributed across the factory, and determine online adjustments to the production process. This not only improves product quality control but also helps reduce product costs.
Inspired by this, the thesis provides in-depth discussion in three chapters: (1) how to use artificial intelligence algorithms to evaluate the quality grade of scrap steel used as raw material is studied in Chapter 3; (2) the probability that longitudinal cracks occur on the surface of continuous casting slabs is studied in Chapter 4; (3) the prediction of the mechanical properties of finished steel plates is presented in Chapter 5. All three chapters serve as technical support for quality control in iron and steel production.
Data mining for fault diagnosis in steel making process under industry 4.0
The concept of Industry 4.0 (I4.0) refers to the intelligent networking of machines and processes in industry, enabled by cyber-physical systems (CPS): a technology that utilises embedded networked systems to achieve intelligent control. CPS enable full traceability of production processes as well as comprehensive data assignment in real time. Through real-time communication and coordination between "manufacturing things", production systems, in the form of Cyber-Physical Production Systems (CPPS), can make intelligent decisions. Meanwhile, with the advent of I4.0, it is possible to collect heterogeneous manufacturing data across various facets for fault diagnosis by using industrial internet of things (IIoT) techniques. In this data-rich environment, the ability to diagnose and predict production failures provides manufacturing companies with a strategic advantage by reducing the number of unplanned production outages. This advantage is particularly desirable for steel-making industries: because steel-making is a consecutive and compact manufacturing process in which most operations must be conducted within a certain temperature range, process downtime is a major concern for steel-making companies. In addition, steel-making consists of complex processes that involve physical, chemical, and mechanical elements, emphasising the necessity for data-driven approaches to handle high-dimensionality problems.
For a modern steel-making plant, various measurement devices are deployed throughout the manufacturing process with the advancement of I4.0 technologies, which facilitates data acquisition and storage. However, even though data-driven approaches are showing merit and being widely applied in the manufacturing context, how to build a deep learning model for fault prediction in the steel-making process, considering multiple contributing facets and their temporal characteristics, has not been investigated. Additionally, apart from the multitudinous data, it is also worthwhile to study how to represent and utilise the vast and scattered domain knowledge distributed along the steel-making process for fault modelling. Moreover, the state of the art does not address how such accumulated domain knowledge and its semantics can be harnessed to facilitate the fusion of multi-sourced data in steel manufacturing. In this context, the purpose of this thesis is to pave the way for fault diagnosis in steel-making processes using data mining under I4.0.
This research is structured according to four themes. Firstly, differing from conventional data-driven research that focuses only on modelling based on numerical production data, a framework for data mining for fault diagnosis in steel-making based on multi-sourced data and knowledge is proposed. Five layers are designed in this framework: multi-sourced data and knowledge acquisition; data and knowledge processing; Knowledge Graph (KG) construction and graphical data transformation; KG-aided modelling for fault diagnosis; and decision support for steel manufacturing.
Secondly, another purpose of this thesis is to propose a predictive, data-driven approach to model severe faults in the steel-making process, where the faults usually have multi-faceted causes. Specifically, strip breakage in cold rolling is selected as the modelling target, since it is a typical production failure with serious consequences and multitudinous contributing factors. In actual steel-making practice, if such a failure can be modelled on a micro-level with an adequate prediction window, a planned stop can be taken in advance instead of a passive fast stop, which often results in severe damage to equipment. Accordingly, a multi-faceted modelling approach with a sliding window strategy is proposed. First, historical multivariate time-series data of a cold rolling process were extracted in a run-to-failure manner, and a sliding window strategy was adopted for data annotation. Second, breakage-centric features were identified from physics-based approaches, empirical knowledge and data-driven features. Finally, these features were used as inputs for strip breakage modelling using a Recurrent Neural Network (RNN). Experimental results have demonstrated the merits of the proposed approach.
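The sliding-window annotation step described above can be sketched in plain Python. The window length and the size of the pre-failure horizon are hypothetical parameters chosen for illustration; the thesis's actual values and feature set are not given in this abstract:

```python
def annotate_windows(series, window_len, horizon):
    """Cut a run-to-failure series (ending at the breakage instant) into
    fixed-length sliding windows. A window is labelled 1 (breakage imminent)
    if it ends within `horizon` steps of the failure, else 0. Each element
    of `series` would be a multivariate measurement in practice."""
    n = len(series)
    samples = []
    for end in range(window_len, n + 1):
        window = series[end - window_len:end]
        label = 1 if (n - end) < horizon else 0
        samples.append((window, label))
    return samples

# A toy 10-step run-to-failure signal, 3-step windows, 2-step horizon
samples = annotate_windows(list(range(10)), window_len=3, horizon=2)
```

The labelled windows are exactly the (input, target) pairs an RNN classifier would consume; choosing the horizon trades early warning against false alarms, which is why the thesis frames it as obtaining an "adequate prediction window".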
Thirdly, among the heterogeneous data surrounding multi-faceted concepts in steel-making, a significant amount consists of rich semantic information, such as technical documents and production logs generated through the process. There also exists vast domain knowledge regarding production failures in steel-making, accumulated over the industry's long history. In this context, proper semantic technologies are desired for the utilisation of semantic data and domain knowledge in steel-making. In recent studies, the Knowledge Graph (KG) displays a powerful expressive ability and a high degree of modelling flexibility, making it a promising semantic network. However, building a reliable KG is usually time-consuming and labour-intensive, and a KG commonly needs to be refined or completed before use in industrial scenarios. Accordingly, a fault-centric KG construction approach is proposed based on hierarchy-structure refinement and relation completion. Firstly, ontology design based on hierarchy-structure refinement is conducted to improve reliability. Then, the missing relations between pairs of entities are inferred from the existing knowledge in the KG, with the aim of increasing the number of edges that complete and refine the KG. Lastly, the KG is constructed by importing data into the ontology. An illustrative case study on strip breakage is conducted for validation.
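The relation-completion step, inferring missing edges from knowledge already in the graph, can be illustrated with a transitive-closure sketch for a single transitive relation. Both the inference rule and the entity names are hypothetical simplifications; the thesis's actual completion method is not detailed in this abstract:

```python
def complete_transitive(triples, relation):
    """Infer missing (a, relation, c) edges whenever (a, relation, b) and
    (b, relation, c) already exist, repeating until no new edge appears."""
    edges = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(edges):
            if r1 != relation:
                continue
            for (b2, r2, c) in list(edges):
                if r2 == relation and b2 == b and (a, relation, c) not in edges:
                    edges.add((a, relation, c))
                    changed = True
    return edges

# Two asserted facts imply a third, completing the graph
kg = complete_transitive({("bearing", "part_of", "roll"),
                          ("roll", "part_of", "mill")}, "part_of")
```

Each inferred edge is one of the "missing relations between pairs of entities" the paragraph refers to; in a fault-centric KG such completed edges let a breakage event be connected to components that were never linked to it explicitly.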
Finally, multi-faceted modelling is often conducted based on multi-sourced data covering indispensable aspects, and information fusion is typically applied to cope with the high dimensionality and data heterogeneity. Besides its ability to support knowledge management and sharing, a KG can aggregate the relationships of features from multiple aspects through semantic associations, which can be exploited to facilitate information fusion for multi-faceted modelling while considering intra-facet relationships. Accordingly, process data are transformed into a stack of temporal graphs under the fault-centric KG backbone. Then, a Graph Convolutional Network (GCN) model is applied to extract temporal and attribute-correlation features from the graphs, with a Temporal Convolutional Network (TCN) conducting conceptual modelling using these features. Experimental results derived using the proposed GCN-TCN approach reveal the impact of the KG-aided fusion approach.
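The core GCN operation aggregates each node's features with those of its neighbours. A single propagation step, stripped of learned weights and nonlinearities, can be written in pure Python; this is a schematic of the aggregation rule only, not the GCN-TCN architecture from the thesis, and the graph below is a made-up two-node example:

```python
def gcn_propagate(adj, features):
    """One message-passing step: each node's new feature vector is the mean
    of its own features and its neighbours' (self-loop included), i.e. the
    D^-1 (A + I) H aggregation used in simple GCN variants, without the
    learned weight matrix W or the nonlinearity."""
    n = len(features)
    dim = len(features[0])
    out = []
    for i in range(n):
        neigh = [i] + [j for j in range(n) if adj[i][j]]
        out.append([sum(features[j][d] for j in neigh) / len(neigh)
                    for d in range(dim)])
    return out

# Two connected nodes exchange and average their single-dimensional features
smoothed = gcn_propagate([[0, 1], [1, 0]], [[1.0], [3.0]])
```

Stacking such propagation over the KG-derived temporal graphs is what lets attribute correlations encoded as edges flow into each node's representation before the TCN models the temporal dimension.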
This thesis researches data mining in steel-making processes based on multi-sourced data and scattered, distributed domain knowledge, providing a feasibility study for achieving Industry 4.0 in steel-making, specifically in support of improving quality and reducing costs due to production failures.