Towards A Computational Intelligence Framework in Steel Product Quality and Cost Control
Steel is a fundamental raw material for many industries, with applications spanning construction, bridges, ships, containers, medical devices and cars. However, the production process of iron and steel is highly complex, consisting of four stages: ironmaking, steelmaking, continuous casting and rolling. Controlling steel quality across the full manufacturing process is equally complicated, so quality control is regarded as a major challenge for the whole steel industry. This thesis studies quality control using the case of Nanjing Iron and Steel Group, and provides new approaches for quality analysis, management and control in the industry.
At present, Nanjing Iron and Steel Group has established a quality management and control system that oversees many of the systems involved in steel manufacturing. It places high statistical demands on business professionals, which limits its use. A large amount of quality data has been collected in each system, yet the systems mainly focus on processing and analysing data after the manufacturing process, and product quality problems are chiefly detected by sampling-based experiments. This approach cannot detect product quality issues, or predict hidden ones, in a timely manner. Within the quality control system, the responsibilities and functions of the different information systems are intricate: each is responsible only for storing the data of its own functions, so the data in each system is relatively isolated, forming data islands. Iron and steel production is a process industry, and the data in multiple information systems can be combined to analyse and predict product quality in depth and provide early-warning alerts. It is therefore necessary to introduce new product quality control methods into the steel industry. With the waves of Industry 4.0 and intelligent manufacturing, intelligent technology has also been introduced into quality control to improve the competitiveness of iron and steel enterprises. Applying intelligent technology can generate accurate quality analyses and optimal predictions from the data distributed across the factory and inform online adjustment of the production process. This not only improves product quality control but also helps reduce product costs.
Inspired by this, the thesis provides an in-depth discussion in three chapters: (1) Chapter 3 studies how artificial intelligence algorithms can be used to evaluate the quality grade of scrap steel used as a raw material; (2) Chapter 4 studies the probability that longitudinal cracks occur on the surface of continuous casting slabs; (3) Chapter 5 addresses the prediction of the mechanical properties of finished steel plate. Together, these three chapters serve as technical support for quality control in iron and steel production.
Data-Driven Dynamic Modeling for Prediction of Molten Iron Silicon Content Using ELM with Self-Feedback
Silicon content ([Si] for short) of the molten metal is an important index reflecting the product quality and thermal status of the blast furnace (BF) ironmaking process. Since online detection of [Si] is difficult and the offline assay procedure involves a long time delay, quality modeling is required to achieve online estimation of [Si]. Focusing on this problem, a data-driven dynamic modeling method is proposed using an improved extreme learning machine (ELM) with the help of principal component analysis (PCA). First, data-driven PCA is introduced to pick out the most pivotal variables from the many candidate factors to serve as the secondary variables of the model. Second, a novel data-driven ELM modeling technique with good generalization performance and nonlinear mapping capability is presented by applying a self-feedback structure to the traditional ELM. The feedback outputs at previous time steps, together with the input variables at different times, constitute a dynamic ELM structure that can store and process data across time, overcoming the static-modeling limitation of the traditional ELM. Finally, industrial experiments demonstrate that the proposed method achieves better modeling and estimation accuracy, as well as a faster learning speed, than alternative modeling methods with different model structures.
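As a rough illustration of the core ELM mechanism this abstract builds on (random, untrained hidden weights plus a closed-form least-squares readout), the following is a minimal sketch on a toy regression task. It is not the paper's dynamic self-feedback model; all sizes, seeds and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Basic extreme learning machine: random input weights (never trained),
    output weights solved in closed form via the pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # least-squares readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy task: recover y = sin(x) from noisy samples.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=200)
W, b, beta = elm_fit(X, y)
y_hat = elm_predict(X, W, b, beta)
```

Because only `beta` is solved (one pseudoinverse), training is non-iterative, which is the speed advantage the abstract refers to; the self-feedback variant would additionally feed previous outputs back as inputs.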
Non-iterative and Fast Deep Learning: Multilayer Extreme Learning Machines
In the past decade, deep learning techniques have powered many aspects of our daily life and drawn ever-increasing research interest. However, conventional deep learning approaches, such as the deep belief network (DBN), restricted Boltzmann machine (RBM), and convolutional neural network (CNN), suffer from a time-consuming training process due to the fine-tuning of a large number of parameters and their complicated hierarchical structure. Furthermore, this complexity makes it difficult to theoretically analyze and prove the universal approximation capability of those conventional deep learning approaches. To tackle these issues, multilayer extreme learning machines (ML-ELM) were proposed, which have accelerated the development of deep learning. Compared with conventional deep learning, ML-ELMs are non-iterative and fast thanks to their random feature mapping mechanism. In this paper, we perform a thorough review of the development of ML-ELMs, including the stacked ELM autoencoder (ELM-AE), residual ELM, and local receptive field based ELM (ELM-LRF), and address their applications. In addition, we discuss the connection between random neural networks and conventional deep learning.
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Reinforcement learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks can often be hierarchically decomposed into sub-tasks. A step in the Q-function can be associated with solving a sub-task, where the expectation of the return increases. RUDDER has been introduced to identify these steps and then redistribute reward to them, thus giving reward immediately when sub-tasks are solved. Since the problem of delayed rewards is mitigated, learning is considerably sped up. However, for complex tasks, the exploration strategies deployed in RUDDER struggle to discover episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Typically the number of demonstrations is small, and RUDDER's LSTM model, as a deep learning method, does not learn well from so few examples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER's safe exploration and lessons replay buffer. Second, we replace RUDDER's LSTM model by a profile model obtained from multiple sequence alignment of the demonstrations. As known from bioinformatics, profile models can be constructed from as few as two demonstrations. Align-RUDDER inherits the concept of reward redistribution, which considerably reduces the delay of rewards and thus speeds up learning. Align-RUDDER outperforms competitors on complex artificial tasks with delayed reward and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Github: https://github.com/ml-jku/align-rudder, YouTube: https://youtu.be/HO-_8ZUl-U
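The reward-redistribution idea can be illustrated with a toy episode in which all reward arrives at the end: a return predictor g assigns credit to sub-task completions, and each step receives the increase in g, so reward moves to the steps that mattered while the total is preserved. The episode, sub-task names and credit values here are invented for illustration; in Align-RUDDER the predictor comes from a profile model of aligned demonstrations.

```python
# Episode with a single delayed reward at the end.
episode = ["start", "got_key", "walk", "opened_door", "walk", "goal"]
final_reward = 10.0

# Hypothetical per-sub-task share of the predicted return.
subtask_credit = {"got_key": 0.4, "opened_door": 0.4, "goal": 0.2}

# g[t]: predicted return after the first t steps of the episode.
g = [0.0]
done = 0.0
for step in episode:
    done += subtask_credit.get(step, 0.0)
    g.append(done * final_reward)

# Each step gets the increase in predicted return.
redistributed = [g[t + 1] - g[t] for t in range(len(episode))]
print(redistributed)   # → [0.0, 4.0, 0.0, 4.0, 0.0, 2.0]
```

The redistributed rewards sum to the original return (return equivalence), but credit now arrives at "got_key" and "opened_door" instead of only at the final step, which is what mitigates the delayed-reward problem.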
Process Modeling in Pyrometallurgical Engineering
The Special Issue presents almost 40 papers on recent research in the modeling of pyrometallurgical systems, including physical models, first-principles models, detailed CFD and DEM models, as well as statistical models and models based on machine learning. The models cover the whole production chain, from raw materials processing through the reduction and conversion unit processes to ladle treatment, casting, and rolling. The papers illustrate how models can shed light on complex and inaccessible processes characterized by high temperatures and a hostile environment, in order to improve process performance, product quality, or yield, to reduce the need for virgin raw materials, and to suppress harmful emissions.
Data mining for fault diagnosis in steel making process under industry 4.0
The concept of Industry 4.0 (I4.0) refers to the intelligent networking of machines and processes in industry, enabled by cyber-physical systems (CPS) - a technology that uses embedded networked systems to achieve intelligent control. CPS enable full traceability of production processes as well as comprehensive data assignment in real time. Through real-time communication and coordination between "manufacturing things", production systems, in the form of Cyber-Physical Production Systems (CPPS), can make intelligent decisions. Meanwhile, with the advent of I4.0, heterogeneous manufacturing data can be collected across various facets for fault diagnosis using industrial internet of things (IIoT) techniques. In this data-rich environment, the ability to diagnose and predict production failures gives manufacturing companies a strategic advantage by reducing the number of unplanned production outages. This advantage is particularly desirable in steel-making: because it is a continuous and compact manufacturing process in which most operations must be conducted within a certain temperature range, process downtime is a major concern. In addition, steel-making involves complex physical, chemical, and mechanical processes, underlining the need for data-driven approaches to handle high-dimensional problems.
In a modern steel-making plant, various measurement devices are deployed throughout the manufacturing process thanks to the advancement of I4.0 technologies, which facilitates data acquisition and storage. However, even though data-driven approaches have shown merit and are widely applied in manufacturing, how to build a deep learning model for fault prediction in the steel-making process that considers multiple contributing facets and their temporal characteristics has not been investigated. Additionally, beyond the abundant data, it is also worthwhile to study how to represent and utilise the vast, scattered domain knowledge along the steel-making process for fault modelling. Moreover, the state of the art does not address how such accumulated domain knowledge and its semantics can be harnessed to facilitate the fusion of multi-sourced data in steel manufacturing. The purpose of this thesis is therefore to pave the way for fault diagnosis in steel-making processes using data mining under I4.0.
This research is structured around four themes. Firstly, in contrast to conventional data-driven research that focuses only on modelling numerical production data, a framework for data mining for fault diagnosis in steel-making based on multi-sourced data and knowledge is proposed. The framework has five layers: multi-sourced data and knowledge acquisition; data and knowledge processing; knowledge graph (KG) construction and graphical data transformation; KG-aided modelling for fault diagnosis; and decision support for steel manufacturing.
Secondly, the thesis proposes a predictive, data-driven approach to model severe faults in the steel-making process, where faults usually have multi-faceted causes. Specifically, strip breakage in cold rolling is selected as the modelling target, since it is a typical production failure with serious consequences and many contributing factors. In actual steel-making practice, if such a failure can be modelled at a micro level with an adequate prediction window, a planned stop can be taken in advance instead of a reactive fast stop, which often results in severe damage to equipment. A multi-faceted modelling approach with a sliding window strategy is therefore proposed. First, historical multivariate time-series data from a cold rolling process were extracted in a run-to-failure manner, and a sliding window strategy was adopted for data annotation. Second, breakage-centric features were identified from physics-based approaches, empirical knowledge and data-driven features. Finally, these features were used as inputs for strip breakage modelling using a Recurrent Neural Network (RNN). Experimental results demonstrated the merits of the proposed approach.
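The run-to-failure sliding-window annotation described above can be sketched as follows; the window length, prediction horizon and sensor data are illustrative placeholders, not the thesis's actual settings.

```python
import numpy as np

def sliding_windows(series, window, horizon, failure_idx):
    """Cut a run-to-failure multivariate series into fixed-length windows;
    label a window 1 if the failure occurs within `horizon` steps after
    the window's end (i.e. the window is predictive of the failure)."""
    X, y = [], []
    for end in range(window, len(series) + 1):
        X.append(series[end - window:end])
        y.append(int(0 <= failure_idx - end < horizon))
    return np.array(X), np.array(y)

# 100 time steps, 3 sensor channels, failure at the last step (t = 99).
series = np.random.default_rng(2).normal(size=(100, 3))
X, y = sliding_windows(series, window=20, horizon=10, failure_idx=99)
print(X.shape, int(y.sum()))   # 81 windows; the 10 just before failure are positive
```

Each (window, label) pair can then be fed to a sequence model such as the RNN mentioned above, turning breakage prediction into supervised classification.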
Thirdly, among the heterogeneous data surrounding multi-faceted concepts in steel-making, a significant amount carries rich semantic information, such as the technical documents and production logs generated through the process. There also exists vast domain knowledge about production failures in steel-making, which has a long history. In this context, appropriate semantic technologies are needed to exploit semantic data and domain knowledge in steel-making. In recent studies, the Knowledge Graph (KG) has displayed powerful expressive ability and a high degree of modelling flexibility, making it a promising semantic network. However, building a reliable KG is usually time-consuming and labour-intensive, and a KG commonly needs to be refined or completed before use in industrial scenarios. A fault-centric KG construction approach is therefore proposed, based on hierarchy-structure refinement and relation completion. Firstly, ontology design based on hierarchy-structure refinement is conducted to improve reliability. Then, missing relations between pairs of entities are inferred from the existing knowledge in the KG, increasing the number of edges and thereby completing and refining the KG. Lastly, the KG is constructed by importing data into the ontology. An illustrative case study on strip breakage is conducted for validation.
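A toy sketch of fault-centric relation completion over (head, relation, tail) triples: the entities, relations and the single completion rule below are invented for illustration and are far simpler than the ontology-based approach described above.

```python
# A tiny fault-centric knowledge graph as a set of triples.
triples = {
    ("strip_breakage", "occurs_in", "cold_rolling"),
    ("rolling_force", "parameter_of", "cold_rolling"),
    ("strip_tension", "parameter_of", "cold_rolling"),
    ("ladle_temp", "parameter_of", "steelmaking"),
}

def complete(triples):
    """One illustrative completion rule: if a fault occurs in a stage and a
    parameter belongs to that stage, infer a candidate_factor edge from
    the fault to the parameter."""
    inferred = set()
    for fault, r1, stage in triples:
        if r1 != "occurs_in":
            continue
        for param, r2, stage2 in triples:
            if r2 == "parameter_of" and stage2 == stage:
                inferred.add((fault, "candidate_factor", param))
    return inferred

new_edges = complete(triples)
print(sorted(new_edges))
```

Only the two cold-rolling parameters are linked to strip breakage; `ladle_temp`, belonging to a different stage, is not, which is the sense in which completion adds edges without flooding the graph.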
Finally, multi-faceted modelling is often conducted on multi-sourced data covering indispensable aspects, and information fusion is typically applied to cope with high dimensionality and data heterogeneity. Besides supporting knowledge management and sharing, a KG can aggregate the relationships of features from multiple aspects through semantic associations, which can be exploited to facilitate information fusion for multi-faceted modelling while accounting for intra-facet relationships. Process data is therefore transformed into a stack of temporal graphs under the fault-centric KG backbone. A Graph Convolutional Network (GCN) model is then applied to extract temporal and attribute-correlation features from the graphs, and a Temporal Convolutional Network (TCN) conducts conceptual modelling using these features. Experimental results obtained with the proposed GCN-TCN approach reveal the benefits of the KG-aided fusion approach.
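For reference, a single graph-convolution layer of the kind a GCN stacks can be sketched in a few lines; the adjacency matrix, node features and weights below are illustrative placeholders, not the thesis's graphs.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: symmetrically normalised adjacency
    (with self-loops added) times node features times a weight matrix,
    followed by a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)                 # node degrees
    D_inv_sqrt = np.diag(d ** -0.5)       # D^(-1/2)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# 4 process variables as nodes linked by KG relations; 2 features per node.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.arange(8, dtype=float).reshape(4, 2)
W = np.ones((2, 3))
out = gcn_layer(A, H, W)
print(out.shape)   # each node's output now mixes its neighbours' features
```

Stacking such layers over the KG-derived graphs yields per-node features whose sequences over time can then be handled by a temporal model such as a TCN.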
This thesis researches data mining in steel-making processes based on multi-sourced data and scattered domain knowledge, providing a feasibility study for achieving Industry 4.0 in steel-making, specifically in support of improving quality and reducing the costs of production failures.
A Literature Review of Fault Diagnosis Based on Ensemble Learning
The accuracy of fault diagnosis is an important indicator for ensuring the reliability of key equipment systems. Ensemble learning integrates different weak learners to obtain a stronger learner and has achieved remarkable results in the field of fault diagnosis. This paper reviews recent research on ensemble learning from both technical and application perspectives. It covers 87 journals from recent Web of Science records and other academic resources, with a total of 209 papers, and summarizes 78 different ensemble learning based fault diagnosis methods, involving 18 public datasets and more than 20 different equipment systems. In detail, the paper summarizes the accuracy rates, fault classification types, fault datasets, data signals used, learners (traditional machine learning or deep learning based), and ensemble methods (bagging, boosting, stacking and other ensemble models) of these fault diagnosis models. It uses diagnosis accuracy as the main evaluation metric, supplemented by generalization and the ability to handle imbalanced data, to evaluate the performance of these ensemble learning methods. The discussion and evaluation of these methods provide valuable references for identifying and developing appropriate intelligent fault diagnosis models for various equipment. The paper also discusses the technical challenges, lessons learned from the review, and future directions in the field of ensemble learning based fault diagnosis and intelligent maintenance.
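A minimal sketch of one ensemble method covered by such reviews - bagging of decision stumps with majority voting - on synthetic "fault" data; all data, sizes and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_stump(X, y):
    """Fit the best single-feature threshold classifier (a weak learner)."""
    best = (0, 0.0, 1, 0.5)   # (feature, threshold, sign, error)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1, -1):
                pred = (s * (X[:, f] - t) > 0).astype(int)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (f, t, s, err)
    return best

def stump_predict(stump, X):
    f, t, s, _ = stump
    return (s * (X[:, f] - t) > 0).astype(int)

# Bagging: train each stump on a bootstrap resample, majority-vote at test time.
X = rng.normal(size=(200, 5))
y = (X[:, 2] > 0.1).astype(int)            # "fault" driven by one sensor
stumps = []
for _ in range(11):
    idx = rng.integers(0, len(X), len(X))  # bootstrap sample with replacement
    stumps.append(fit_stump(X[idx], y[idx]))
votes = np.mean([stump_predict(s, X) for s in stumps], axis=0)
acc = np.mean((votes > 0.5) == y)
```

Each stump alone is weak and varies with its bootstrap sample; averaging their votes reduces variance, which is the basic mechanism behind the bagging-style ensembles the review surveys.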