
    Data mining for fault diagnosis in steel making process under industry 4.0

    The concept of Industry 4.0 (I4.0) refers to the intelligent networking of machines and processes in industry, enabled by cyber-physical systems (CPS) - a technology that utilises embedded networked systems to achieve intelligent control. CPS enable full traceability of production processes as well as comprehensive data assignments in real time. Through real-time communication and coordination between "manufacturing things", production systems, in the form of Cyber-Physical Production Systems (CPPS), can make intelligent decisions. Meanwhile, with the advent of I4.0, it is possible to collect heterogeneous manufacturing data across various facets for fault diagnosis using industrial internet of things (IIoT) techniques. In this data-rich environment, the ability to diagnose and predict production failures provides manufacturing companies with a strategic advantage by reducing the number of unplanned production outages. This advantage is particularly desirable for the steel-making industry: because steel-making is a consecutive and compact manufacturing process in which most operations must be conducted within a certain temperature range, process downtime is a major concern for steel-making companies. In addition, steel-making consists of complex processes that involve physical, chemical, and mechanical elements, emphasising the necessity for data-driven approaches to handle high-dimensionality problems. In a modern steel-making plant, various measurement devices are deployed throughout the manufacturing process with the advancement of I4.0 technologies, facilitating data acquisition and storage. However, even though data-driven approaches are showing merit and are widely applied in the manufacturing context, how to build a deep learning model for fault prediction in the steel-making process that considers multiple contributing facets and their temporal characteristics has not been investigated. Additionally, apart from the multitudinous data, it is also worthwhile to study how to represent and utilise the vast domain knowledge scattered along the steel-making process for fault modelling. Moreover, the state of the art does not address how such accumulated domain knowledge and its semantics can be harnessed to facilitate the fusion of multi-sourced data in steel manufacturing. The purpose of this thesis is therefore to pave the way for fault diagnosis in steel-making processes using data mining under I4.0. This research is structured according to four themes.

    Firstly, in contrast to conventional data-driven research that focuses only on modelling numerical production data, a framework for data mining for fault diagnosis in steel-making based on multi-sourced data and knowledge is proposed. The framework comprises five layers: multi-sourced data and knowledge acquisition; data and knowledge processing; knowledge graph (KG) construction and graphical data transformation; KG-aided modelling for fault diagnosis; and decision support for steel manufacturing.

    Secondly, this thesis proposes a predictive, data-driven approach to model severe faults in the steel-making process, where the faults usually have multi-faceted causes. Specifically, strip breakage in cold rolling is selected as the modelling target, since it is a typical production failure with serious consequences and multitudinous contributing factors. In actual steel-making practice, if such a failure can be modelled at a micro-level with an adequate prediction window, a planned stop can be taken in advance instead of a passive fast stop, which often results in severe damage to equipment. To this end, a multi-faceted modelling approach with a sliding-window strategy is proposed. First, historical multivariate time-series data of a cold rolling process were extracted in a run-to-failure manner, and a sliding-window strategy was adopted for data annotation. Second, breakage-centric features were identified from physics-based approaches, empirical knowledge and data-driven features. Finally, these features were used as inputs for strip breakage modelling using a Recurrent Neural Network (RNN). Experimental results have demonstrated the merits of the proposed approach.

    Thirdly, among the heterogeneous data surrounding multi-faceted concepts in steel-making, a significant amount consists of rich semantic information, such as technical documents and production logs generated through the process. There also exists vast domain knowledge regarding production failures in steel-making, an industry with a long history. In this context, proper semantic technologies are needed to utilise semantic data and domain knowledge in steel-making. In recent studies, the KG has displayed a powerful expressive ability and a high degree of modelling flexibility, making it a promising semantic network. However, building a reliable KG is usually time-consuming and labour-intensive, and a KG commonly needs to be refined or completed before use in industrial scenarios. Accordingly, a fault-centric KG construction approach based on hierarchy structure refinement and relation completion is proposed. Firstly, ontology design based on hierarchy structure refinement is conducted to improve reliability. Then, missing relations between pairs of entities are inferred from existing knowledge in the KG, with the aim of increasing the number of edges to complete and refine the KG. Lastly, the KG is constructed by importing data into the ontology. An illustrative case study on strip breakage is conducted for validation.

    Finally, multi-faceted modelling is often conducted on multi-sourced data covering indispensable aspects, and information fusion is typically applied to cope with the high dimensionality and data heterogeneity. Besides supporting knowledge management and sharing, a KG can aggregate the relationships of features from multiple aspects through semantic associations, which can be exploited to facilitate information fusion for multi-faceted modelling while considering intra-facet relationships. To this end, process data are transformed into a stack of temporal graphs under the fault-centric KG backbone. Then, a Graph Convolutional Network (GCN) model is applied to extract temporal and attribute-correlation features from the graphs, and a Temporal Convolutional Network (TCN) conducts conceptual modelling using these features. Experimental results obtained with the GCN-TCN model reveal the impact of the proposed KG-aided fusion approach.

    Overall, this thesis researches data mining in steel-making processes based on multi-sourced data and scattered domain knowledge, providing a feasibility study for achieving Industry 4.0 in steel-making, specifically in support of improving quality and reducing the costs of production failures.
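    To make the sliding-window annotation concrete, the following is a minimal sketch in Python. The window length, the prediction horizon, and the rule of labelling a window positive when it ends within the horizon of the breakage are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def annotate_run_to_failure(series: np.ndarray, window: int, horizon: int):
    """Slide a fixed-length window over a run-to-failure multivariate
    series (shape: [timesteps, features]); label a window 1 if it ends
    within `horizon` steps of the failure (assumed at the last step)."""
    failure_t = len(series) - 1
    X, y = [], []
    for start in range(len(series) - window + 1):
        end = start + window                     # window covers [start, end)
        X.append(series[start:end])
        y.append(1 if failure_t - (end - 1) <= horizon else 0)
    return np.stack(X), np.array(y)

# Hypothetical coil: 500 timesteps, 8 sensor channels; windows whose
# end falls within 50 steps of the breakage are labelled positive.
X, y = annotate_run_to_failure(np.random.randn(500, 8), window=30, horizon=50)
print(X.shape, y.sum())                          # (471, 30, 8) windows, 51 positive
```

    The labelled windows (X, y) would then feed an RNN classifier; the width of the horizon trades off how early a warning is raised against label noise near the breakage.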

    A Modeling and Analysis Framework To Support Monitoring, Assessment, and Control of Manufacturing Systems Using Hybrid Models

    The manufacturing industry has constantly been challenged to improve productivity, adapt to continuous changes in demand, and reduce cost. The need for a competitive advantage has motivated research into new modeling and control strategies able to support reconfiguration, considering the coupling between different aspects of plant floor operations. However, models of manufacturing systems usually capture the process flow and machine capabilities while neglecting the machine dynamics. The disjoint analysis of system-level interactions and machine-level dynamics limits the effectiveness of performance assessment and control strategies. This dissertation addresses the enhancement of productivity and adaptability of manufacturing systems by monitoring and controlling both the behavior of independent machines and their interactions. A novel control framework is introduced to support performance monitoring and decision making using real-time simulation, anomaly detection, and multi-objective optimization.

    The intellectual merit of this dissertation lies in (1) the development of a mathematical framework to create hybrid models of both machines and systems capable of running in real time, (2) algorithms to improve anomaly detection and diagnosis using context-sensitive adaptive threshold limits combined with context-specific classification models, and (3) the construction of a simulation-based optimization strategy to support decision making considering the inherent trade-offs between productivity, quality, reliability, and energy usage. The result is a framework that transforms the state of the art of manufacturing by enabling real-time performance monitoring, assessment, and control of plant floor operations. The control strategy aims to improve the productivity and sustainability of manufacturing systems using multi-objective optimization. The outcomes of this dissertation were implemented in an experimental testbed. Results demonstrate the potential to support maintenance actions, productivity analysis, and decision making in manufacturing systems. Furthermore, the proposed framework lays the foundation for a seamless integration of real systems and virtual models.

    The broader impact of this dissertation is the advancement of manufacturing science, which is crucial to support economic growth. The implementation of the proposed framework can result in higher productivity, lower downtime, and energy savings. Although the project focuses on discrete manufacturing with a flow shop configuration, the control framework, modeling strategy, and optimization approach can be translated to job shop configurations or batch processes. Moreover, the algorithms and infrastructure implemented in the testbed at the University of Michigan can be integrated into automation and control products for wide availability.

    PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147657/1/migsae_1.pd
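    The context-sensitive adaptive threshold idea can be illustrated with a short Python sketch. This covers only the thresholding half of the dissertation's approach (the context-specific classification models are omitted), and the function name, the mean ± k·std limit, and the example contexts are assumptions made for demonstration.

```python
import numpy as np

def adaptive_threshold(signal: np.ndarray, contexts: np.ndarray, k: float = 3.0):
    """Flag anomalies using context-specific limits: each operating
    context gets its own mean +/- k*std band, computed from that
    context's history, instead of a single global limit."""
    flags = np.zeros(len(signal), dtype=bool)
    for c in np.unique(contexts):
        mask = contexts == c
        mu, sigma = signal[mask].mean(), signal[mask].std()
        flags[mask] = np.abs(signal[mask] - mu) > k * sigma
    return flags

# Hypothetical spindle-power trace with two operating contexts; a single
# global threshold would either miss anomalies while cutting or flag
# every idle sample.
power = np.random.randn(1000) + np.repeat([0.0, 5.0], 500)
phase = np.repeat(["idle", "cutting"], 500)
print(adaptive_threshold(power, phase, k=3.0).sum(), "samples flagged")
```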

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync.
    Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If vastly different, the difference might indicate considerable ageing of the documentation and a need to update it.
    Methods: In this paper we reduce the source code of 50 software systems to a set of key terms, each containing the concepts of one of the sampled systems. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are.
    Results: Using the well-known Jaccard index as the benchmark for the comparisons, we discovered that the cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer similarity scores of up to 80% and 90%, respectively.
    Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
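    As a sketch of the two headline measures, the snippet below computes the Jaccard index on raw term sets and a cosine similarity on embedded terms. Pooling each term set into a mean vector is an assumption made for the example (the abstract does not prescribe it), and random vectors stand in for the real SpaCy/FastText embeddings.

```python
import numpy as np

def jaccard(a: set, b: set) -> float:
    """Jaccard index: size of the intersection over size of the union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical key terms extracted from code and from documentation.
code_terms = {"parser", "token", "lexer", "grammar"}
doc_terms = {"parser", "token", "syntax", "grammar", "tree"}
print(jaccard(code_terms, doc_terms))            # exact set overlap

# Embedding-based comparison: each term set is pooled into a mean
# vector; random 50-d vectors stand in for pre-trained embeddings.
emb = {t: np.random.rand(50) for t in code_terms | doc_terms}
u = np.mean([emb[t] for t in code_terms], axis=0)
v = np.mean([emb[t] for t in doc_terms], axis=0)
print(cosine(u, v))
```

    With real pre-trained embeddings, semantically related but lexically different terms ("lexer" vs "syntax") still score as similar, which is one reason cosine scores can run higher than the strict set overlap measured by Jaccard.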

    Intelligent maintenance management in a reconfigurable manufacturing environment using multi-agent systems

    Thesis (M. Tech.) -- Central University of Technology, Free State, 2010.
    Traditional corrective maintenance is both costly and ineffective. In some situations it is more cost-effective to replace a device than to maintain it; however, it is far more likely that the cost of the device far outweighs the cost of performing routine maintenance. These device-related costs, coupled with the profit loss due to reduced production levels, make this reactive maintenance approach unacceptably inefficient in many situations. Blind predictive maintenance that does not consider the actual physical state of the hardware is an improvement, but is still far from ideal. Simply maintaining devices on a schedule, without taking operational hours and workload into account, can be a costly mistake. The inefficiencies associated with these approaches have contributed to the development of proactive maintenance strategies, which take the device health state into account and are therefore inherently more efficient than the aforementioned traditional approaches. Predicting the health degradation of devices allows the required maintenance resources and costs to be anticipated more easily, and maintenance can be scheduled to accommodate production needs.
    This work presents the design and simulation of an intelligent maintenance management system that incorporates device health prognosis with maintenance schedule generation. The simulation scenario provided prognostic data used to schedule devices for maintenance. A production rule engine was provided with a feasible starting schedule, which was then improved by adhering to a set of criteria. Benchmarks were conducted to show the benefit of optimising the starting schedule, and the results are presented as evidence. Improving on existing maintenance approaches will yield several benefits for an organisation: eliminating the need to address unexpected failures or perform maintenance prematurely ensures that the relevant resources are available when they are required, which in turn reduces the expenditure related to wasted maintenance resources without compromising the health of devices or systems in the organisation.
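    The schedule-generation step can be illustrated with a deliberately simple Python sketch: service the device with the least prognosed remaining useful life first. The thesis itself uses a production rule engine within a multi-agent system, so this earliest-deadline-first heuristic, the Device fields, and the example values are assumptions standing in for how a feasible starting schedule might be formed from prognostic data.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    rul_hours: float       # prognosed remaining useful life
    repair_hours: float    # time needed to service the device

def greedy_schedule(devices: list[Device], start: float = 0.0):
    """Build a feasible starting schedule by servicing devices in order
    of least remaining useful life (earliest-deadline-first)."""
    schedule, t = [], start
    for d in sorted(devices, key=lambda d: d.rul_hours):
        schedule.append((d.name, t, t + d.repair_hours))
        t += d.repair_hours
    return schedule

devices = [Device("press", 120, 4), Device("conveyor", 40, 2),
           Device("robot arm", 80, 6)]
for name, begin, end in greedy_schedule(devices):
    print(f"{name}: service from t={begin}h to t={end}h")
```

    Such a schedule is only a starting point; the rule engine described above would then improve it against criteria such as production demand and resource availability.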

    An investigation into the prognosis of electromagnetic relays.

    Electrical contacts provide a well-proven solution to switching various loads in a wide variety of applications, such as power distribution, control applications, automotive and telecommunications. However, electrical contacts have limited reliability due to degradation of the switching contacts caused by arcing and fretting; essentially, the life of the device may be determined by the limited life of its contacts. Failure to trip, spurious tripping and contact welding can, in critical applications such as control systems for avionics and nuclear power, cause significant costs due to downtime, as well as safety implications. Prognostics provides a way to assess the remaining useful life (RUL) of a component based on its current state of health and its anticipated future usage and operating conditions. In this thesis, the effects of contact wear on a set of electromagnetic relays used in an avionic power controller are examined, along with how contact resistance, combined with a prognostic approach, can be used to ascertain the RUL of the device. Two methodologies are presented: firstly, a Physics-based Model (PbM) of the degradation using the predicted material loss due to arc damage; secondly, a computationally efficient technique using posterior degradation data to form a state-space model in real time via a Sliding Window Recursive Least Squares (SWRLS) algorithm. Health monitoring using the presented techniques can provide knowledge of impending failure in high-reliability applications where the risks associated with loss of functionality are too high to endure. The future states of the system have been estimated based on Particle- and Kalman-filter projections of the models via a Bayesian framework. The performance of the prognostic health management algorithm over the contacts' life has been quantified using performance evaluation metrics, and model predictions have been correlated with experimental data. Prognostic metrics including the Prognostic Horizon (PH), alpha-lambda (α-λ), and Relative Accuracy have been used to assess the performance of the damage proxies, and a comparison of the two models is made.
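    A simplified view of the sliding-window idea is sketched below in Python: fit a linear degradation trend to the latest window of contact-resistance samples and extrapolate it to a failure threshold. The thesis's SWRLS algorithm updates the state-space model recursively rather than refitting per window, and the window size, threshold, and synthetic data here are illustrative assumptions.

```python
import numpy as np

def sliding_window_rul(resistance: np.ndarray, window: int, threshold: float) -> float:
    """Fit a linear trend (least squares) to the most recent `window`
    samples and extrapolate to the failure threshold to estimate RUL."""
    recent = resistance[-window:]
    slope, _ = np.polyfit(np.arange(window), recent, 1)   # degradation rate
    if slope <= 0:
        return np.inf                                     # no measurable degradation
    return (threshold - recent[-1]) / slope               # cycles to threshold

# Synthetic contact resistance (milliohms): slow drift plus noise.
rng = np.random.default_rng(0)
r = 100 + 0.05 * np.arange(2000) + rng.normal(0, 1, 2000)
print(f"RUL estimate: {sliding_window_rul(r, window=200, threshold=250):.0f} cycles")
```

    Projecting the fitted model forward with a Particle or Kalman filter, as in the thesis, additionally yields an uncertainty distribution over the RUL rather than a single point estimate.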