3,595 research outputs found

    A Bayesian network based learning system for modelling faults in large-scale manufacturing

    Manufacturing companies can benefit from the early prediction and detection of failures, using advanced data analytics to improve product yield and reduce system faults. Although an abundance of data on their processing systems exists, companies face difficulties in using it to gain insights that improve those systems. Bayesian networks (BNs) are considered here for diagnosing and predicting faults in a large manufacturing dataset from Bosch. While BN structure learning has traditionally been performed on smaller datasets, this work demonstrates, for the first time, the ability to learn an appropriate BN structure for a large dataset with little information on the variables. The paper also presents a new framework for building a probabilistic model of the Bosch dataset by selecting the variables that are statistically important to the response; the resulting BN can then answer probabilistic queries and classify products based on changes in sensor values during the production process.
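    The kind of probabilistic query such a BN answers can be sketched on a toy two-node network (Sensor → Fault). All probabilities below are illustrative assumptions, not values from the Bosch dataset or the paper's learned structure.

```python
# Minimal sketch of a probabilistic query on a two-node Bayesian network
# (Fault -> Sensor reading). All numbers are made up for illustration.

# Prior over the fault state and sensor likelihoods P(sensor=high | fault)
p_fault = {True: 0.02, False: 0.98}
p_sensor_high = {True: 0.70, False: 0.05}

def posterior_fault_given_high():
    """P(fault | sensor=high) by Bayes' rule."""
    num = p_sensor_high[True] * p_fault[True]
    den = sum(p_sensor_high[f] * p_fault[f] for f in (True, False))
    return num / den

print(round(posterior_fault_given_high(), 3))  # 0.222
```

    A real model learned from the Bosch data would have many such nodes, but each classification query reduces to the same kind of posterior computation.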

    A probabilistic model for information and sensor validation

    This paper develops a new theory and model for information and sensor validation. The model represents relationships between variables using Bayesian networks and uses probabilistic propagation to estimate the expected values of variables. If the estimated value of a variable differs from the actual value, an apparent fault is detected. The fault is only apparent because the estimated value may itself be based on faulty data. The theory extends our understanding of when real faults can be isolated from potential faults, and it supports an algorithm capable of isolating real faults without deferring the problem to expert-provided, domain-specific rules. To enable practical adoption in real-time processes, an anytime version of the algorithm is developed that, unlike most other algorithms, returns improving assessments of sensor validity as it accumulates evidence over time. The model is tested on the validation of temperature sensors during the start-up phase of a gas turbine, when conditions are not stable; a problem known to be challenging. The paper concludes with a discussion of the practical applicability and scalability of the model.
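    The core idea, estimating each sensor from the others and flagging deviations as apparent faults, can be sketched with redundant temperature readings. The tolerance and readings are invented for illustration; the paper's estimates come from probabilistic propagation in a BN, not from a simple median. A robust estimator is used here precisely because, as the abstract notes, the estimate may itself rest on faulty data.

```python
# Sketch of apparent-fault detection for redundant temperature sensors:
# each sensor's expected value is estimated from the remaining sensors
# (median, so one bad sensor cannot corrupt the estimate), and a reading
# is flagged when it deviates beyond a tolerance.
from statistics import median

def apparent_faults(readings, tol=5.0):
    """Indices of sensors whose reading differs from the estimate
    built from the other sensors by more than `tol`."""
    flagged = []
    for i, r in enumerate(readings):
        others = [x for j, x in enumerate(readings) if j != i]
        if abs(r - median(others)) > tol:
            flagged.append(i)
    return flagged

print(apparent_faults([301.0, 300.5, 299.8, 340.0]))  # [3]
```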

    Interactive and Intelligent Root Cause Analysis in Manufacturing with Causal Bayesian Networks and Knowledge Graphs

    Root Cause Analysis (RCA) in the manufacturing of electric vehicles is the process of identifying fault causes. Traditionally, RCA is conducted manually, relying on process-expert knowledge. Meanwhile, sensor networks collect significant amounts of data from the manufacturing process, and using this data makes RCA more efficient. However, purely data-driven methods such as Causal Bayesian Networks have problems scaling to large-scale, real-world manufacturing processes due to the vast number of potential cause-effect relationships (CERs); they can also leave out already-known CERs or learn spurious ones. This paper contributes an interactive and intelligent RCA tool that combines expert knowledge of an electric vehicle manufacturing process with a data-driven machine learning method: it reasons over a large-scale Knowledge Graph of the manufacturing process while learning a Causal Bayesian Network. In addition, an interactive user interface enables a process expert to give feedback on the root-cause graph by adding information to, and removing it from, the Knowledge Graph. The tool reduces the learning time of the Causal Bayesian Network while decreasing the number of spurious CERs, thereby closing the feedback loop between the expert and the machine learning method.
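    One way expert knowledge can constrain structure learning is by pruning the candidate CER set before any data-driven search. The edge names below are hypothetical, and the paper's Knowledge Graph reasoning is far richer than this whitelist/blacklist filter; this only sketches the interaction pattern.

```python
# Sketch: prune candidate cause-effect relationships (CERs) with expert
# knowledge before structure learning. Edges are (cause, effect) pairs;
# all names and sets here are invented for illustration.

candidate_cers = [("weld_temp", "seam_fault"), ("shift_id", "seam_fault"),
                  ("torque", "bolt_fault")]
known_cers = {("weld_temp", "seam_fault")}     # confirmed by experts
forbidden_cers = {("shift_id", "seam_fault")}  # ruled out as spurious

def prune(candidates):
    kept = [e for e in candidates if e not in forbidden_cers]
    # known edges are kept even if their data support is weak
    return sorted(set(kept) | known_cers)

print(prune(candidate_cers))
```

    Shrinking the candidate set this way is what reduces both learning time and the number of spurious CERs.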

    AI-enabled modeling and monitoring of data-rich advanced manufacturing systems

    The infrastructure of cyber-physical systems (CPS) is based on the meta-concept of cybermanufacturing systems (CMS), which synchronizes the Industrial Internet of Things (IIoT), cloud computing, industrial control systems (ICS), and big-data analytics in manufacturing operations. Artificial intelligence (AI) can be incorporated to make intelligent decisions in the day-to-day operations of CMS. Cyberattack surfaces in AI-based cybermanufacturing operations pose significant challenges, including unauthorized modification of systems, loss of historical data, destructive malware, and software malfunction. A cybersecurity framework can, however, be implemented to prevent unauthorized access, theft, damage, or other harmful attacks on electronic equipment, networks, and sensitive data. The five main steps of such a framework are divided into procedures and countermeasure efforts: identify, protect, detect, respond, and recover. Given these major challenges in AI-enabled cybermanufacturing systems, this dissertation proposes three research objectives that incorporate cybersecurity frameworks. The first addresses the in-situ additive manufacturing (AM) process authentication problem using high-volume video-streaming data. A side-channel monitoring approach based on an in-situ optical imaging system is established, and a tensor-based, layer-wise texture descriptor is constructed to describe the observed printing path. Multilinear principal component analysis (MPCA) is then leveraged to reduce the dimension of the texture descriptor, extracting low-dimensional features for detecting attack-induced alterations. The second research work addresses high-volume data-stream problems in multi-channel sensor fusion for diverse bearing-fault diagnosis. It proposes a new sensor-fusion method that integrates acoustic and vibration signals with different sampling rates and limited training data: the frequency-domain tensor is decomposed by MPCA, and the resulting low-dimensional process features feed a neural-network classifier. Building on the second method, the third research effort targets the recovery of multi-channel sensing signals when a substantial amount of data is missing due to sensor malfunction or transmission issues. It leverages a fully Bayesian CANDECOMP/PARAFAC (FBCP) factorization that captures multilinear interactions (channels × signals) among the latent factors of the sensor signals and imputes missing entries based on the observed ones.
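    The MPCA step shared by the first two objectives can be sketched with a non-iterative (HOSVD-style) variant: project each mode of a data tensor onto the leading left singular vectors of its mode-n unfolding. Shapes and ranks below are illustrative only, and the dissertation's MPCA additionally iterates the projections until convergence.

```python
# Rough sketch of MPCA-style dimension reduction: compute a projection
# matrix per tensor mode from the mode-n unfolding of the data, then
# shrink the tensor mode by mode into a small core of features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6, 5))  # e.g. frames x height x width

def mode_unfold(T, mode):
    """Matricize T along `mode` (that mode becomes the rows)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mpca_project(T, ranks):
    core = T
    for mode, k in enumerate(ranks):
        # top-k left singular vectors of the mode-n unfolding
        U = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)[0][:, :k]
        # multiply U^T along `mode`: move axis to front, contract, move back
        m = np.moveaxis(core, mode, 0)
        m = np.tensordot(U.T, m, axes=([1], [0]))
        core = np.moveaxis(m, 0, mode)
    return core

print(mpca_project(X, (3, 2, 2)).shape)  # (3, 2, 2)
```

    The flattened entries of the small core then serve as the low-dimensional features handed to a classifier.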

    Artificial intelligence for superconducting transformers

    Artificial intelligence (AI) techniques are now widely used across the electrical engineering sector because of their advantages for smarter manufacturing and for the accurate, efficient operation of electric devices. Power transformers are vital and expensive assets in the power network, and their consistent, fault-free operation greatly affects the reliability of the whole system. The superconducting transformer has the potential to fully modernize the power network in the near future, with compelling advantages over conventional oil-immersed counterparts: much lighter weight, more compact size, much lower loss, and higher efficiency. In this article, we examine the prospects for using AI to revolutionize superconducting transformer technology in many aspects of design, operation, condition monitoring, maintenance, and asset management. We believe this article offers a roadmap for what could and needs to be done in the current decade, 2020-2030, to integrate AI into superconducting transformer technology.

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning-engine practice and of the strategies for integrating Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission-funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision-support system, based on both CBR and MBR technology, applied to the management of emergency situations. This document is part of a deliverable for the RIMSAT project; although it was written in close contact with the project's requirements, it provides an overview broad enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
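    The "retrieve" step at the heart of CBR can be sketched as nearest-neighbour lookup over past incidents. The case features, plans, and distance measure below are invented for illustration; RIMSAT's case base and similarity measures are domain-specific and richer than this.

```python
# Toy sketch of CBR retrieval: find the stored emergency case closest
# to a new incident and reuse its response plan. All cases and feature
# names are hypothetical.

cases = [
    ({"severity": 3, "casualties": 0, "indoor": 1}, "evacuate building"),
    ({"severity": 8, "casualties": 5, "indoor": 0}, "dispatch field hospital"),
]

def distance(a, b):
    """Squared Euclidean distance over shared numeric features."""
    return sum((a[k] - b[k]) ** 2 for k in a)

def retrieve(query):
    return min(cases, key=lambda c: distance(query, c[0]))[1]

print(retrieve({"severity": 7, "casualties": 4, "indoor": 0}))
```

    An MBR component would complement this by checking the retrieved plan against a model of the situation rather than relying on past cases alone.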

    Developing Methods of Obtaining Quality Failure Information from Complex Systems

    The complexity of most engineering systems is constantly growing due to ever-increasing technological advancement. This results in a corresponding need for methods that adequately account for the reliability of such systems based on failure information from the components that make them up. This dissertation presents an approach to validating qualitative function-failure results derived from model abstraction details. The impact of the level of detail available to a system designer during the conceptual stages of design is considered for failure-space exploration in a complex system. Specifically, the study develops an efficient approach to the detailed function and behavior modeling required for complex-system analyses. In addition, comprehensive research on and documentation of existing function-failure analysis methodologies is synthesized into identified structural groupings. Using simulations, known governing equations are evaluated for component and system models to study responses to faults, accounting for detailed failure scenarios, component behaviors, fault-propagation paths, and overall system performance. The components were simulated at nominal states and at varying degrees of fault representing actual modes of operation. Information on product design and on the expected working conditions of components was used in the simulations to address areas normally overlooked during installation. The results of the system-model simulations were investigated using clustering analysis to develop an efficient grouping method and a measure of confidence in the obtained results. The intellectual merit of this work is the use of a simulation-based approach to study how generated failure scenarios reveal component fault interactions, leading to a better understanding of fault propagation within design models. The information from using varying-fidelity models for system analysis helps identify models that are sufficient at the conceptual design stages to highlight potential faults. This reduces resources such as cost, manpower, and time spent during system design. A broader impact of the project is to help design engineers identify critical components, quantify the risks of using particular components in their prototypes early in the design process, and improve fault-tolerant system designs. This research ultimately looks toward establishing a baseline for validating and comparing theories of complex-systems analysis.
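    The simulate-then-cluster idea can be sketched with a first-order component model run at nominal and degraded parameter values, followed by a tiny two-cluster grouping of the steady-state responses. The model, gains, and fault magnitudes are illustrative only, not from the dissertation's simulations.

```python
# Sketch: simulate a component at nominal and faulty parameter values,
# then group the responses with a minimal 2-means clustering.

def steady_state(gain, u=1.0):
    """Steady-state output of a first-order system: y_ss = gain * u."""
    return gain * u

# nominal gain is about 2.0; faults degrade the gain
runs = [steady_state(g) for g in (2.0, 1.98, 2.02, 0.9, 0.85, 2.01)]

def two_means(xs, iters=10):
    """Partition xs into two clusters around alternating means."""
    c0, c1 = min(xs), max(xs)
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return sorted(g0), sorted(g1)

faulty, nominal = two_means(runs)
print(faulty)   # [0.85, 0.9]  -- degraded runs
print(nominal)  # [1.98, 2.0, 2.01, 2.02]  -- nominal runs
```

    In the dissertation the same pattern runs over full behavioral models with many fault scenarios, and the cluster structure supplies the grouping and confidence measure.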

    Fault Detection and Isolation in Industrial Processes Using Deep Learning Approaches

    Automated fault detection is an important part of a quality-control system and has the potential to increase the overall quality of monitored products and processes. Fault detection for automotive instrument-cluster systems in computer-based manufacturing assembly lines is currently limited to simple boundary checking. The analysis of more complex, non-linear signals is performed manually by trained operators, whose knowledge is used to supervise quality checking and the manual detection of faults. In this paper, a novel approach to automated fault detection and isolation based on deep machine learning techniques is presented. The approach was tested on data generated by computer-based manufacturing systems equipped with local and remote sensing devices. The results show that the proposed approach models the different spatial/temporal patterns found in the data and can successfully diagnose and locate multiple classes of faults under real-time working conditions. The proposed method is shown to outperform other established fault detection and isolation methods.

    A method to classify steel plate faults based on ensemble learning

    With the fourth industrial revolution (Industry 4.0), machine learning methods are widely used across manufacturing for quality prediction, fault diagnosis, and maintenance. In the steel industry, faults/defects must be detected precisely in order to produce high-quality steel plates. However, determining an exact first-principles model relating process parameters to mechanical properties is challenging, and in the traditional steel manufacturing process plate defects are detected through manual, costly, and less productive offline inspection. Enabling the automatic detection of steel plate faults is therefore a great necessity. To this end, this study explores the capabilities of three machine learning models, AdaBoost, Bagging, and Random Forest, in detecting steel plate faults, using the well-known steel plate fault dataset provided by the Communication Sciences Research Centre Semeion. Many studies using this dataset aim only to classify defects in steel plates correctly with traditional machine learning models, ignoring the applicability of the developed models to real-world problems. Manufacturing is a dynamic process with constant adjustments and improvements, so a learning process is needed that determines the best model as new information arrives. In contrast to previous studies on this dataset, the article presents a systematic modelling approach comprising a normalization step in the data preparation stage to reduce the effect of outliers, a feature selection step in the dimension reduction stage to build a machine learning model with fewer inputs, and a hyperparameter optimization step in the model development stage to increase the model's accuracy. The performance of the developed models was compared using statistical metrics: precision, recall, sensitivity, and accuracy. The results revealed that AdaBoost performed well on this dataset, achieving accuracy scores of 93.15% and 91.90% for the training and test datasets, respectively.
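    The three-stage approach (normalization, feature selection, hyperparameter optimization) maps naturally onto a scikit-learn pipeline. The synthetic data, selected feature count, and parameter grid below are placeholders, not the Semeion steel-plate dataset or the article's tuned settings.

```python
# Sketch of a normalize -> select -> AdaBoost pipeline with grid search.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# placeholder data standing in for the steel-plate fault dataset
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),              # data preparation stage
    ("select", SelectKBest(f_classif, k=8)),  # dimension reduction stage
    ("clf", AdaBoostClassifier(random_state=0)),
])
grid = GridSearchCV(pipe, {"clf__n_estimators": [50, 100],
                           "clf__learning_rate": [0.5, 1.0]}, cv=3)
grid.fit(X_tr, y_tr)
print(round(grid.score(X_te, y_te), 2))
```

    Refitting the grid search as new production data arrives is one simple way to realize the "best model based on the arrival of new information" idea the article argues for.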