
    Validating Uncertainty-Aware Virtual Sensors For Industry 4.0

    In Industry 4.0 manufacturing, sensors provide information about the state, behavior, and performance of processes. One of the main goals of Industry 4.0 is therefore to collect high-quality data in pursuit of its business goals, namely zero-defect manufacturing and high-quality products. However, hardware sensors cannot always gather quality data, for several reasons. First, Industry 4.0 deploys sensors in harsh environments, so measurements are likely to be corrupted by errors such as outliers, noise, or missing values. Second, sensors can over time develop faults such as bias, drift, complete failure, and precision degradation. Finally, direct sensing of a process variable can be unavailable due to environmental constraints, for example when a surface temperature lies beyond the range of the physical sensor. A virtual sensor is a tool that addresses these problems by allowing online estimation of process variables when the physical sensor is unreliable or unavailable. Deep learning methods are effective for developing virtual sensors; however, they assume that the data used for training and deployment are independent and identically distributed (i.i.d.). Deploying deep learning in high-risk environments such as Industry 4.0 is therefore challenging: if the i.i.d. assumption fails to hold, the model may make errors that lead to disastrous consequences such as financial losses, reputational damage, or even death. We can prevent model mistakes only if the model estimates the uncertainty of its predictions. Unfortunately, current deep learning-based virtual sensors are built on frequentist models, making them unable to capture uncertainty accurately. In this thesis, we explore the use of Bayesian convolutional neural networks (BCNN) to generate uncertainty-aware virtual sensors for Industry 4.0. We use two publicly available, realistic industrial datasets to generate virtual sensors and conduct experiments: CNC Mill Tool Wear data (CNC) from a CNC milling machine, provided by the University of Michigan, and Tennessee Eastman Process data (TEP), provided by the Eastman Chemical Company for process monitoring and control studies. The root-mean-square error (RMSE), mean absolute percentage error (MAPE), and R-squared (R2) are used to evaluate the predictive capability of the generated virtual sensors. Their performance is compared to that of standard neural network-based virtual sensors, namely a convolutional neural network (CNN) and a long short-term memory (LSTM) network. We demonstrated the Bayesian neural network's ability to quantify uncertainty by computing the coverage probability of the estimated uncertainty. Additionally, we tested whether the estimated uncertainty could detect changes in the input data distribution using fault injection. Our BCNN virtual sensor achieved the best R-squared scores, with R2 = 0.99 on the CNC data and R2 = 0.98 on the TEP data. The coverage probability scores indicate reasonably good uncertainty estimates. However, although the predictive uncertainty detected faults in the input data, its accuracy declined as the fault length increased.
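The evaluation criteria named in this abstract can be made concrete with a short sketch: the three point-prediction metrics (RMSE, MAPE, R2) plus a prediction-interval coverage probability, which is one common way to score an uncertainty-aware regressor. The data, interval width, and function names below are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of the abstract's evaluation metrics, on invented data.
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error of the point predictions
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    # Mean absolute percentage error (assumes no zero targets)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def coverage_probability(y_true, y_pred, y_std, z=1.96):
    # Fraction of targets inside the ~95% Gaussian predictive interval
    lower, upper = y_pred - z * y_std, y_pred + z * y_std
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

y_true = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical ground truth
y_pred = np.array([1.1, 1.9, 3.2, 3.8])   # hypothetical model predictions
y_std  = np.array([0.2, 0.2, 0.3, 0.3])   # hypothetical predictive std devs
print(rmse(y_true, y_pred), mape(y_true, y_pred), r2(y_true, y_pred))
print(coverage_probability(y_true, y_pred, y_std))
```

A coverage probability close to the nominal level (here 95%) indicates well-calibrated uncertainty; values far below it suggest overconfident intervals.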

    RootPath: Root Cause and Critical Path Analysis to Ensure Sustainable and Resilient Consumer-Centric Big Data Processing under Fault Scenarios

    The exponential growth of consumer-centric big data has raised concerns about the sustainability and resilience of data processing systems, particularly under fault scenarios. This paper presents an approach integrating Root Cause Analysis (RCA) and Critical Path Analysis (CPA) to address these challenges and ensure sustainable, resilient consumer-centric big data processing. The proposed methodology identifies the root causes of system faults probabilistically using Bayesian networks. Furthermore, an Artificial Neural Network (ANN)-based critical path method identifies the critical path responsible for high makespan in MapReduce workflows, enhancing fault tolerance and optimizing resource allocation. To evaluate the effectiveness of the proposed methodology, we conduct a series of fault injection experiments simulating various real-world fault scenarios commonly encountered in operational environments. The experimental results show that both models perform very well, with accuracies of 95% and 98%, respectively, enabling the development of more robust and reliable consumer-centric systems.
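The Bayesian-network side of this approach can be illustrated with a minimal sketch: a single cause-to-symptom layer evaluated with Bayes' rule to rank candidate root causes given one observed fault symptom. The causes, priors, and likelihoods below are invented for illustration and are not from the paper.

```python
# Hypothetical two-layer Bayesian network (root cause -> observed fault),
# evaluated by Bayes' rule for a single observed symptom.
priors = {"disk_failure": 0.05, "network_partition": 0.15, "oom_kill": 0.80}
# P(task_timeout_observed | cause) -- invented likelihoods
likelihood = {"disk_failure": 0.70, "network_partition": 0.90, "oom_kill": 0.30}

# P(symptom) = sum over causes of P(cause) * P(symptom | cause)
evidence = sum(priors[c] * likelihood[c] for c in priors)
# Posterior P(cause | symptom) for each candidate root cause
posterior = {c: priors[c] * likelihood[c] / evidence for c in priors}
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

A full RCA system would chain many such conditional tables and run general-purpose inference, but the ranking principle is the same.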

    ADAPTS: An Intelligent Sustainable Conceptual Framework for Engineering Projects

    This paper presents a conceptual framework for optimizing environmental sustainability in engineering projects, covering products as well as industrial facilities and processes. The main objective of this work is to help researchers approach the optimization of engineering projects under sustainability criteria using current Machine Learning techniques. To develop the framework, a bibliographic search was carried out on the Web of Science; the selected documents were analyzed through a hermeneutic procedure, from which the conceptual framework was derived. A pyramid-shaped graphic representation clearly defines the variables of the proposed framework and their relationships. The framework consists of 5 dimensions, giving it the acronym ADAPTS. At the base are: (1) the Application for which it is intended, (2) the available DAta, (3) the APproach under which it operates, and (4) the machine learning Tool used. At the top of the pyramid is (5) the necessary Sensing. A case study is presented to show its applicability. This work is part of a broader line of research on optimization under sustainability criteria. Funding: Telefónica Chair “Intelligence in Networks” of the University of Seville (Spain).

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the past fifteen years of literature on Machine Learning (ML) algorithms applied to self-organizing cellular networks is presented. For future networks to overcome the limitations and issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work outlines future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.

    Exploring Prognostic and Diagnostic Techniques for Jet Engine Health Monitoring: A Review of Degradation Mechanisms and Advanced Prediction Strategies

    Maintenance is crucial for aircraft engines because of the demanding conditions to which they are exposed during operation. A proper maintenance plan is essential for ensuring safe flights and prolonging engine life, and it plays a major role in managing costs for aeronautical companies. Various forms of degradation can affect different engine components. To optimize cost management, modern maintenance plans employ diagnostic and prognostic techniques, such as Engine Health Monitoring (EHM), which assesses the health of the engine from monitored parameters. In recent years, various EHM systems based on computational techniques have been developed. These algorithms are often enhanced with data reduction and noise filtering tools, which minimize computational time and effort and improve performance by removing noise from sensor data. This paper discusses the various mechanisms that lead to the degradation of aircraft engine components and their impact on engine performance, and provides an overview of the most commonly used data reduction, diagnostic, and prognostic techniques.

    Data Mining Applications to Fault Diagnosis in Power Electronic Systems: A Systematic Review


    Embodied interaction with visualization and spatial navigation in time-sensitive scenarios

    According to the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision making, continuously maintaining a high level of attention, together with a deep understanding of the task performed and its context, is essential. According to the theory of embodied cognition proposed by Lakoff, embodied interaction with machines has the potential to promote thinking and learning. Additionally, a hybrid human-machine system utilizing natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford an array of cognitive benefits outstripping more static forms of interaction (e.g., a computer keyboard). This research proposes such a computational framework based on a Bayesian approach; the framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive, spatial navigational problems. Toward the goal of assessing the operator's level of attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention is inferred through networks coined Bayesian Attentional Networks (BANs), structures describing cause-effect relationships between the operator's attention, physical actions, and decision-making. The proposed framework also generates a representative BAN, called the Consensus (Majority) Model (CMM); the CMM consists of an iteratively derived and agreed-upon graph among candidate BANs obtained from experts and from an automatic learning process. Finally, the best combinations of interaction modalities and feedback are determined through the use of particular utility functions. This methodology was applied to a spatial navigational scenario wherein the operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's level of attention. Users were instructed to complete a series of spatial-navigational tasks using an assigned pairing of an interaction modality out of five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality out of two (visual-based or auditory-based). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions to a spatial navigational problem. Moreover, the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results also showed that the embodied interaction-based multimodal interface decreased execution errors in the cyber-physical scenarios (p < .001). We therefore conclude that appropriate use of interaction and feedback modalities allows operators to maintain their focus of attention, reduce errors, and enhance task performance in solving decision-making problems.

    MACHINE LEARNING OPERATIONS (MLOPS) ARCHITECTURE CONSIDERATIONS FOR DEEP LEARNING WITH A PASSIVE ACOUSTIC VECTOR SENSOR

    As machine learning-augmented decision-making becomes more prevalent, defense applications of these techniques are needed to avoid being outpaced by peer adversaries. One area with significant potential is deep learning applied to the classification of passive sonar acoustic signatures, which would accelerate tactical, operational, and strategic decision-making in one of the most contested and difficult warfare domains. Convolutional Neural Networks have achieved some of the greatest success at this task; however, the lack of a full production pipeline to continually train, deploy, and evaluate acoustic deep learning models throughout their lifecycle in a realistic architecture is a barrier to further and more rapid progress in this field of research. The two main contributions of this thesis are a proposed production architecture for model lifecycle management using Machine Learning Operations (MLOps) and an evaluation of that architecture on a live passive sonar stream. Using the proposed production architecture, this work evaluates model performance differences in a production setting and explores methods to improve model performance in production. By documenting the considerations for creating a platform and architecture to continuously train, deploy, and evaluate various deep learning acoustic classification models, this study aims to create a framework and recommendations that accelerate progress in acoustic deep learning classification research. Los Alamos National Lab. Lieutenant, United States Navy. Approved for public release. Distribution is unlimited.

    Statistical Methods for Semiconductor Manufacturing

    In this thesis, techniques for non-parametric modeling, machine learning, filtering and prediction, and run-to-run control for semiconductor manufacturing are described. In particular, algorithms have been developed for two major application areas: - Virtual Metrology (VM) systems; - Predictive Maintenance (PdM) systems. Both technologies have proliferated in recent years in semiconductor factories, called fabs, in order to increase productivity and decrease costs. VM systems aim to predict quantities on the wafer, the main and basic product of the semiconductor industry, that may or may not be physically measurable. These quantities are usually ’costly’ to measure in economic or temporal terms; the prediction is instead based on process variables and/or logistic information about the production, which are always available and can be used for modeling at no further cost. PdM systems, on the other hand, aim to predict when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical methods and on the availability of process/logistic data, contrasts with the classical approaches: - Run-to-Failure (R2F), where no intervention is performed on the machine/process until a breakage or specification violation occurs in production; - Preventive Maintenance (PvM), where maintenance is scheduled in advance based on time intervals or production iterations. Neither of these approaches is optimal: they do not guarantee that breakages and wasted wafers will be avoided and, in the case of PvM, they may lead to unnecessary maintenance that does not fully exploit the lifetime of the machine or process. The main goal of this thesis is to show, through several applications and feasibility studies, that statistical modeling algorithms and control systems can improve the efficiency, yield, and profits of a manufacturing environment like the semiconductor one, where large amounts of data are recorded and can be employed to build mathematical models. We present several original contributions, both applications and methods. The introduction of this thesis gives an overview of the semiconductor fabrication process: the most common practices in Advanced Process Control (APC) systems and the major issues for engineers and statisticians working in this area are presented. We also illustrate the methods and mathematical models used in the applications. We then discuss in detail the following applications: - A VM system for estimating the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, exploiting Fault Detection and Classification (FDC) data; in this tool, a new clustering algorithm based on Information Theory (IT) elements is proposed, and the Least Angle Regression (LARS) algorithm is applied for the first time to VM problems. - A new VM module for a multi-step (CVD, Etching, and Lithography) line, employing Multi-Task Learning techniques. - A new Machine Learning algorithm based on Kernel Methods for the estimation of scalar outputs from time-series inputs. - Run-to-run control algorithms that employ both physical measurements and statistical ones (coming from a VM system), based on IT elements. - A PdM module based on filtering and prediction techniques (Kalman Filter, Monte Carlo methods), developed to predict maintenance interventions in the Epitaxy process. - A PdM system based on Elastic Nets for maintenance prediction in an Ion Implantation tool. Several of the aforementioned works were developed in collaboration with major European semiconductor companies within the European project UE FP7 IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); these collaborations are specified throughout the thesis, underlining the practical aspects of implementing the proposed technologies in a real industrial environment.
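As a rough illustration of the filtering-and-prediction component mentioned for the PdM module, the sketch below runs a scalar Kalman filter over a noisy, slowly drifting health indicator; a maintenance threshold could then be applied to the filtered estimate. The random-walk dynamics, noise values, and data are invented assumptions, not the thesis's actual model.

```python
# Hypothetical scalar Kalman filter for a degrading health indicator.
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.04, x0=0.0, p0=1.0):
    """Random-walk model: x_k = x_{k-1} + w (var q), z_k = x_k + v (var r)."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p += q                  # predict: variance grows by process noise q
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update state with measurement residual
        p *= (1 - k)            # posterior variance after the update
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_drift = np.linspace(0.0, 1.0, 100)          # indicator slowly degrading
noisy = true_drift + rng.normal(0.0, 0.2, 100)   # noisy sensor readings
est = kalman_1d(noisy)
print(round(float(est[-1]), 2))                  # filtered final estimate
```

In a PdM setting, crossing a predefined degradation threshold on the filtered (rather than raw) signal would trigger the maintenance intervention, avoiding false alarms from measurement noise.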