
    Hierarchical Framework for Interpretable and Probabilistic Model-Based Safe Reinforcement Learning

    Full text link
    The difficulty of identifying the physical model of complex systems has led to exploring methods that do not rely on such modeling. Deep reinforcement learning has pioneered solving this problem by learning purely from interaction with the system, without requiring its physical model. However, its black-box learning approach makes it difficult to apply within real-world, safety-critical systems without providing explanations for the actions derived by the model. Furthermore, an open research question in deep reinforcement learning is how to focus policy learning on critical decisions within a sparse domain. This paper proposes a novel approach for the use of deep reinforcement learning in safety-critical systems. It combines the advantages of probabilistic modeling and reinforcement learning with the added benefit of interpretability, and it works in collaboration and synchronization with conventional decision-making strategies. The proposed BC-SRLA is activated in specific situations, identified autonomously through the fused information of the probabilistic model and reinforcement learning, such as abnormal conditions or when the system is close to failure. Further, it is initialized with a baseline policy using policy cloning, which allows minimal interaction with the environment and addresses the challenges associated with using RL in safety-critical industries. The effectiveness of the BC-SRLA is demonstrated through a case study in maintenance applied to turbofan engines, where it shows superior performance to the prior art and other baselines.
    Comment: arXiv admin note: text overlap with arXiv:2206.1343
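
    The policy-cloning initialization mentioned above can be illustrated with a minimal behavior-cloning sketch: a policy network is fitted, in a supervised fashion, to state-action pairs logged from a conventional baseline controller, so the agent starts from a sensible policy instead of acting randomly in a safety-critical environment. The network shape, data, and names below are illustrative assumptions, not the authors' BC-SRLA implementation.

```python
# Hypothetical behavior-cloning sketch (not the authors' BC-SRLA code):
# fit a policy network to state-action pairs logged from a baseline controller.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),              # logits over discrete actions
        )

    def forward(self, state):
        return self.net(state)

def clone_baseline(policy, states, actions, epochs=50, lr=1e-3):
    """Supervised imitation of the baseline controller's decisions."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)    # match the baseline's actions
        loss.backward()
        opt.step()
    return policy

# Synthetic data standing in for logged baseline trajectories.
states = torch.randn(256, 8)                       # 8 sensor readings per state
actions = torch.randint(0, 3, (256,))              # 3 possible maintenance actions
policy = clone_baseline(PolicyNet(8, 3), states, actions)
```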

    Machine Learning in Manufacturing towards Industry 4.0: From ‘For Now’ to ‘Four-Know’

    Get PDF
    While attracting increasing research attention in science and technology, Machine Learning (ML) is playing a critical role in the digitalization of manufacturing operations towards Industry 4.0. Recently, ML has been applied in several fields of production engineering to solve a variety of tasks with different levels of complexity and performance. However, in spite of the enormous number of ML use cases, there is no guidance or standard for developing ML solutions from ideation to deployment. This paper aims to address this problem by proposing an ML application roadmap for the manufacturing industry based on the state-of-the-art published research on the topic. First, this paper presents two dimensions for formulating ML tasks, namely ‘Four-Know’ (Know-what, Know-why, Know-when, Know-how) and ‘Four-Level’ (Product, Process, Machine, System). These are used to analyze ML development trends in manufacturing. Then, the paper provides an implementation pipeline starting from the very early stages of ML solution development and summarizes the available ML methods, including supervised, semi-supervised, unsupervised, and reinforcement learning methods, along with their typical applications. Finally, the paper discusses the current challenges of ML applications and provides an outline of possible directions for future developments.

    Review and Comparison of Intelligent Optimization Modelling Techniques for Energy Forecasting and Condition-Based Maintenance in PV Plants

    Get PDF
    Within the field of soft computing, intelligent optimization modelling techniques include various major techniques in artificial intelligence. These techniques aim to generate new business knowledge by transforming sets of "raw data" into business value. One of the principal applications of these techniques is the design of predictive analytics for the improvement of advanced CBM (condition-based maintenance) strategies and energy production forecasting. These advanced techniques can be used to transform control system data, operational data and maintenance event data into failure diagnostic and prognostic knowledge and, ultimately, to derive expected energy generation. One of the systems where these techniques can be applied with massive potential impact is the legacy monitoring systems existing in solar PV energy generation plants. These systems produce a great amount of data over time, while at the same time they demand an important effort to increase their performance through the use of more accurate predictive analytics that reduce production losses, which has a direct impact on ROI. How to choose the most suitable techniques to apply is one of the problems to address. This paper presents a review and a comparative analysis of six intelligent optimization modelling techniques, which have been applied to a PV plant case study, using the energy production forecast as the decision variable. The proposed methodology not only aims to elicit the most accurate solution but also validates the results by comparing the different outputs of the different techniques.
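
    As a rough illustration of how such a comparison can be organized, the sketch below scores a few off-the-shelf regressors on a synthetic PV energy-production forecasting task using cross-validated mean absolute error. The models and data are placeholders chosen for the example, not the six techniques evaluated in the paper.

```python
# Illustrative comparison scaffold (not the paper's six techniques): several
# regressors are scored on the same forecasting task so the most accurate
# predictive-analytics technique can be selected.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))       # synthetic stand-ins for irradiance, temperature, ...
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=500)   # energy output

models = {
    "SVR": SVR(),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: cross-validated MAE = {mae:.3f}")
```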

    Scalable Concept Extraction in Industry 4.0

    Full text link
    Industry 4.0 leverages digital technologies and machine learning techniques to connect and optimize manufacturing processes. Central to this idea is the ability to transform raw data into human-understandable knowledge for reliable data-driven decision-making. Convolutional Neural Networks (CNNs) have been instrumental in processing image data, yet their "black box" nature complicates the understanding of their prediction process. In this context, recent advances in the field of eXplainable Artificial Intelligence (XAI) have proposed the extraction and localization of concepts, i.e., the visual cues that intervene in the prediction process of CNNs. This paper tackles the application of concept extraction (CE) methods to Industry 4.0 scenarios. To this end, we modify a recently developed technique, "Extracting Concepts with Local Aggregated Descriptors" (ECLAD), improving its scalability. Specifically, we propose a novel procedure for calculating concept importance, utilizing a wrapper function designed for CNNs. This process is aimed at decreasing the number of times each image needs to be evaluated. Subsequently, we demonstrate the potential of CE methods by applying them to three industrial use cases. We selected three representative use cases in the context of quality control for material design (tailored textiles), manufacturing (carbon fiber reinforcement), and maintenance (photovoltaic module inspection). In these examples, CE was able to successfully extract and locate concepts directly related to each task. That is, the visual cues related to each concept coincided with what human experts would use to perform the task themselves, even when the visual cues were entangled between multiple classes. Through empirical results, we show that CE can be applied for understanding CNNs in an industrial context, giving useful insights that relate to domain knowledge.
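
    For readers unfamiliar with concept importance, the sketch below shows a generic occlusion-style estimate: the drop in the predicted class score when a concept's pixels are masked serves as a proxy for how much that visual cue drives the prediction. This is an illustrative assumption, not the wrapper-based ECLAD procedure proposed in the paper.

```python
# Generic occlusion-style concept-importance estimate (illustrative only,
# not the ECLAD wrapper): mask the concept's pixels and measure the score drop.
import torch
import torch.nn as nn

def concept_importance(cnn, image, concept_mask, target_class):
    """image: (1, C, H, W) tensor; concept_mask: (H, W) boolean tensor."""
    cnn.eval()
    with torch.no_grad():
        base = torch.softmax(cnn(image), dim=1)[0, target_class]
        masked = image.clone()
        masked[:, :, concept_mask] = 0.0           # remove the concept's pixels
        drop = base - torch.softmax(cnn(masked), dim=1)[0, target_class]
    return drop.item()

# Usage with a tiny stand-in CNN and a random image/mask.
cnn = nn.Sequential(nn.Conv2d(3, 4, 3), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(4, 2))
image = torch.randn(1, 3, 32, 32)
mask = torch.zeros(32, 32, dtype=torch.bool)
mask[8:16, 8:16] = True                            # region occupied by the concept
print(concept_importance(cnn, image, mask, target_class=0))
```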

    State of AI-based monitoring in smart manufacturing and introduction to focused section

    Get PDF
    Over the past few decades, intelligentization, supported by artificial intelligence (AI) technologies, has become an important trend for industrial manufacturing, accelerating the development of smart manufacturing. In modern industries, standard AI has been endowed with additional attributes, yielding the so-called industrial artificial intelligence (IAI) that has become the technical core of smart manufacturing. AI-powered manufacturing brings remarkable improvements in many aspects of closed-loop production chains, from manufacturing processes to end-product logistics. In particular, IAI incorporating domain knowledge has benefited the area of production monitoring considerably. Advanced AI methods such as deep neural networks, adversarial training, and transfer learning have been widely used to support both diagnostics and predictive maintenance of the entire production process. It is generally believed that IAI is among the critical technologies needed to drive the future evolution of industrial manufacturing. This article offers a comprehensive overview of AI-powered manufacturing and its applications in monitoring. More specifically, it summarizes the key technologies of IAI and discusses their typical application scenarios with respect to three major aspects of production monitoring: fault diagnosis, remaining useful life prediction, and quality inspection. In addition, the existing problems and future research directions of IAI are also discussed. This article further introduces the papers in this focused section on AI-based monitoring in smart manufacturing by weaving them into the overview, highlighting how they contribute to and extend the body of literature in this area.
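
    As a concrete hint of what one of the methods listed above (transfer learning) looks like in a fault-diagnosis setting, the fragment below freezes a pretrained vision backbone and replaces its head so only the new classifier is fine-tuned on labelled fault images. The backbone, class count, and setup are assumptions made for illustration, not taken from the article.

```python
# Hedged transfer-learning fragment: reuse pretrained features, retrain the head
# on a small fault-diagnosis dataset (e.g. 4 fault classes, assumed here).
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="DEFAULT")          # ImageNet-pretrained features
for p in backbone.parameters():
    p.requires_grad = False                            # freeze the pretrained layers
backbone.fc = nn.Linear(backbone.fc.in_features, 4)    # new head for 4 fault classes
# Only backbone.fc is optimized during fine-tuning on the target dataset.
```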

    Explainable Predictive Maintenance

    Full text link
    Explainable Artificial Intelligence (XAI) fills the role of a critical interface fostering interactions between sophisticated intelligent systems and diverse individuals, including data scientists, domain experts, end-users, and more. It aids in deciphering the intricate internal mechanisms of "black box" Machine Learning (ML), rendering the reasons behind their decisions more understandable. However, current research in XAI primarily focuses on two aspects: ways to facilitate user trust, and ways to debug and refine the ML model. The majority of it falls short of recognising the diverse types of explanations needed in broader contexts, as different users and varied application areas necessitate solutions tailored to their specific needs. One such domain is Predictive Maintenance (PdM), an exploding area of research under the Industry 4.0 & 5.0 umbrella. This position paper highlights the gap between existing XAI methodologies and the specific requirements for explanations within industrial applications, particularly the Predictive Maintenance field. Despite explainability's crucial role, this subject remains a relatively under-explored area, making this paper a pioneering attempt to bring relevant challenges to the research community's attention. We provide an overview of predictive maintenance tasks and accentuate the need and varying purposes for corresponding explanations. We then list and describe XAI techniques commonly employed in the literature, discussing their suitability for PdM tasks. Finally, to make the ideas and claims more concrete, we demonstrate XAI applied in four specific industrial use cases: commercial vehicles, metro trains, steel plants, and wind farms, spotlighting areas requiring further research.
    Comment: 51 pages, 9 figures
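
    As an illustration of one XAI technique commonly paired with predictive maintenance models, the sketch below applies SHAP to a tree-based remaining-useful-life regressor; the model, data, and the choice of SHAP itself are assumptions made for the example, not recommendations taken from the paper.

```python
# Hedged example: SHAP attributions for a remaining-useful-life (RUL) regressor
# trained on synthetic sensor features.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                      # synthetic sensor features
rul = 100 - 20 * X[:, 0] - 5 * X[:, 2] + rng.normal(scale=2, size=300)

model = GradientBoostingRegressor().fit(X, rul)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])        # per-feature contributions
print(shap_values.shape)                           # (10, 6): one value per feature
```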

    Using recurrent neural networks to predict the time for an event

    Get PDF
    Final project of the Master in Fundamentals of Data Science (Màster de Fonaments de Ciència de Dades), Faculty of Mathematics, Universitat de Barcelona, 2018. Advisor: Jordi Vitrià i Marca. One of the main concerns of the manufacturing industry is the constant threat of unplanned stops. Even if the maintenance guidelines are followed for all the components of the line, these downtimes are common and they affect productivity. Most of what is done nowadays in manufacturing plants involves classic statistics and, sometimes, online monitoring. However, in most industries the data related to the process is monitored and saved for regulatory purposes. Unfortunately it is barely used, while current technologies offer a wide horizon of possibilities. The time to an event is a primary outcome of interest in many fields, e.g., medical research, customer churn, etc., and we think that it is also very interesting for Predictive Maintenance. The time to an event (or, in this context, time to failure) is typically positively skewed, subject to censoring, and explained by time-varying variables. Therefore conventional statistical learning techniques such as linear regression or random forests do not apply; instead we have to rely on more complex methods. In particular we focus on the WTTE-RNN framework proposed by Egil Martinsson, which employs Recurrent Neural Networks to predict the parameters of a Weibull distribution. The result is a flexible and powerful model especially suited for time-distributed data that can be organized in batches.
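
    A minimal sketch of that idea is shown below: a recurrent network emits Weibull parameters (alpha, beta) at each time step and is trained with the censored Weibull log-likelihood. Architectural details, the parameterization, and the data are assumptions for illustration, not Egil Martinsson's reference implementation.

```python
# Sketch of the WTTE-RNN idea: an RNN predicts Weibull (alpha, beta) per step,
# trained by maximizing the censored Weibull log-likelihood.
import torch
import torch.nn as nn

class WTTERNN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # raw (alpha, beta) outputs

    def forward(self, x):
        h, _ = self.rnn(x)
        out = self.head(h)
        alpha = torch.exp(out[..., 0])                        # scale > 0
        beta = nn.functional.softplus(out[..., 1]) + 1e-6     # shape > 0
        return alpha, beta

def weibull_nll(alpha, beta, t, uncensored):
    """Negative log-likelihood of time-to-event t; uncensored is 1 if the event
    was observed and 0 if the observation was right-censored."""
    t = t.clamp(min=1e-6)
    hazard = (t / alpha) ** beta                    # cumulative hazard
    log_pdf_part = torch.log(beta / alpha) + (beta - 1) * torch.log(t / alpha)
    return -(uncensored * log_pdf_part - hazard).mean()

# One training step on synthetic sequences of shape (batch, time, features).
x = torch.randn(16, 30, 5)
t = torch.rand(16, 30) * 50                         # time until next failure
u = (torch.rand(16, 30) > 0.3).float()              # roughly 70% uncensored
model = WTTERNN(n_features=5)
alpha, beta = model(x)
loss = weibull_nll(alpha, beta, t, u)
loss.backward()
```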

    A Hierarchical, Fuzzy Inference Approach to Data Filtration and Feature Prioritization in the Connected Manufacturing Enterprise

    Get PDF
    The current big data landscape is one in which the technology and capability to capture and store data has preceded and outpaced the corresponding capability to analyze and interpret it. This has led naturally to the development of elegant and powerful algorithms for data mining, machine learning, and artificial intelligence to harness the potential of the big data environment. A competing reality, however, is that limitations exist in how and to what extent human beings can process complex information. The convergence of these realities is a tension between the technical sophistication or elegance of a solution and its transparency or interpretability to the human data scientist or decision maker. This dissertation, contextualized in the connected manufacturing enterprise, presents an original Fuzzy Approach to Feature Reduction and Prioritization (FAFRAP) designed to assist the data scientist in filtering and prioritizing data for inclusion in supervised machine learning models. A set of sequential filters reduces the initial set of independent variables, and a fuzzy inference system outputs a crisp numeric value associated with each feature to rank-order and prioritize features for inclusion in model training. Additionally, the fuzzy inference system outputs a descriptive label to assist in the interpretation of each feature's usefulness with respect to the problem of interest. Model testing is performed using three publicly available datasets from an online machine learning data repository and later applied to a case study in electronic assembly manufacture. Consistency of model results is experimentally verified using Fisher's Exact Test, and results of filtered models are compared to results obtained by the unfiltered sets of features using a proposed novel metric, the performance-size ratio (PSR).
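
    The core mechanism (a fuzzy inference system mapping feature statistics to a crisp priority score) can be sketched as follows; the chosen inputs, membership functions, and rules are illustrative assumptions, not the FAFRAP design from the dissertation.

```python
# Hypothetical fuzzy inference sketch: two feature statistics are fuzzified and
# combined by rules into a crisp priority score used to rank-order features.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

universe = np.arange(0.0, 1.01, 0.01)
relevance = ctrl.Antecedent(universe, 'relevance')    # e.g. correlation with the target
redundancy = ctrl.Antecedent(universe, 'redundancy')  # e.g. correlation with other features
priority = ctrl.Consequent(universe, 'priority')

for var in (relevance, redundancy, priority):         # triangular membership functions
    var['low'] = fuzz.trimf(universe, [0.0, 0.0, 0.5])
    var['medium'] = fuzz.trimf(universe, [0.0, 0.5, 1.0])
    var['high'] = fuzz.trimf(universe, [0.5, 1.0, 1.0])

rules = [
    ctrl.Rule(relevance['high'] & redundancy['low'], priority['high']),
    ctrl.Rule(relevance['medium'], priority['medium']),
    ctrl.Rule(relevance['low'] | redundancy['high'], priority['low']),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['relevance'] = 0.85
sim.input['redundancy'] = 0.15
sim.compute()
print(f"crisp priority score: {sim.output['priority']:.2f}")
```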

    From the digital data revolution to digital health and digital economy toward a digital society: Pervasiveness of Artificial Intelligence

    Get PDF
    Technological progress has led to powerful computers and communication technologies that nowadays penetrate all areas of science, industry and our private lives. As a consequence, all these areas are generating digital traces of data amounting to big data resources. This opens unprecedented opportunities but also challenges for the analysis, management, interpretation and utilization of these data. Fortunately, recent breakthroughs in deep learning algorithms now complement machine learning and statistical methods for an efficient analysis of such data. Furthermore, advances in text mining and natural language processing, e.g., word-embedding methods, also enable the processing of large amounts of text data from diverse sources such as governmental reports, blog entries in social media or clinical health records of patients. In this paper, we present a perspective on the role of artificial intelligence in these developments and also discuss potential problems we are facing in a digital society.