
    Moving towards data-driven decision-making in maintenance

    Traditionally, maintenance efficiency in the maintenance industry has been limited by the capability of the experts making the decisions. The advancement of digital technologies, however, has made it possible to improve the effectiveness and efficiency of maintenance activities by complementing expert assessment with insight derived from data. This opportunity has led companies to shift towards a new type of maintenance strategy called data-driven maintenance. Despite the availability of data and analytical tools, companies are still struggling to fully harness their data assets to improve maintenance activities because of data-centric challenges. Hence, the main objective of this dissertation is to identify and mitigate the challenges that limit organizational decision-making capabilities for improving maintenance effectiveness. First, quantitative and descriptive analyses of case studies in Finnish multinational manufacturing companies were carried out to identify key data-centric challenges; the study identified data quality, interoperability, and data extraction as the key challenges. Each identified challenge was then investigated through one or more original publications. The main results of the dissertation are methods and frameworks to i) assess and compare the data quality of maintenance reporting procedures, ii) provide a two-level interoperability framework for inter-system interoperability, and iii) provide a data discovery methodology for extracting data for the Extract, Transform, Load (ETL) process. The applicability and validity of each proposed methodology and framework has been validated through one or more use cases. For validation, three tools, namely the MRQA Dashboard, Open-messaging Middleware, and Data Model Logger, have been developed to tackle the identified data-centric challenges.

    Context-specific sampling method for contextual explanations

    Explaining the results of machine learning models is an active research topic in the Artificial Intelligence (AI) domain, with the objective of providing mechanisms to understand and interpret the results of the underlying black-box model in a human-understandable form. With this objective, several eXplainable Artificial Intelligence (XAI) methods have been designed and developed based on varied fundamental principles. Some methods, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are based on a surrogate model, while others, such as Contextual Importance and Utility (CIU), do not create or rely on a surrogate model to generate their explanations. Despite the difference in underlying principles, these methods use different sampling techniques, such as uniform or weighted sampling, to generate explanations. CIU, which emphasizes context-aware decision explanation, employs uniform sampling to generate representative samples. However, uniformly generated samples are not guaranteed to be representative in the presence of strong non-linearities or exceptional input feature value combinations. The objective of this research is to develop a sampling method that addresses these concerns. To this end, a new adaptive weighted sampling method is proposed. To verify its efficacy in generating explanations, the proposed method has been integrated with CIU and tested on a special test case.
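
    The abstract does not spell out the sampling algorithm, so the following is only a minimal Python sketch of what adaptive weighted sampling around a context instance could look like: start from a uniform sweep of one feature, then draw extra samples in the sub-intervals where the model output changes most (a rough proxy for non-linearity). The function names, the refinement heuristic, and the single-feature restriction are illustrative assumptions, not the method proposed in the paper.

    import numpy as np

    def adaptive_weighted_samples(model, instance, feature_idx, f_min, f_max,
                                  n_initial=20, n_refine=80, seed=0):
        """Sample one input feature around a context instance.

        Starts from a uniform sweep, then allocates extra samples to the
        sub-intervals where the model output changes the most.  `model` is
        any callable mapping a 1-D feature vector to a scalar output.
        """
        rng = np.random.default_rng(seed)
        grid = np.linspace(f_min, f_max, n_initial)

        def predict_at(value):
            x = np.array(instance, dtype=float)
            x[feature_idx] = value
            return float(model(x))

        outputs = np.array([predict_at(v) for v in grid])

        # Weight each sub-interval by the absolute output change across it.
        weights = np.abs(np.diff(outputs)) + 1e-12
        weights /= weights.sum()

        # Draw refinement samples in proportion to the interval weights.
        counts = rng.multinomial(n_refine, weights)
        extra = [rng.uniform(grid[i], grid[i + 1], size=c)
                 for i, c in enumerate(counts) if c > 0]
        values = np.sort(np.concatenate([grid] + extra))
        return values, np.array([predict_at(v) for v in values])

    # Illustrative use with a toy non-linear model of two features.
    toy_model = lambda x: np.tanh(10 * (x[0] - 0.5)) + 0.1 * x[1]
    vals, outs = adaptive_weighted_samples(toy_model, [0.4, 0.7], 0, 0.0, 1.0)
    print(len(vals), outs.min(), outs.max())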

    Data Exchange Standard for Industrial Internet of Things

    The Industrial Internet of Things is becoming a boon to Original Equipment Manufacturers (OEMs) offering after-sales services such as condition-based maintenance and extended warranties for their products. These companies leverage novel digital information infrastructures to improve daily industrial activities, including data collection, remote monitoring, and advanced condition-based maintenance services. The emergence of digital infrastructure and new business prospects via servitization and quality services encourages companies to collect the vast amounts of data generated at different stages of the product lifecycle. Despite the potential benefits, companies are unable to fully harness the opportunities presented by digital information infrastructures because several platforms exist with variations in technologies and standards, resulting in interoperability challenges. This becomes particularly critical when a company sells its products to several clients with different technologies. To overcome such challenges, we investigate the Open Messaging Interface (O-MI) and Open Data Format (O-DF), flexible messaging and data exchange standards that enable seamless integration of different systems. These standards enable interoperability and support time-centric, event-centric, and rate-centric modes of data exchange.
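
    As a rough illustration of what such messaging looks like, the sketch below builds an O-DF-style payload wrapped in an O-MI write request using only the Python standard library. The element and attribute names follow the public O-MI/O-DF specifications as far as recalled here and should be verified against them; the object, InfoItem, and value shown are purely illustrative.

    import xml.etree.ElementTree as ET

    def build_omi_write(object_id, infoitem_name, value, unix_time):
        """Wrap a single O-DF InfoItem value in an O-MI write envelope."""
        envelope = ET.Element("omiEnvelope",
                              {"xmlns": "omi.xsd", "version": "1.0", "ttl": "0"})
        write = ET.SubElement(envelope, "write", {"msgformat": "odf"})
        msg = ET.SubElement(write, "msg")

        objects = ET.SubElement(msg, "Objects", {"xmlns": "odf.xsd"})
        obj = ET.SubElement(objects, "Object")
        ET.SubElement(obj, "id").text = object_id
        item = ET.SubElement(obj, "InfoItem", {"name": infoitem_name})
        val = ET.SubElement(item, "value", {"unixTime": str(unix_time)})
        val.text = str(value)

        return ET.tostring(envelope, encoding="unicode")

    # Illustrative call: one vibration reading for a monitored asset.
    print(build_omi_write("CraneMotor-01", "BearingVibration", 2.7, 1565000000))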

    MeDI

    IoT systems may provide information from different sensors that may reveal potentially confidential data, such as whether a person is present. The primary question to address is how to identify sensors and other devices in a reliable way before receiving data from them and using or sharing it; in other words, we need to verify the identity of sensors and devices. A malicious device could claim to be a legitimate sensor and trigger security problems, for instance by sending false data about the environment and harmfully affecting the outputs and behavior of the system. For this purpose, relying only on primary identity values such as the IP address, the MAC address, or even a public-key cryptography key pair is not enough, since IPs can be dynamic, MACs can be spoofed, and key pairs can be stolen. Therefore, the server requires supplementary security considerations, such as contextual features, to verify the device identity. This paper presents a measurement-based method to detect and flag false data reports during the reception process by means of sensor behavior. As a proof of concept, we develop a classification-based methodology for device identification that can be implemented in a real IoT scenario.
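
    A minimal sketch of the classification idea, assuming behavioural features such as message inter-arrival statistics and reported-value statistics, and a random forest classifier from scikit-learn; the feature set, classifier choice, and synthetic data below are illustrative assumptions rather than the paper's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    def behaviour_features(timestamps, values):
        """Summarise one observation window of a device's traffic.

        Uses mean/std of message inter-arrival times and of reported values;
        these particular features are illustrative, not the paper's set.
        """
        gaps = np.diff(np.sort(timestamps))
        return [gaps.mean(), gaps.std(), np.mean(values), np.std(values)]

    # Placeholder data: 60 observation windows per device class, where each
    # class differs in the values it reports.  In practice this would come
    # from logged sensor traffic of known, trusted devices.
    rng = np.random.default_rng(0)
    windows = [(rng.uniform(0, 60, size=50), rng.normal(loc=c, size=50))
               for c in (0.0, 1.0, 2.0) for _ in range(60)]
    X = np.array([behaviour_features(t, v) for t, v in windows])
    y = np.repeat([0, 1, 2], 60)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # A window whose predicted identity disagrees with the claimed identity
    # (or has low predicted probability) can be flagged as a false report.
    print(classification_report(y_test, clf.predict(X_test)))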

    Key Data Quality Pitfalls for Condition Based Maintenance

    In today's competitive and fluctuating market, original equipment manufacturers (OEMs) must be able to offer after-sales services along with their products, such as condition-based maintenance and extended warranty services. Condition-based maintenance requires a detailed understanding of products' operational behaviour in order to detect problems before they occur and react accordingly. Typically, condition-based maintenance consists of data collection, data analysis, and maintenance decision stages. Within this context, data quality is one of the key drivers in the knowledge acquisition process: poor data quality impacts the downstream maintenance processes and, reciprocally, high data quality fosters good decision making. The prospect of new business opportunities and better services to customers encourages companies to collect large amounts of data generated at different stages of the product lifecycle. Despite the availability of data, as well as advanced statistical and analytical tools, companies are still struggling to provide effective service by reducing maintenance cost and improving uptime. This paper highlights data-related pitfalls that hinder organisations from improving maintenance services. These pitfalls are based on case studies of two globally operating Finnish manufacturing companies for which maintenance is one of the major streams of income.

    Data Quality Assessment of Company's Maintenance Reporting: A Case Study

    Businesses are increasingly using their enterprise data for strategic decision-making activities. In fact, information derived from data has become one of the most important tools for businesses to gain a competitive edge. Data quality assessment has become a hot topic in numerous sectors and considerable research has been carried out in this respect, although most existing frameworks need to be adapted to the needs and features of the use case at hand. Within this context, this paper develops a methodology for assessing the quality of an enterprise's daily maintenance reporting, relying both on an existing data quality framework and on a Multi-Criteria Decision Making (MCDM) technique. The methodology is applied in cooperation with a Finnish multinational company in order to evaluate and rank different company sites/office branches (carrying out maintenance activities) according to the quality of their data reporting. Based on this evaluation, the industrial partner wants to establish new action plans for enhanced reporting practices.
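
    As a toy illustration of combining per-dimension quality scores into a site ranking, the sketch below uses Simple Additive Weighting (SAW), one of the simplest MCDM aggregations; the dimensions, weights, and scores are made up for the example, and the paper may rely on a different MCDM technique.

    import numpy as np

    # Hypothetical per-site scores on [0, 1], one column per quality dimension
    # (here: completeness, timeliness, validity); the numbers are made up.
    sites = ["Site A", "Site B", "Site C"]
    scores = np.array([
        [0.92, 0.75, 0.88],
        [0.81, 0.90, 0.70],
        [0.60, 0.85, 0.95],
    ])

    # Relative importance of each dimension (e.g. elicited from maintenance
    # experts); must sum to 1.  Simple Additive Weighting aggregation.
    weights = np.array([0.5, 0.3, 0.2])
    overall = scores @ weights

    # Rank the sites from best to worst reporting quality.
    for site, score in sorted(zip(sites, overall), key=lambda p: -p[1]):
        print(f"{site}: {score:.3f}")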

    Data Model Logger - Data Discovery for Extract-Transform-Load

    Information Systems (ISs) are fundamental to streamlining the operations and supporting the processes of any modern enterprise. Being able to perform analytics over the data managed in various enterprise ISs is becoming increasingly important for organisational growth. Extract, Transform, and Load (ETL) are the necessary pre-processing steps of any data mining activity. Due to the complexity of modern ISs, extracting data is becoming increasingly complicated and time-consuming. In order to ease the process, this paper proposes a methodology and a pilot implementation that aim to simplify the data extraction process by leveraging end-users' knowledge and understanding of the specific IS. The paper first provides a brief introduction and the current state of the art regarding existing ETL processes and techniques. It then explains the proposed methodology in detail. Finally, test results of typical data-extraction tasks from four commercial ISs are reported.
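
    One plausible reading of the approach is to observe which parts of the data model an end user's routine task actually touches and summarise them for the ETL developer. The sketch below logs SQL statements and counts the tables they reference; the class name, the regex-based parsing, and the overall flow are assumptions for illustration, not the pilot implementation described in the paper.

    import re
    from collections import defaultdict

    class SessionDataModelLogger:
        """Record which tables an observed user session actually touches.

        The idea sketched here: capture the SQL statements an information
        system issues while an end user performs a routine task, and count
        the tables involved so the ETL developer knows where to extract from.
        """

        # Naive pattern; a real implementation would use a proper SQL parser.
        _TABLE_RE = re.compile(r"\b(?:FROM|JOIN|INTO|UPDATE)\s+([\w\.]+)",
                               re.IGNORECASE)

        def __init__(self):
            self.table_hits = defaultdict(int)

        def log_statement(self, sql):
            for table in self._TABLE_RE.findall(sql):
                self.table_hits[table.lower()] += 1

        def summary(self):
            """Tables ordered by how often the observed task touched them."""
            return sorted(self.table_hits.items(), key=lambda kv: -kv[1])

    logger = SessionDataModelLogger()
    logger.log_statement("SELECT id, status FROM work_order WHERE site = 'A'")
    logger.log_statement(
        "SELECT * FROM work_order JOIN asset ON asset.id = work_order.asset_id")
    print(logger.summary())   # [('work_order', 2), ('asset', 1)]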

    Comparison of Contextual Importance and Utility with LIME and Shapley Values

    Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system: an explanation with poor fidelity towards that behaviour cannot be trusted, no matter how convincing it appears to users. The Contextual Importance and Utility (CIU) method differs from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values in several ways. Notably, CIU does not build any intermediate interpretable model, as LIME does, and it makes no assumptions regarding the linearity or additivity of feature importance. CIU also introduces the notion of utility and a definition of feature importance that differs from those of LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which combines importance and utility. The paper compares the three methods in terms of the validity of their ground-truth assumptions and their fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when explanations are limited to ‘influence’. However, by separating the ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
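
    For concreteness, the sketch below estimates Contextual Importance and Contextual Utility for a single feature using the commonly cited definitions (CI as the output range reachable by varying the feature, normalised by the global output range; CU as where the current output sits within that range), with a plain uniform sweep as the estimator. The helper name, the sweep-based estimation, and the toy model are illustrative assumptions.

    import numpy as np

    def contextual_importance_utility(model, instance, feature_idx,
                                      f_min, f_max, out_min, out_max, n=100):
        """Estimate CI and CU for one feature of one instance.

        CI = (cmax - cmin) / (out_max - out_min)
        CU = (y - cmin) / (cmax - cmin)
        where cmin/cmax are the extreme outputs reachable by varying only this
        feature over [f_min, f_max], estimated here with a uniform sweep.
        """
        x = np.array(instance, dtype=float)
        y = float(model(x))

        outs = []
        for v in np.linspace(f_min, f_max, n):
            xv = x.copy()
            xv[feature_idx] = v
            outs.append(float(model(xv)))
        cmin, cmax = min(outs), max(outs)

        ci = (cmax - cmin) / (out_max - out_min)
        cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
        return ci, cu

    # Illustrative use with a toy non-linear model of two features.
    toy_model = lambda x: x[0] ** 2 + 0.1 * x[1]
    print(contextual_importance_utility(toy_model, [0.8, 0.3], 0,
                                        0.0, 1.0, 0.0, 1.1))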

    IoT-based Interoperability Framework for Asset and Fleet Management


    Explaining Machine Learning-based Classifications of in-vivo Gastral Images

    This paper proposes an explainable machine learning tool that can potentially be used for decision support in medical image analysis scenarios. For a decision-support system, it is important to be able to reverse-engineer the impact of features on the final decision outcome; in the medical domain, such functionality is typically required before machine learning can be applied to clinical decision making. In this paper, we present initial experiments performed on in-vivo gastral images obtained from capsule endoscopy. A quantitative analysis has been performed to evaluate the utility of the proposed method. Convolutional neural networks have been used for training and validation on the image data set to provide the bleeding classifications. Visual explanations are provided in the images to help health professionals trust the black-box predictions. While the paper focuses on the in-vivo gastral image use case, most findings are generalizable.
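
    The abstract does not state which visual explanation technique is used, so the sketch below uses occlusion sensitivity purely as a stand-in: slide a grey patch over the frame and record how much the predicted bleeding probability drops, yielding a heatmap of the regions the model relies on. The function, the dummy classifier, and the random image are illustrative assumptions only.

    import numpy as np

    def occlusion_heatmap(predict_proba, image, patch=16, stride=8, fill=0.5):
        """Occlusion-sensitivity map for a single image.

        `predict_proba` maps an image of shape (H, W, C) to the probability of
        the 'bleeding' class.  Regions whose occlusion lowers that probability
        the most are the regions the model relied on for its prediction.
        """
        h, w = image.shape[:2]
        base = float(predict_proba(image))
        heat = np.zeros((h, w))
        hits = np.zeros((h, w))

        for top in range(0, h - patch + 1, stride):
            for left in range(0, w - patch + 1, stride):
                occluded = image.copy()
                occluded[top:top + patch, left:left + patch] = fill
                drop = base - float(predict_proba(occluded))
                heat[top:top + patch, left:left + patch] += drop
                hits[top:top + patch, left:left + patch] += 1

        return heat / np.maximum(hits, 1)

    # Illustrative use with a dummy classifier and a random 96x96 RGB frame.
    dummy_predict = lambda img: img[40:60, 40:60].mean()
    heatmap = occlusion_heatmap(dummy_predict, np.random.rand(96, 96, 3))
    print(heatmap.shape)   # (96, 96); overlay on the frame to highlight regions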