19 research outputs found

    Blockchain Secured Dynamic Machine Learning Pipeline for Manufacturing

    No full text
    ML-based applications already play an important role in factories, in areas such as visual quality inspection, process optimization, and predictive maintenance, and will become even more important in the future. For ML to be used safely and effectively in an industrial setting, the individual steps involved must be assembled into an ML pipeline. Because ML pipelines are complex constructs, their development is usually carried out by several, changing external stakeholders whose trustworthiness is not always evident, and because the components and processes in ML pipelines are not transparent, end-to-end trust in the pipeline is not granted automatically. This also causes problems with certification in safety-critical areas such as the medical field, where procedures and their results must be recorded in detail. In addition, there are security challenges, such as attacks on the model and the ML pipeline, that are difficult to detect. This paper provides an overview of ML security challenges that can arise in production environments and presents a framework for addressing data security and transparency in ML pipelines, using visual quality inspection as an example. The framework provides: (a) a tamper-proof data history, which establishes accountability and supports quality audits; (b) increased trust in the ML pipeline, achieved by rating the experts and entities involved and certifying their legitimacy to participate; and (c) certification of the pipeline infrastructure, the ML model, data collection, and labelling. After describing the details of the approach, the mitigation of the previously described security attacks is demonstrated, and a conclusion is drawn.
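
    As a rough illustration of the tamper-proof data history described above, the following sketch hash-chains pipeline events in an append-only log; the step names, record fields, and chaining scheme are assumptions for illustration, not the framework's concrete implementation (which anchors such records in a blockchain).

```python
import hashlib
import json
import time

def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a JSON-serialized record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class PipelineLedger:
    """Append-only, hash-chained log of ML pipeline events (illustrative only)."""

    def __init__(self):
        self.entries = []

    def append(self, step: str, actor: str, payload: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "step": step,          # e.g. "data_collection", "labelling", "training"
            "actor": actor,        # stakeholder or entity performing the step
            "payload": payload,    # e.g. dataset hash, model hash, label statistics
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry = {**body, "hash": _hash(body)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks the linkage."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or _hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Example: record labelling and training of a visual quality inspection model.
ledger = PipelineLedger()
ledger.append("labelling", "external_labeller_A", {"dataset_sha256": "..."})
ledger.append("training", "ml_provider_B", {"model_sha256": "...", "accuracy": 0.97})
assert ledger.verify()
```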

    Verifiable Machine Learning Models in Industrial IoT via Blockchain

    No full text
    The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry is coming into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and difficulty in creating traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, as unnoticed modification of a model is also a danger when using ML. This paper proposes to create an ML Birth Certificate and ML Family Tree secured by blockchain technology. Important information about training and changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
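
    A minimal sketch of how an ML Birth Certificate and a retraining lineage (Family Tree) could be represented as data structures; the field names and hashing scheme are illustrative assumptions, not the schema proposed in the paper, and the blockchain anchoring itself is omitted.

```python
import hashlib
from dataclasses import dataclass, field
from typing import Optional, List

def model_fingerprint(model_bytes: bytes) -> str:
    """Identity of a model version as the SHA-256 of its serialized weights."""
    return hashlib.sha256(model_bytes).hexdigest()

@dataclass
class BirthCertificate:
    """Immutable facts recorded when a model version is created (illustrative)."""
    model_hash: str                            # fingerprint of the trained model
    dataset_hash: str                          # fingerprint of the training data
    training_config: dict                      # hyperparameters, framework versions, etc.
    parent_model_hash: Optional[str] = None    # link to the model that was retrained

@dataclass
class FamilyTree:
    """Chain of certificates; each retraining appends a child linked to its parent."""
    certificates: List[BirthCertificate] = field(default_factory=list)

    def register(self, cert: BirthCertificate) -> None:
        self.certificates.append(cert)

    def lineage(self, model_hash: str) -> List[str]:
        """Walk parent links back to the original model."""
        by_hash = {c.model_hash: c for c in self.certificates}
        chain, current = [], by_hash.get(model_hash)
        while current is not None:
            chain.append(current.model_hash)
            current = by_hash.get(current.parent_model_hash)
        return chain

# Example: a model and one retrained successor.
tree = FamilyTree()
v1 = model_fingerprint(b"weights-v1")
v2 = model_fingerprint(b"weights-v2")
tree.register(BirthCertificate(v1, "data-v1-hash", {"epochs": 50}))
tree.register(BirthCertificate(v2, "data-v2-hash", {"epochs": 20}, parent_model_hash=v1))
print(tree.lineage(v2))  # [v2, v1]
```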

    Machine Learning Development Audit Framework: Assessment and Inspection of Risk and Quality of Data, Model and Development Process

    No full text
    The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
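
    A minimal sketch of the kind of automated training-data checks an audit step might run; the checks and thresholds are illustrative assumptions, not the framework's actual criteria.

```python
import numpy as np

def audit_training_data(X: np.ndarray, y: np.ndarray) -> dict:
    """Run simple, auditable data-quality checks (illustrative thresholds)."""
    n, _ = X.shape
    class_counts = np.bincount(y)
    findings = {
        "missing_values": bool(np.isnan(X).any()),
        "duplicate_rows": int(n - len(np.unique(X, axis=0))),
        # flag strong class imbalance, e.g. minority class below 10% of samples
        "class_imbalance": bool(class_counts.min() / n < 0.10),
    }
    findings["passed"] = not findings["missing_values"] and not findings["class_imbalance"]
    return findings

# Example with a small synthetic dataset standing in for tabular fault data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
print(audit_training_data(X, y))
```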

    Data Confidentiality In P2P Communication And Smart Contracts Of Blockchain In Industry 4.0

    No full text
    Increased collaborative production and dynamic selection of production partners within Industry 4.0 manufacturing lead to ever-increasing automatic data exchange between companies. Automatic and unsupervised data exchange creates new attack vectors, which could be used by a malicious insider to leak secrets via an otherwise supposedly secure channel without anyone noticing. In this paper we reflect upon approaches to prevent the exposure of secret data via blockchain technology, while also providing auditable proof of data exchange. We show that previous blockchain-based privacy protection approaches offer protection, but hand control of the data to (potentially untrustworthy) third parties, which can itself be considered a privacy violation. The approach taken in this paper does not rely on centralized data storage; it realizes data confidentiality for P2P communication and for data processing in blockchain smart contracts.
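
    A minimal sketch of the general pattern argued for here: exchange the secret payload peer-to-peer in encrypted form and anchor only a hash commitment on the blockchain as auditable proof of exchange. The use of the `cryptography` package and the field names are assumptions for illustration.

```python
import hashlib
from cryptography.fernet import Fernet  # symmetric encryption; key shared out-of-band

def exchange_confidentially(payload: bytes, ledger: list) -> bytes:
    """Send data P2P encrypted; record only a hash commitment on the 'chain'."""
    key = Fernet.generate_key()          # in practice agreed between the two peers
    ciphertext = Fernet(key).encrypt(payload)

    # Only a commitment to the plaintext goes on the (here: simulated) blockchain,
    # giving auditable proof of exchange without disclosing the data to third parties.
    ledger.append({"commitment": hashlib.sha256(payload).hexdigest()})

    # Receiver side: decrypt off-chain and verify against the on-chain commitment.
    plaintext = Fernet(key).decrypt(ciphertext)
    assert hashlib.sha256(plaintext).hexdigest() == ledger[-1]["commitment"]
    return plaintext

ledger = []  # stand-in for smart-contract storage
print(exchange_confidentially(b"order volume: 1200 parts/week", ledger))
```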

    Unified Intersection over Union for Explainable Artificial Intelligence

    No full text
    Data scientists, researchers and engineers want to understand whether machine learning models for object detection work accurately and precisely. Networks such as YOLO localize objects in an image by predicting bounding boxes. The principal aim of this paper is to address the lack of an effective metric for evaluating bounding box regression in object detection networks when boxes do not overlap or lie completely within each other. Standard metrics such as IoU fail to differentiate between results that do not overlap but differ in the distance between the predicted bounding box and the label. To address this, we propose a new metric called UIoU (Unified Intersection over Union) that combines the best properties of existing metrics (IoU, GIoU and DIoU) and extends them with a similarity factor. By assigning a weight to each component of the metric, it clearly differentiates the three possible cases of box positions (not overlapping, overlapping, boxes inside each other). The result of this paper is a new metric that outperforms existing metrics such as IoU, GIoU and DIoU by providing a more understandable measure of the performance of object detection models. This gives researchers and users in the field of explainable AI a metric for evaluating and comparing predicted and labelled bounding boxes in an understandable way.
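
    A hedged sketch of blending IoU with the GIoU enclosing-box penalty and the DIoU centre-distance penalty using weights; the actual UIoU definition, its similarity factor, and its weights are given in the paper, so the coefficients below are placeholders.

```python
def box_area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def weighted_box_score(pred, gt, w_iou=1.0, w_giou=0.5, w_diou=0.5):
    """Combine IoU with GIoU and DIoU penalty terms (weights are placeholders).

    Boxes are (x1, y1, x2, y2). This is NOT the paper's UIoU definition,
    only an illustration of blending the three underlying metrics.
    """
    # Intersection and union
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(pred) + box_area(gt) - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest enclosing box (used by GIoU and DIoU)
    ex1, ey1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    ex2, ey2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    enclose_area = box_area((ex1, ey1, ex2, ey2))
    giou_penalty = (enclose_area - union) / enclose_area if enclose_area > 0 else 0.0

    # Normalised centre distance (DIoU term): distinguishes non-overlapping boxes
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_g, cy_g = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    diou_penalty = ((cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2) / diag2 if diag2 > 0 else 0.0

    return w_iou * iou - w_giou * giou_penalty - w_diou * diou_penalty

# Two non-overlapping predictions: IoU is 0 for both, but the farther one scores lower.
gt = (0, 0, 10, 10)
print(weighted_box_score((12, 0, 22, 10), gt), weighted_box_score((30, 0, 40, 10), gt))
```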

    A Novel Metric for XAI Evaluation Incorporating Pixel Analysis and Distance Measurement

    No full text
    Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
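
    A hedged sketch of scoring an explanation heatmap against a ground-truth relevance mask by weighting three components (spatial precision, focus overlap, relevance accuracy); the component definitions and weights are placeholders, not the ESA/WESA formulas from the paper.

```python
import numpy as np

def weighted_explanation_score(heatmap, mask, weights=(0.4, 0.3, 0.3), top_k=50):
    """Blend three explanation-quality components with weights (illustrative only).

    `heatmap`: XAI attribution map in [0, 1]; `mask`: binary ground-truth relevance.
    The component definitions and weights are placeholders, not ESA/WESA.
    """
    highlighted = heatmap >= 0.5                      # binarised explanation focus

    # Spatial precision: how much of the highlighted area lies inside the relevant region
    precision = (highlighted & mask).sum() / max(highlighted.sum(), 1)

    # Focus overlap: IoU between the highlighted area and the relevant region
    overlap = (highlighted & mask).sum() / max((highlighted | mask).sum(), 1)

    # Relevance accuracy: fraction of the top-k strongest attributions inside the region
    top_idx = np.argsort(heatmap, axis=None)[-top_k:]
    relevance = mask.flatten()[top_idx].mean()

    w1, w2, w3 = weights
    return w1 * precision + w2 * overlap + w3 * relevance

# Example on a synthetic 64x64 map with a square ground-truth region.
rng = np.random.default_rng(1)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
heatmap = np.clip(rng.normal(0.2, 0.1, (64, 64)) + 0.7 * mask, 0, 1)
print(weighted_explanation_score(heatmap, mask))
```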

    Security Threats of a Blockchain-Based Platform for Industry Ecosystems in the Cloud

    No full text
    In modern industrial production lines, the integration and interconnection of various manufacturing components, such as robots, laser cutting machines, milling machines, and CNC machines, allow for a higher degree of autonomous production on the shop floor. Manufacturers of these increasingly complex machines are beginning to equip their business models with bidirectional data flows to other factories. This is creating a digital, cross-company shop floor infrastructure in which the transfer of information is controlled by digital contracts. To establish a trusted ecosystem, blockchain technology and a variety of technology stacks must be combined while ensuring security. Such blockchain-based frameworks enable bidirectional trust across all contract partners. Essential data flows are defined by a specific technical representation of contract agreements and executed through smart contracts. This work describes a platform for rapid cross-company business model instantiation based on blockchain for establishing trust between the enterprises. It focuses on selected security aspects of the deployment and configuration processes applied by the industrial ecosystem. A threat analysis of the platform reveals the critical security risks. Based on an industrial dynamic machine leasing use case, a risk assessment and security analysis of the key platform components is carried out.
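
    A minimal sketch of the likelihood-times-impact scoring commonly used in such risk assessments; the listed threats and scores are placeholders, not the assessment carried out in the paper.

```python
# Illustrative likelihood x impact scoring for platform components; the threats and
# scores below are placeholders, not the paper's threat analysis results.
threats = [
    {"component": "smart contract",   "threat": "malicious contract logic", "likelihood": 2, "impact": 5},
    {"component": "deployment",       "threat": "tampered configuration",   "likelihood": 3, "impact": 4},
    {"component": "P2P data channel", "threat": "insider data leakage",     "likelihood": 3, "impact": 5},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]   # simple risk = likelihood x impact

# Rank components by risk to prioritise mitigations.
for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{t["risk"]:>2}  {t["component"]:<16} {t["threat"]}')
```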

    Explainable AI with Domain Adapted FastCAM for Endoscopy Images

    No full text
    Artificial intelligence (AI) holds enormous potential for numerous products and services, especially in healthcare and medical technology. Explainability is a central prerequisite for certification procedures around the world and for the fulfilment of transparency obligations. Explainability tools increase the comprehensibility of object recognition in images using Convolutional Neural Networks, but lack precision. This paper adapts FastCAM to the domain of medical instrument detection in endoscopy images. The results show that Domain Adapted (DA)-FastCAM yields better results for the model's focus than standard FastCAM weights.
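
    A rough sketch of the general idea of fusing a class activation map with a saliency-based weighting; the actual FastCAM weighting and the domain-adapted weights from the paper are not reproduced here.

```python
import numpy as np

def combine_cam_with_saliency(cam: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """Sharpen a class activation map with a saliency-based weighting.

    Rough illustration only: the real FastCAM fusion and the domain-adapted
    weights trained on endoscopy data in the paper are not reproduced here.
    """
    def normalise(m):
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m

    # Elementwise weighting: regions that are both class-relevant and salient stand out.
    return normalise(normalise(cam) * normalise(saliency))

# Synthetic example standing in for an endoscopy frame's CAM and saliency map.
rng = np.random.default_rng(2)
cam = rng.random((32, 32))
saliency = rng.random((32, 32))
print(combine_cam_with_saliency(cam, saliency).shape)
```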

    Agreements between Enterprises digitized by Smart Contracts in the Domain of Industry 4.0

    No full text
    The digital transformation of companies is expected to increase the digital interconnection between different companies in order to develop optimized, customized, hybrid business models. These cross-company business models require secure, reliable, and traceable logging and monitoring of contractually agreed information sharing between machine tools, operators, and service providers. This paper discusses how the major requirements for building hybrid business models can be addressed by using a blockchain to build a chain of trust and smart contracts to digitize contracts. A machine maintenance use case is used to discuss the readiness of smart contracts for automating the workflows defined in contracts. Furthermore, it is shown that the number of failures is significantly reduced by using these contracts and a blockchain.
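
    A minimal Python stand-in for a digitized maintenance agreement, modelling the contractually agreed workflow as a forward-only state machine with a hash-chained event log; the states, parties, and fields are illustrative assumptions, and a real deployment would execute such logic as a smart contract on the blockchain.

```python
import hashlib
import json
import time

class MaintenanceAgreement:
    """Python stand-in for a smart contract digitizing a maintenance agreement.

    States, events, and fields are illustrative assumptions, not the paper's design.
    """
    STATES = ("agreed", "fault_reported", "technician_dispatched", "resolved")

    def __init__(self, machine_id: str, operator: str, service_provider: str):
        self.machine_id = machine_id
        self.parties = {"operator": operator, "service_provider": service_provider}
        self.state = "agreed"
        self.log = []   # append-only, hash-chained event log for traceability

    def _record(self, event: str, actor: str) -> None:
        prev = self.log[-1]["hash"] if self.log else "0" * 64
        body = {"event": event, "actor": actor, "time": time.time(), "prev": prev}
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.log.append(body)

    def transition(self, new_state: str, actor: str) -> None:
        # Only allow the contractually defined, forward-only workflow.
        if self.STATES.index(new_state) != self.STATES.index(self.state) + 1:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self._record(new_state, actor)

# Example workflow for a leased milling machine.
contract = MaintenanceAgreement("milling-07", "factory_A", "service_B")
contract.transition("fault_reported", "factory_A")
contract.transition("technician_dispatched", "service_B")
contract.transition("resolved", "service_B")
print(contract.state, len(contract.log))
```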

    Machine Learning Models in Industrial Blockchain, Attacks and Contribution

    No full text
    The importance of machine learning has been increasing dramatically for years. From assistance systems to production optimisation to supporting the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, the lack of transparency and the difficulty in creating traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, and unnoticed modification of a model is also a danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about training and changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.