
    A Mini Review on the utilization of Reinforcement Learning with OPC UA

    Reinforcement Learning (RL) is a powerful machine learning paradigm that has been applied in various fields such as robotics, natural language processing, and game playing, achieving state-of-the-art results. Designed to solve sequential decision-making problems, it learns from experience and can therefore adapt to changing, dynamic environments. These capabilities make it a prime candidate for controlling and optimizing complex processes in industry. The key to fully exploiting this potential is the seamless integration of RL into existing industrial systems. The industrial communication standard Open Platform Communications Unified Architecture (OPC UA) could bridge this gap; however, since RL and OPC UA come from different fields, researchers need to connect the two technologies. This work does so by providing a brief technical overview of both technologies and carrying out a semi-exhaustive literature review to gain insight into how RL and OPC UA are applied in combination. Through this survey, three main research topics at the intersection of RL and OPC UA have been identified. The results of the literature review show that RL is a promising technology for the control and optimization of industrial processes, but it does not yet have the standardized interfaces needed to be deployed in real-world scenarios with reasonably low effort.
    Comment: submitted to INDIN'2
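    As a concrete illustration of the integration this review targets, the following minimal sketch shows how an RL-style control loop might observe and act on a process through OPC UA. It assumes the open-source python-opcua client library; the endpoint URL, node identifiers, and the toy policy are invented for illustration and are not taken from the surveyed papers.

        # Minimal sketch: an RL-style control loop over OPC UA (assumes python-opcua).
        import random
        from opcua import Client

        ENDPOINT = "opc.tcp://localhost:4840"         # hypothetical server
        STATE_NODE = "ns=2;s=Process.Temperature"     # hypothetical state variable
        ACTION_NODE = "ns=2;s=Process.ValvePosition"  # hypothetical actuator

        def policy(state: float) -> float:
            """Stand-in for a trained RL policy: epsilon-greedy threshold control."""
            if random.random() < 0.1:                 # occasional exploration
                return random.uniform(0.0, 1.0)
            return 1.0 if state < 80.0 else 0.0       # exploitation

        client = Client(ENDPOINT)
        client.connect()
        try:
            state_var = client.get_node(STATE_NODE)
            action_var = client.get_node(ACTION_NODE)
            for _ in range(100):                      # one rollout of 100 control steps
                state = state_var.get_value()         # observe the environment
                action_var.set_value(policy(state))   # act on the environment
        finally:
            client.disconnect()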

    Knowledge-driven Artificial Intelligence in Steelmaking: Towards Industry 4.0

    With the ongoing emergence of the Fourth Industrial Revolution, often referred to as Industry 4.0, new innovations, concepts, and standards are reshaping manufacturing processes and production, leading to intelligent cyber-physical systems and smart factories. Steel production is one important manufacturing process that is undergoing this digital transformation. Realising this vision in steel production comes with unique challenges, including the seamless interoperability between diverse and complex systems, the uniformity of heterogeneous data, and a need for standardised human-to-machine and machine-to-machine communication protocols. To address these challenges, international standards have been developed, and new technologies have been introduced and studied in both industry and academia. However, due to the vast quantity, scale, and heterogeneous nature of industrial data and systems, achieving interoperability among components within the context of Industry 4.0 remains a challenge, requiring formal knowledge representation capabilities to enhance the understanding of data and information. In response, semantic-based technologies have been proposed as a method to capture knowledge from data and resolve incompatibility conflicts within Industry 4.0 scenarios. We propose utilising fundamental Semantic Web concepts, such as ontologies and knowledge graphs, specifically to enhance semantic interoperability, improve data integration, and standardise data across heterogeneous systems within the context of steelmaking. Additionally, we investigate ongoing trends that involve the integration of Machine Learning (ML) techniques with semantic technologies, resulting in the creation of hybrid models that capitalise on the strengths derived from the intersection of these two AI approaches. Furthermore, we explore the need for continuous reasoning over data streams, presenting preliminary research that combines ML and semantic technologies in the context of data streams. In this thesis, we make four main contributions: (1) We discover that a clear understanding of semantic-based asset administration shells, an international standard within the RAMI 4.0 model, was lacking, and provide an extensive survey of semantic-based implementations of asset administration shells, focusing on literature that utilises semantic technologies to enhance the representation, integration, and exchange of information in an industrial setting. (2) We create an ontology, a semantic knowledge base, which specifically captures the cold rolling processes in steelmaking, and demonstrate use cases that leverage these semantic methodologies with real-world industrial data for data access, data integration, data querying, and condition-based maintenance purposes. (3) We present a framework demonstrating one approach for integrating machine learning models with semantic technologies to aid decision-making in the domain of steelmaking, showcasing a novel approach of applying random forest classification with rule-based reasoning, incorporating both meta-data and external domain expert knowledge into the model and resulting in improved knowledge-guided assistance for the human-in-the-loop during steelmaking processes. (4) We lay the groundwork for a continuous data stream reasoning framework, where both domain expert knowledge and random forest classification can be dynamically applied to data streams on the fly. This approach opens up possibilities for real-time condition-based monitoring and real-time decision support for predictive maintenance applications, and we demonstrate the adaptability of the framework in the context of dynamic steel production processes. Our contributions have been validated on real-world data sets in peer-reviewed conference and journal publications, as well as through collaboration with domain experts from our industrial partners at Tata Steel.
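    To make the flavour of contribution (3) concrete, here is a minimal sketch of one way random forest classification can be combined with rule-based expert knowledge: the expert rules get the first say and the learned model is the fallback. The feature names, threshold, and rule are invented for illustration, and scikit-learn stands in for the thesis's actual tooling.

        # Hybrid-model sketch: expert rules layered over a random forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))       # e.g. [roll_force, strip_speed, temp]
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic fault label

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        def predict_with_rules(x: np.ndarray) -> int:
            """Apply domain-expert rules first; fall back to the learned model."""
            roll_force, strip_speed, temp = x
            if temp > 3.0:      # invented rule: extreme temperature is always a fault
                return 1
            return int(clf.predict(x.reshape(1, -1))[0])

        print(predict_with_rules(np.array([0.1, -0.2, 4.0])))   # rule fires -> 1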

    Towards a real-time capable plug & produce environment for adaptable factories

    Industrial manufacturing is currently undergoing a transformation from mass production with inflexible production systems to individual production with adaptable cells. To ensure the adaptability of these systems, technologies such as plug & produce are needed to integrate, modify, and remove devices at runtime. Therefore, an exact description of the system, the products, and the capabilities / skills of the devices is essential, as is a network for communication between the devices. Deterministic data transmission is particularly important for distributed control systems. We propose an architecture for plug & produce mechanisms with hard real-time capable communication paths between the cyber-physical components, using OPC UA PubSub over TSN, and the ability to load and execute real-time critical tasks at runtime.
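    A conceptual sketch of the plug & produce idea follows: devices announce their skills to a cell registry at runtime so that orchestration can adapt without reprogramming. The class names and skills are illustrative inventions, and the hard real-time transport over OPC UA PubSub and TSN is deliberately out of scope here.

        # Conceptual plug & produce sketch: runtime skill registration (names invented).
        from dataclasses import dataclass, field

        @dataclass
        class Skill:
            name: str
            cycle_time_ms: float    # worst-case execution time, relevant for TSN slots

        @dataclass
        class Device:
            device_id: str
            skills: list[Skill] = field(default_factory=list)

        class CellRegistry:
            """Tracks which skills are currently available in the production cell."""
            def __init__(self):
                self.devices: dict[str, Device] = {}

            def plug(self, device: Device):      # device joins at runtime
                self.devices[device.device_id] = device

            def unplug(self, device_id: str):    # device leaves at runtime
                self.devices.pop(device_id, None)

            def find(self, skill_name: str) -> list[str]:
                return [d.device_id for d in self.devices.values()
                        if any(s.name == skill_name for s in d.skills)]

        registry = CellRegistry()
        registry.plug(Device("robot-1", [Skill("pick", 40.0), Skill("place", 55.0)]))
        print(registry.find("pick"))             # ['robot-1']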

    A Machine Learning Framework for Applications in the Petrochemical Industry (Koneoppimiskehys petrokemianteollisuuden sovelluksille)

    Machine learning has many potentially useful applications in the process industry, for example in process monitoring and control. Continuously accumulating process data, together with recent developments in the software and hardware that enable more advanced machine learning, are fulfilling the prerequisites for developing and deploying machine learning applications integrated with process automation, applications which improve existing functionalities or even implement artificial intelligence. In this master's thesis, a framework is designed and implemented at proof-of-concept level to enable easy acquisition of process data for use with modern machine learning libraries, and to enable scalable online deployment of the trained models. The literature part of the thesis concentrates on the current state of, and approaches to, digital advisory systems for process operators, as a potential application to be developed on the machine learning framework. The literature study shows that the approaches behind process operators' decision support tools have shifted from rule-based and knowledge-based methods to machine learning; however, no standard methods emerge, and most of the use cases are quite application-specific. The developed machine learning framework uses both commercial software and open-source components with permissive licenses. Data is acquired over OPC UA and then processed in Python, which is currently almost the de facto standard language in data analytics. A microservice architecture with containerization is used for the online deployment, and in a qualitative evaluation it proved to be a versatile and functional solution.
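    To illustrate the deployment side of such a framework, the sketch below wraps a trained model in a small HTTP microservice that could be containerized. Flask and scikit-learn are stand-ins chosen for brevity; the thesis names no specific libraries beyond Python and OPC UA, and the training data here are synthetic.

        # Sketch of an online-deployment microservice for a trained model.
        import numpy as np
        from flask import Flask, jsonify, request
        from sklearn.linear_model import LinearRegression

        # Train a placeholder model on synthetic process data.
        X = np.random.rand(200, 2)          # e.g. [feed_rate, reactor_temp]
        y = 3.0 * X[:, 0] - 1.5 * X[:, 1]   # synthetic quality measure
        model = LinearRegression().fit(X, y)

        app = Flask(__name__)

        @app.route("/predict", methods=["POST"])
        def predict():
            features = np.asarray(request.get_json()["features"]).reshape(1, -1)
            return jsonify({"prediction": float(model.predict(features)[0])})

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)   # container-friendly bind

    A client would then POST a JSON body such as {"features": [0.5, 0.2]} to /predict; in a real deployment the feature vector would come from the OPC UA acquisition layer rather than from the caller.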

    A Knowledge Graph Based Integration Approach for Industry 4.0

    The fourth industrial revolution, Industry 4.0 (I40), aims at creating smart factories employing, among others, Cyber-Physical Systems (CPS), the Internet of Things (IoT), and Artificial Intelligence (AI). Realizing smart factories according to the I40 vision requires intelligent human-to-machine and machine-to-machine communication. To achieve this communication, CPS along with their data need to be described, and interoperability conflicts arising from various representations need to be resolved. For establishing interoperability, industry communities have created standards and standardization frameworks. Standards describe the main properties of entities, systems, and processes, as well as interactions among them. Standardization frameworks classify, align, and integrate industrial standards according to their purposes and features. Despite being published by official international organizations, different standards may contain divergent definitions for similar entities. Further, when utilizing the same standard for the design of a CPS, different views can generate interoperability conflicts. Albeit expressive, standardization frameworks may represent divergent categorizations of the same standard. To support effective and efficient communication in smart factories, these interoperability conflicts need to be resolved: data need to be semantically integrated and existing conflicts conciliated. This problem has been extensively studied in the literature, and the results obtained can be applied to general integration problems. However, current approaches fail to consider the specific interoperability conflicts that occur between entities in I40 scenarios. In this thesis, we tackle the problem of semantic data integration in I40 scenarios. A knowledge-graph-based approach allowing for the integration of entities in I40 while considering their semantics is presented. To achieve this integration, challenges need to be addressed on different conceptual levels: firstly, defining mappings between standards and standardization frameworks; secondly, representing knowledge of entities in I40 scenarios described by standards; thirdly, integrating perspectives of CPS design while solving semantic heterogeneity issues; and finally, determining real industry applications for the presented approach. We first devise a knowledge-driven approach allowing for the integration of standards and standardization frameworks into an Industry 4.0 knowledge graph (I40KG). The standards ontology is used for representing the main properties of standards and standardization frameworks, as well as relationships among them. The I40KG makes it possible to integrate standards and standardization frameworks while solving specific semantic heterogeneity conflicts in the domain. Further, we semantically describe standards in knowledge graphs. To this end, standards of core importance for I40 scenarios are considered, i.e., the Reference Architectural Model for I40 (RAMI4.0), AutomationML, and the Supply Chain Operation Reference Model (SCOR). In addition, different perspectives of entities describing CPS are integrated into the knowledge graphs. To evaluate the proposed methods, we rely on empirical evaluations as well as on the development of concrete use cases. The attained results provide evidence that a knowledge graph approach enables the effective data integration of entities in I40 scenarios while solving semantic interoperability conflicts, thus empowering the communication in smart factories.
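    The flavour of such a knowledge graph can be sketched with rdflib: a few standards are typed and linked to a shared concept, and a SPARQL query surfaces the overlap. The namespace and the "describes" relation are invented here and do not reproduce the actual I40KG vocabulary.

        # Toy I40 knowledge-graph fragment (vocabulary invented for illustration).
        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        I40 = Namespace("http://example.org/i40kg#")   # hypothetical namespace
        g = Graph()
        g.bind("i40", I40)

        for std in ("RAMI4.0", "AutomationML", "SCOR"):
            uri = I40[std.replace(".", "_")]
            g.add((uri, RDF.type, I40.Standard))
            g.add((uri, RDFS.label, Literal(std)))

        # Record that two standards describe the same entity type.
        g.add((I40.RAMI4_0, I40.describes, I40.Asset))
        g.add((I40.AutomationML, I40.describes, I40.Asset))

        # Which standards overlap on the 'Asset' concept?
        q = """SELECT ?label WHERE {
                 ?s i40:describes i40:Asset ; rdfs:label ?label . }"""
        for row in g.query(q):
            print(row.label)    # prints RAMI4.0 and AutomationML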

    Interoperability and machine-to-machine translation model with mappings to machine learning tasks

    Modern large-scale automation systems integrate thousands to hundreds of thousands of physical sensors and actuators. Demands for more flexible reconfiguration of production systems and for optimization across different information models, standards, and legacy systems challenge current system interoperability concepts. Automatic semantic translation across information models and standards is an increasingly important problem that needs to be addressed to fulfill these demands in a cost-efficient manner, under constraints of human capacity and resources and in relation to timing requirements and system complexity. Here we define a translator-based operational interoperability model for interacting cyber-physical systems in mathematical terms, which includes system identification and ontology-based translation as special cases. We present alternative mathematical definitions of the translator learning task, along with mappings to similar machine learning tasks and solutions based on recent developments in machine learning. Possibilities to learn translators between artefacts without a common physical context, for example in simulations of digital twins and across layers of the automation pyramid, are briefly discussed.
    Comment: 7 pages, 2 figures, 1 table, 1 listing. Submitted to the IEEE International Conference on Industrial Informatics 2019, INDIN'1
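    In the simplest instance of this formulation, the translator is a supervised mapping fitted on paired observations of the same physical context expressed in two information models, which is exactly the system-identification special case. The sketch below recovers a Celsius-to-Fahrenheit conversion from noisy paired samples; the variables, noise level, and linear model class are illustrative assumptions.

        # Translator learning as supervised regression on paired representations.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        temp_c = rng.uniform(20, 90, size=(300, 1))    # information model A: Celsius
        temp_f = temp_c * 9 / 5 + 32 \
                 + rng.normal(0, 0.1, size=(300, 1))   # information model B: Fahrenheit

        translator = LinearRegression().fit(temp_c, temp_f)
        print(translator.coef_, translator.intercept_)  # approx. [[1.8]] and [32.]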

    Distributed Planning for Self-Organizing Production Systems

    Automated production plants face a fundamental trade-off between efficiency and flexibility. In most cases, the production sequences are fixed not only by the physical layout of the plant, but also by the custom-tailored programming of the plant control system; changes must be laboriously propagated through a multitude of systems, which makes manufacturing in small quantities unprofitable. This dissertation develops an approach for automatically adapting the behaviour of production plants to changing orders and operating conditions, applying the principle of self-organization through distributed planning. The results of the dissertation, each building on the previous one, are as follows: 1. A model of production plants is developed that scales seamlessly from a detailed view of physical production processes up to supply relationships between companies. Compared to existing models of production plants, fewer limiting assumptions are made; in this sense, the modelling approach is a candidate for an often-called-for "theory of production". 2. For the scenarios modelled in this way, an algorithm for optimizing the concurrent processes is developed. The algorithm combines techniques for combinatorial and continuous optimization: depending on the level of detail and the design of the modelled scenario, the identical algorithm can perform combinatorial production scheduling, optimize worldwide supply relationships under uncertainty and risk, and predictively control physical processes. To this end, Monte-Carlo tree search techniques (also used in DeepMind's AlphaGo) are further developed; by exploiting additional structure in the models, the approach also scales to large scenarios. 3. The planning algorithm is transferred to distributed optimization by independent agents. For this purpose, "utility propagation" is developed as a coordination mechanism, inspired by belief propagation for inference in probabilistic graphical models. Each participating agent has a local action space in which it can observe the system state and intervene through its actions. The agents aim to maximize the overall welfare across all agents; the necessary cooperation arises through the exchange of messages between neighbouring agents, where each message describes the expected utility of an assumed behaviour in the action space of both agents. 4. A description of the reusable capabilities of machines and plants is developed on the basis of formal description logics. From the described capabilities and the pending orders with their required production steps, executable actions are derived. These executable actions, with well-defined preconditions and effects, encapsulate the required parameterizations, programmed sequences, and the synchronization of machines at runtime. In summary, the dissertation lays foundations for flexible automated production systems, within a single factory hall but also distributed across sites and organizations, that can deliberately exploit their inherent degrees of freedom through planning at runtime and agent-based coordination. The practical relevance is established through application examples. The feasibility of the approach was demonstrated with real machines within the EU project SkillPro and in a simulation environment with further scenarios.
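    To give a feel for the planning core, the following compact sketch applies UCT-style Monte-Carlo tree search to a toy single-machine scheduling problem (minimizing total weighted completion time). The job data and random rollout policy are invented; the dissertation's algorithm additionally handles continuous decisions, uncertainty, and distributed coordination, none of which appear here.

        # Compact UCT (Monte-Carlo tree search) on a toy scheduling problem.
        import math, random

        JOBS = {"A": (3, 2.0), "B": (1, 1.0), "C": (2, 3.0)}   # duration, weight

        def cost(order):
            """Total weighted completion time of a job order."""
            t, total = 0, 0.0
            for job in order:
                duration, weight = JOBS[job]
                t += duration
                total += weight * t
            return total

        class Node:
            def __init__(self, order):
                self.order = order      # jobs scheduled so far
                self.children = {}      # job -> Node
                self.visits = 0
                self.value = 0.0        # running mean reward (negative cost)

        def uct_child(node):
            """Pick the child maximizing the UCT upper confidence bound."""
            return max(node.children.values(), key=lambda c:
                       c.value + math.sqrt(2 * math.log(node.visits) / c.visits))

        def rollout(order):
            """Complete the schedule randomly and return its reward."""
            rest = [j for j in JOBS if j not in order]
            random.shuffle(rest)
            return -cost(order + rest)

        def search(iterations=2000):
            root = Node([])
            for _ in range(iterations):
                node, path = root, [root]
                while node.children and len(node.children) == len(JOBS) - len(node.order):
                    node = uct_child(node)              # selection
                    path.append(node)
                untried = [j for j in JOBS
                           if j not in node.order and j not in node.children]
                if untried:                             # expansion
                    job = random.choice(untried)
                    node.children[job] = node = Node(node.order + [job])
                    path.append(node)
                reward = rollout(node.order)            # simulation
                for n in path:                          # backpropagation
                    n.visits += 1
                    n.value += (reward - n.value) / n.visits
            best, node = [], root
            while node.children:                        # extract most-visited order
                node = max(node.children.values(), key=lambda c: c.visits)
                best = node.order
            return best

        best = search()
        print(best, cost(best))   # expect ['C', 'B', 'A'] with cost 21.0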

    A Conceptual Architecture for Enabling Future Self-Adaptive Service Systems

    Methods for dynamically integrating data sources and services that are unknown at system design time are currently driven primarily by technological standards; hence, little emphasis is placed on the integration methods themselves. However, the combination of heterogeneous data sources and services offered by devices across domains is hard to standardize. In this paper, we shed light on the interplay between self-adaptive system architectures and bottom-up, incremental integration methods relying on formal knowledge bases. An incremental integration method directly influences both the system architecture itself and the way these systems are engineered and operated during design time and runtime. Our findings are evaluated in the context of a case study that uses an adapted bus architecture and includes two tool prototypes. In addition, we illustrate conceptually how control loops such as MAPE-K can be enriched with machine-readable integration knowledge.
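    A minimal skeleton of that enrichment idea follows: a MAPE-K loop whose knowledge base K holds machine-readable integration knowledge, here simply a mapping from source types to adapter functions that can be extended incrementally at runtime. All names and thresholds are illustrative and are not taken from the paper's prototypes.

        # MAPE-K skeleton with an extensible integration knowledge base K.
        class MapeK:
            def __init__(self):
                # K: integration knowledge, extensible at runtime
                self.knowledge = {"temperature_sensor": lambda raw: raw / 10.0}

            def monitor(self, source_type, raw):
                adapter = self.knowledge.get(source_type)
                return None if adapter is None else adapter(raw)

            def analyze(self, value):
                return value is not None and value > 75.0   # threshold would live in K

            def plan(self, anomaly):
                return "throttle" if anomaly else "noop"

            def execute(self, action):
                print("executing:", action)

            def integrate(self, source_type, adapter):
                """Incremental, bottom-up integration of a new data source."""
                self.knowledge[source_type] = adapter

        loop = MapeK()
        loop.execute(loop.plan(loop.analyze(loop.monitor("temperature_sensor", 801))))
        loop.integrate("vibration_sensor", lambda raw: raw * 0.01)   # added at runtime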

    Automating Security Risk and Requirements Management for Cyber-Physical Systems

    Cyber-Physical Systems enable various modern use cases and business models such as connected vehicles, the Smart (power) Grid, or the Industrial Internet of Things. Their key characteristics, complexity, heterogeneity, and longevity make the long-term protection of these systems a demanding but indispensable task. In the physical world, the laws of physics provide a constant scope for risks and their treatment. In cyberspace, on the other hand, there is no such constant to counteract the erosion of security features. As a result, existing security risks can constantly change and new ones can arise. To prevent damage caused by malicious acts, it is necessary to identify high and unknown risks early and counter them appropriately. Considering the numerous dynamic security-relevant factors requires a new level of automation in the management of security risks and requirements, one that goes beyond the current state of the art. Only in this way can an appropriate, comprehensive, and consistent level of security be achieved in the long term. This work addresses the pressing lack of an automation methodology for security risk assessment as well as for the generation and management of security requirements for Cyber-Physical Systems. The presented framework accordingly comprises three components: (1) a model-based security risk assessment methodology, (2) methods to unify, deduce, and manage security requirements, and (3) a set of tools and procedures to detect and respond to security-relevant situations. The need for protection and the appropriate rigor are determined and evaluated by the security risk assessment using graphs and security-specific modeling. Based on the model and the assessed risks, well-founded security requirements for protecting the overall system and its functionality are systematically derived and formulated in a uniform, machine-readable structure. This machine-readable structure makes it possible to propagate security requirements automatically along the supply chain, and it enables the efficient reconciliation of present capabilities with external security requirements from regulations, processes, and business partners. Despite all measures taken, a residual risk of compromise always remains and must be met with an appropriate response. This residual risk is addressed by tools and processes that improve both the local and the large-scale detection, classification, and correlation of incidents. Integrating the findings from such incidents into the model often leads to updated assessments and new requirements, and it improves further analyses. Finally, the presented framework is demonstrated on a recent application example from the automotive domain.
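    A toy version of the graph-based risk assessment component might look as follows: components and attack-relevant dependencies form a directed graph, and a component's assessed risk is raised by whatever can reach it. The component names and scores are invented for illustration; the framework's actual model and metrics are considerably richer.

        # Toy graph-based risk propagation over component dependencies.
        import networkx as nx

        G = nx.DiGraph()                                # edge: "can attack via"
        G.add_edge("telematics_unit", "can_gateway")
        G.add_edge("infotainment", "can_gateway")
        G.add_edge("can_gateway", "brake_ecu")

        base_risk = {"telematics_unit": 0.6, "infotainment": 0.4,
                     "can_gateway": 0.2, "brake_ecu": 0.1}

        def assessed_risk(node):
            """Risk = own exposure or the strongest upstream compromise, whichever is higher."""
            upstream = [base_risk[a] for a in nx.ancestors(G, node)]
            return max([base_risk[node]] + upstream)

        for n in G.nodes:
            print(f"{n:16s} {assessed_risk(n):.1f}")    # brake_ecu inherits 0.6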