Industrial DevOps
The visions and ideas of Industry 4.0 require a profound interconnection of
machines, plants, and IT systems in industrial production environments. This
significantly increases the importance of software, which is at the same time
one of the main obstacles to the introduction of Industry 4.0. A lack of experience
and knowledge, high investment and maintenance costs, as well as uncertainty
about future developments cause many small and medium-sized enterprises to
hesitate to adopt Industry 4.0 solutions. We propose Industrial DevOps as an
approach to introduce methods and culture of DevOps into industrial production
environments. The fundamental concept of this approach is a continuous process
of operation, observation, and development of the entire production
environment. This way, all stakeholders, systems, and data can be integrated
in incremental steps, and adjustments can be made quickly.
Furthermore, we present the Titan software platform accompanied by a role model
for integrating production environments with Industrial DevOps. In two initial
industrial application scenarios, we address the challenges of energy
management and predictive maintenance with the methods, organizational
structures, and tools of Industrial DevOps.
Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures
Distributed stream processing engines are designed with a focus on
scalability to process big data volumes in a continuous manner. We present the
Theodolite method for benchmarking the scalability of distributed stream
processing engines. The core of this method is the definition of use cases that
microservices implementing stream processing have to fulfill. For each use
case, our method identifies relevant workload dimensions that might affect the
scalability of a use case. We propose to design one benchmark per use case and
relevant workload dimension. We present a general benchmarking framework, which
can be applied to execute the individual benchmarks for a given use case and
workload dimension. Our framework executes an implementation of the use case's
dataflow architecture for different workloads of the given dimension and
various numbers of processing instances. This way, it identifies how resource
demand evolves with increasing workloads. Within the scope of this paper, we
present 4 identified use cases, derived from processing Industrial Internet of
Things data, and 7 corresponding workload dimensions. We provide
implementations of 4 benchmarks with Kafka Streams and Apache Flink as well as
an implementation of our benchmarking framework to execute scalability
benchmarks in cloud environments. We use both for evaluating the Theodolite
method and for benchmarking Kafka Streams' and Flink's scalability for
different deployment options.
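The experiment loop described above, which executes a use case's dataflow for different workloads and numbers of processing instances to determine how resource demand evolves, can be sketched roughly as follows. All function names and the simulated per-instance capacity are illustrative placeholders, not part of the actual Theodolite implementation:

```python
# Minimal sketch of a Theodolite-style scalability experiment loop.
# `run_experiment` and the 50k msgs/s per-instance capacity are invented
# placeholders for illustration only.

def run_experiment(workload, instances):
    """Placeholder: deploy the use case with `instances` replicas, generate
    `workload` messages/s of load, and report whether the SLO (e.g., no
    growing record lag) is met. Here we simulate a fixed capacity."""
    capacity = instances * 50_000
    return workload <= capacity

def demand(workloads, instance_counts):
    """For each workload, find the minimal number of instances that
    satisfies the SLO. The resulting mapping characterizes how resource
    demand evolves with increasing load."""
    result = {}
    for w in workloads:
        for n in sorted(instance_counts):
            if run_experiment(w, n):
                result[w] = n
                break
        else:
            result[w] = None  # SLO unreachable within the tested range
    return result

print(demand([100_000, 200_000, 400_000], [1, 2, 4, 8]))
# → {100000: 2, 200000: 4, 400000: 8}
```

The benchmark result is thus a function from workload to minimal required resources, which is what allows scalability of different engines and deployment options to be compared.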
Final Report KMU-innovativ: Joint Project Titan, an Industrial DevOps Platform for Iterative Process Integration and Automation
Digitalizing business processes while building up an IT infrastructure is complex. New, sometimes expensive technologies are deployed, but proven practices are lacking. The resulting complexity can only insufficiently be addressed with the classical project model. Classical planning is based on assumptions that often prove wrong too late. The traditional project model offers only limited mechanisms to correct a once-planned path toward a set goal. The goal of the titan project is the integration of development tools and operational technology into a software platform. Combined with innovative "Industrial DevOps" methods, the complex task of iterative system integration in an industrial environment is to be simplified considerably. The titan project produced the prototype of a software platform that allows industrial users to apply these practices to the problems of digitalization. Besides goals such as ensuring and verifying quality, resilience, and scalability, the elimination of vendor lock-in is a central aspect of the project. In particular, users can adapt processes themselves by means of Flow Based Automation, new software versions and changes can routinely be put into operation on the system, and domain-specific components can be used and managed for complex tasks. The titan open-source platform is being further developed within a community. The innovations created during the project are thereby refined and applied in various fields. Experience from projects flows back into the software and is disseminated within the community.
A scalable architecture for power consumption monitoring in industrial production environments
Detailed knowledge about the electrical power consumption in industrial production environments is a prerequisite to reduce and optimize their power consumption. Today's industrial production sites are equipped with a variety of sensors that, inter alia, monitor electrical power consumption in detail. However, these environments often lack automated data collation and analysis. We present a system architecture that integrates different sensors and analyzes and visualizes the power consumption of devices, machines, and production plants. It is designed with a focus on scalability to support production environments of various sizes and to handle varying loads. We argue that a scalable architecture in this context must meet requirements for fault tolerance, extensibility, real-time data processing, and resource efficiency. As a solution, we propose a microservice-based architecture augmented by big data and stream processing techniques. Applying the fog computing paradigm, parts of it are deployed in an elastic, central cloud, while other parts run decentralized, directly in the production environment. A prototype implementation of this architecture demonstrates how different kinds of sensors can be integrated and how their measurements can be continuously aggregated. In order to make the analyzed data comprehensible, it features a single-page web application that provides different forms of data visualization. We deploy this pilot implementation in the data center of a medium-sized enterprise, where we successfully monitor the power consumption of 16 servers. Furthermore, we show the scalability of our architecture with 20,000 simulated sensors.
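The continuous aggregation of measurements across devices, machines, and production plants can be illustrated with a minimal sketch. The hierarchy, node names, and wattage values below are invented for illustration and do not come from the paper:

```python
from collections import defaultdict

# Hedged sketch of hierarchical power aggregation: per-sensor readings are
# rolled up along a device -> machine/rack -> plant hierarchy, so every
# level reports the consumption of everything below it.
# The hierarchy and readings are made-up example data.

HIERARCHY = {              # child -> parent
    "server-01": "rack-A",
    "server-02": "rack-A",
    "rack-A": "data-center",
}

def aggregate(readings):
    """Propagate instantaneous power readings (watts) to every ancestor
    in the hierarchy and return totals for all levels."""
    totals = defaultdict(float)
    for sensor, watts in readings.items():
        node = sensor
        totals[node] += watts
        while node in HIERARCHY:       # climb toward the root
            node = HIERARCHY[node]
            totals[node] += watts
    return dict(totals)

print(aggregate({"server-01": 180.0, "server-02": 220.0}))
# → {'server-01': 180.0, 'rack-A': 400.0, 'data-center': 400.0, 'server-02': 220.0}
```

In the architecture described, such aggregation would run continuously over sensor streams rather than over a single snapshot, but the roll-up logic per level is the same.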