
    A microservice architecture for predictive analytics in manufacturing

    This paper discusses the design, development and deployment of a flexible and modular platform supporting smart predictive maintenance operations, enabled by a microservices architecture and virtualization technologies. Virtualization allows the platform to be deployed in a multi-tenant environment, while facilitating resource isolation and independence from specific technologies or services. Moreover, the proposed platform supports scalable data storage for effective and efficient management of large volumes of Industry 4.0 data. Data-driven predictive maintenance methodologies are provided to the user as-a-service, facilitating offline training and online execution of pre-trained analytics models, while linking the raw data to contextual information supports their understanding and interpretation and guarantees interoperability across heterogeneous systems. A use case related to the predictive maintenance operations of a robotic manipulator is examined to demonstrate the effectiveness and efficiency of the proposed platform.
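
    The abstract does not include code; as an illustration only, the minimal Python sketch below shows the general "pre-trained model as-a-service" pattern it describes, using Flask and a hypothetical model file and endpoint name, none of which come from the paper.

    # Minimal sketch: exposing an offline-trained predictive-maintenance model as a microservice.
    # Assumptions (not from the paper): Flask for the HTTP layer, a pickled scikit-learn-style
    # model stored in "model.pkl", and a JSON payload carrying a "features" vector.
    import pickle
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    with open("model.pkl", "rb") as f:            # model trained offline, loaded at startup
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json()
        features = [payload["features"]]          # single sample, shape (1, n_features)
        prediction = float(model.predict(features)[0])   # online execution of the pre-trained model
        return jsonify({"prediction": prediction})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)        # each such service would run in its own container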

    Resource management in a containerized cloud : status and challenges

    Cloud computing heavily relies on virtualization, as virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
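
    As a toy illustration of the kind of allocation strategy such surveys discuss (not an algorithm taken from this one), the sketch below places containers on nodes with a simple first-fit heuristic based on CPU and memory requests; all node names and sizes are invented.

    # Toy first-fit placement of containers onto nodes by free CPU/memory. Illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        cpu: float                                   # free CPU cores
        mem: int                                     # free memory in MiB
        containers: list = field(default_factory=list)

    def first_fit(containers, nodes):
        """Assign each container to the first node with enough free resources."""
        placement = {}
        for cname, cpu_req, mem_req in containers:
            for node in nodes:
                if node.cpu >= cpu_req and node.mem >= mem_req:
                    node.cpu -= cpu_req
                    node.mem -= mem_req
                    node.containers.append(cname)
                    placement[cname] = node.name
                    break
            else:
                placement[cname] = None              # no node can host this container
        return placement

    nodes = [Node("edge-1", cpu=2.0, mem=2048), Node("cloud-1", cpu=16.0, mem=65536)]
    workload = [("ingest", 0.5, 256), ("analytics", 4.0, 8192), ("ui", 0.25, 128)]
    print(first_fit(workload, nodes))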

    Burst-aware predictive autoscaling for containerized microservices

    Autoscaling methods are used for cloud-hosted applications to dynamically scale the allocated resources and guarantee Quality-of-Service (QoS). Public-facing applications serve dynamic workloads, which contain bursts and pose challenges for autoscaling methods to ensure application performance. Existing state-of-the-art autoscaling methods are burst-oblivious when determining and provisioning the appropriate resources. For dynamic workloads, it is hard to detect and handle bursts online while maintaining application performance. In this article, we propose a novel burst-aware autoscaling method which detects bursts in dynamic workloads using workload forecasting, resource prediction, and scaling decision making, while minimizing response-time service-level objective (SLO) violations. We evaluated our approach through a trace-driven simulation, using multiple synthetic and realistic bursty workloads for containerized microservices, and it improves performance compared with existing state-of-the-art autoscaling methods. The experiments show a 1.09× increase in total processed requests, a 5.17× reduction in SLO violations, and a 0.767× increase in cost compared to the baseline method. This work was partially supported by the European Research Council (ERC) under the EU Horizon 2020 programme (GA 639595), the Spanish Ministry of Economy, Industry and Competitiveness (TIN2015-65316-P and IJCI2016-27485) and the Generalitat de Catalunya (2014-SGR-1051).
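
    The following is a generic burst-aware scaling sketch, not the authors' algorithm: it flags a burst when the latest arrival rate exceeds the recent mean by k standard deviations, and sizes the replica count from a forecast or observed rate and an assumed per-replica capacity. All numbers are placeholders.

    # Generic sketch of burst detection plus a capacity-based scaling decision.
    import math
    import statistics

    def is_burst(rates, k=3.0):
        """rates: recent requests/s samples, newest last; burst if the newest is an outlier."""
        history, latest = rates[:-1], rates[-1]
        mean = statistics.mean(history)
        std = statistics.pstdev(history) or 1e-9     # avoid division-free zero threshold
        return latest > mean + k * std

    def replicas_needed(predicted_rate, per_replica_capacity, headroom=1.2):
        """Provision enough replicas for the predicted rate plus a safety headroom."""
        return max(1, math.ceil(predicted_rate * headroom / per_replica_capacity))

    window = [110, 120, 115, 118, 122, 640]          # sudden spike in the last sample
    if is_burst(window):
        print("burst detected ->", replicas_needed(window[-1], per_replica_capacity=100), "replicas")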

    Implementation and evaluation of a hybrid microservice infrastructure

    Large-scale computing platforms, like the IBM System z mainframe, are often administered in an out-of-band manner, with a large portion of the systems management software running on dedicated servers that cause extra hardware costs. Splitting systems management applications into smaller services and spreading them over the platform itself is an approach that can potentially increase the utilization of platform-internal resources while lowering the need for external server hardware, which would reduce the extra costs significantly. With regard to IBM System z, however, this raises the general question of how a great number of critical services can be run and managed reliably on a heterogeneous computing landscape, as out-of-band servers and internal processor modules do not share the same processor architecture. In this thesis, we introduce our prototypical design of a microservice infrastructure for multi-architecture environments, built entirely upon preexisting open source projects and the features they already provide. We present how scheduling of services according to application-specific requirements and particularities can be achieved in a way that offers maximum transparency and comfort for platform operators and users.
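
    To illustrate the multi-architecture scheduling problem described above (this is an invented sketch, not the thesis's implementation), the snippet below filters placement candidates by CPU architecture, as would be needed in a mixed s390x/x86_64 landscape; all node and service names are made up.

    # Illustrative sketch: place each service only on nodes whose CPU architecture matches.
    def schedule(services, nodes):
        """services: {name: required_arch}; nodes: {name: arch}. Returns service -> node or None."""
        assignment = {}
        for service, required_arch in services.items():
            candidates = [n for n, arch in nodes.items() if arch == required_arch]
            assignment[service] = candidates[0] if candidates else None
        return assignment

    nodes = {"oob-server-1": "x86_64", "cpc-module-1": "s390x", "cpc-module-2": "s390x"}
    services = {"hardware-monitor": "s390x", "legacy-gui": "x86_64", "event-collector": "s390x"}
    print(schedule(services, nodes))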

    Achieving Continuous Delivery of Immutable Containerized Microservices with Mesos/Marathon

    In recent years, DevOps methodologies have been introduced to extend traditional agile principles, bringing about a paradigm shift towards migrating applications to a cloud-native architecture. Today, microservices, containers, and Continuous Integration/Continuous Delivery have become critical to any organization's transformation journey towards developing lean artifacts and dealing with the growing demand of pushing new features and iterating rapidly to keep customers happy. Traditionally, applications have been packaged and delivered in virtual machines. With the adoption of microservices architectures, however, containerized applications are becoming the standard way to deploy services to production. Thanks to container orchestration tools like Marathon, containers can now be deployed and monitored at scale with ease. Microservices and containers, along with container orchestration tools, disrupt and redefine DevOps, especially the delivery pipeline. This Master's thesis project focuses on deploying highly scalable microservices packaged as immutable containers onto a Mesos cluster using a container orchestration framework called Marathon. This is achieved by implementing a CI/CD pipeline and bringing into play current practices and tools such as Docker, Terraform, Jenkins, Consul, Vault, and Prometheus. The thesis aims to showcase why we need to design systems around a microservices architecture, package cloud-native applications into containers, and adopt service discovery, among other trends within the DevOps realm that contribute to the continuous delivery pipeline. At BetterDoctor Inc., it was observed that this project improved the average release cycle, increased team members' productivity and collaboration, and reduced infrastructure costs and deployment failure rates. With the CD pipeline in place, together with container orchestration tools, it was observed that the organisation could achieve hyperscale computing as and when business demands.
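
    As a rough illustration of the "immutable container deployed through Marathon" step of such a pipeline (a sketch under assumptions, not the thesis's pipeline code), the snippet below registers a containerized application with Marathon's REST API; the Marathon URL, app id, image tag, and health-check path are placeholders.

    # Sketch only: a CI/CD job pushing an immutable image reference to Marathon.
    import requests

    MARATHON_URL = "http://marathon.example.com:8080"    # placeholder endpoint

    app_definition = {
        "id": "/orders-service",                         # hypothetical microservice
        "instances": 3,
        "cpus": 0.5,
        "mem": 256,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "registry.example.com/orders-service:1.4.2"},
        },
        "healthChecks": [{"protocol": "HTTP", "path": "/health"}],
    }

    # PUT to /v2/apps/<id> creates or updates the app, which suits an immutable-image,
    # redeploy-on-every-release workflow driven from the CI/CD pipeline.
    resp = requests.put(f"{MARATHON_URL}/v2/apps/orders-service", json=app_definition)
    resp.raise_for_status()
    print(resp.json())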

    Building an IoT platform based on service containerisation

    IoT platforms have become quite complex from a technical viewpoint, becoming the cornerstone for information sharing, storing, and indexing given the unprecedented scale of smart services made available by massive deployments of large sets of data-enabled devices. These platforms rely on structured formats that exploit standard technologies to deal with the gathered data, thus creating the need for carefully designed, customised systems that can handle thousands of heterogeneous sensors/actuators, multiple processing frameworks, and storage solutions. We present the SCoT2.0 platform, a general-purpose IoT platform that can acquire, process, and visualise data using methods adequate for both real-time processing and long-term Machine Learning (ML)-based analysis. Our goal is to develop a large-scale system that can be applied to multiple real-world scenarios and is potentially deployable on private clouds for multiple verticals. Our approach relies on extensive service containerisation, and we present the different design choices, technical challenges, and solutions found while building our own IoT platform. We validate this platform by supporting two very distinct IoT projects (750 physical devices), and we analyse scaling issues within the platform components.
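
    The abstract stays at the architectural level; purely as an illustration of one containerisable building block of such a platform (not SCoT2.0 code), the sketch below is a minimal ingestion service that accepts JSON sensor readings over HTTP and hands them to a processing queue. Paths and ports are arbitrary.

    # Minimal, standard-library-only ingestion microservice sketch.
    import json
    import queue
    from http.server import BaseHTTPRequestHandler, HTTPServer

    readings = queue.Queue()      # stand-in for a message broker / stream processor

    class IngestHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            try:
                reading = json.loads(self.rfile.read(length))
                readings.put(reading)               # forward to downstream processing
                self.send_response(202)
            except (json.JSONDecodeError, ValueError):
                self.send_response(400)             # reject malformed payloads
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), IngestHandler).serve_forever()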

    Innovative techniques for deployment of microservices in cloud-edge environment

    The evolution of microservice architecture allows complex applications to be structured into independent modular components (microservices), making them easier to develop and manage. Complemented with containers, microservices can be deployed across any cloud and edge environment. Although containerized microservices are getting popular in industry, less research is available, especially in the area of performance characterization and optimized deployment of microservices. Depending on the application type (e.g. web, streaming) and the provided functionalities (e.g. filtering, encryption/decryption, storage), microservices are heterogeneous, with specific functional and Quality of Service (QoS) requirements. Further, cloud and edge environments are also complex, with a huge number of cloud providers and edge devices along with their host configurations. Due to these complexities, finding a suitable deployment solution for microservices becomes challenging. To handle the deployment of microservices in cloud and edge environments, this thesis presents multilateral research towards microservice performance characterization, run-time evaluation and system orchestration. Considering a variety of applications, numerous algorithms and policies have been proposed, implemented and prototyped. The main contributions of this thesis are given below:
    - Characterizes the performance of containerized microservices considering various types of interference in the cloud environment.
    - Proposes and models an orchestrator, SDBO, for benchmarking simple web-application microservices in a multi-cloud environment. SDBO is validated using an e-commerce test web-application.
    - Proposes and models an advanced orchestrator, GeoBench, for the deployment of complex web-application microservices in a multi-cloud environment. GeoBench is validated using a geo-distributed test web-application.
    - Proposes and models a run-time deployment framework for distributed streaming application microservices in a hybrid cloud-edge environment. The model is validated using a real-world healthcare analytics use case for human activity recognition.
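
    The abstract does not detail the internals of SDBO or GeoBench; as an invented toy example of the cloud-edge placement question it raises, the rule below sends latency-sensitive microservices to the edge while capacity lasts and falls back to the cloud otherwise. All thresholds, sizes, and service names are arbitrary.

    # Toy cloud-vs-edge placement rule, for illustration only.
    def place(microservices, edge_capacity_mb, latency_slo_ms=50):
        plan = {}
        for name, mem_mb, latency_req_ms in microservices:
            if latency_req_ms <= latency_slo_ms and mem_mb <= edge_capacity_mb:
                plan[name] = "edge"
                edge_capacity_mb -= mem_mb
            else:
                plan[name] = "cloud"
        return plan

    services = [("activity-recognition", 512, 20),   # tight latency: prefer edge
                ("model-training", 8192, 5000),      # batch work: cloud
                ("dashboard", 256, 200)]
    print(place(services, edge_capacity_mb=1024))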

    On-premise containerized, light-weight software solutions for Biomedicine

    Bioinformatics software systems are critical tools for analysing large-scale biological data, but their design and implementation can be challenging due to the need for reliability, scalability, and performance. This thesis investigates the impact of several software approaches on the design and implementation of bioinformatics software systems. These approaches include software patterns, microservices, distributed computing, containerisation and container orchestration. The research focuses on understanding how these techniques affect the reliability, scalability, performance, and efficiency of bioinformatics software systems. Furthermore, this research highlights the challenges and considerations involved in their implementation. This study also examines potential solutions for implementing container orchestration in bioinformatics research teams with limited resources and the challenges of using container orchestration. Additionally, the thesis considers microservices and distributed computing and how these can be optimised in the design and implementation process to enhance the productivity and performance of bioinformatics software systems. The research was conducted using a combination of software development, experimentation, and evaluation. The results show that implementing software patterns can significantly improve the code accessibility and structure of bioinformatics software systems. Microservices and containerisation also enhanced system reliability, scalability, and performance. Additionally, the study indicates that adopting advanced software engineering practices, such as model-driven design and container orchestration, can facilitate efficient and productive deployment and management of bioinformatics software systems, even for researchers with limited resources. Overall, we develop a software system integrating all our findings. Our proposed system demonstrated the ability to address challenges in bioinformatics. The thesis makes several key contributions in addressing the research questions surrounding the design, implementation, and optimisation of bioinformatics software systems using software patterns, microservices, containerisation, and advanced software engineering principles and practices. Our findings suggest that incorporating these technologies can significantly improve the reliability, scalability, performance, efficiency, and productivity of bioinformatics software systems.
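
    As an illustration of the containerisation pattern the thesis studies (a sketch under assumptions, not code from the thesis), the snippet below wraps a containerised command-line bioinformatics tool behind a small Python function using the Docker SDK; the image name, tag, and file paths are placeholders.

    # Run a tool inside a disposable container and capture its output.
    import docker

    def run_samtools_stats(bam_path, image="example.org/samtools:latest"):
        """Run `samtools stats` on a BAM file inside a container and return its stdout."""
        client = docker.from_env()
        output = client.containers.run(
            image,
            ["samtools", "stats", "/data/input.bam"],
            volumes={bam_path: {"bind": "/data/input.bam", "mode": "ro"}},
            remove=True,          # containers are treated as disposable, keeping hosts clean
        )
        return output.decode()

    if __name__ == "__main__":
        print(run_samtools_stats("/tmp/sample.bam"))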