
    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs or junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Software Product Lines for Multi-Cloud Microservices Configuration

    Multi-cloud computing [2] and microservices architecture [1] are current trends in the development of cloud applications. Multiple clouds can be used to reduce provider dependence, comply with constraints and regulations, or optimize costs and quality of service. Microservices architecture improves resource usage and scalability by decomposing applications into small, highly decoupled services that may be developed, deployed and scaled independently. However, setting up a multi-cloud environment to deploy and run these applications is very complex. Developers must consider the many different offers from available cloud providers to select a set of providers, and configure them accordingly to deploy each of the application services. Microservices may be developed by different teams, using different technologies, and therefore may require different features from cloud providers. Moreover, as the cloud market evolves, providers' features may be introduced or retired, requiring configuration changes. Taking all these factors into account to set up a multi-cloud environment to deploy and run a microservices application is an error-prone and time-consuming task, which calls for supporting tools. In this workshop, we will present our approach to automate the setup of multi-cloud environments [4]. Our approach employs software product line [3] principles and ontology reasoning to get from a high-level description of multi-cloud requirements to a selection of cloud providers and their configurations. In addition, we will highlight current challenges and ongoing work to build self-adaptive multi-cloud environments, capable of identifying optimization opportunities as application requirements and the cloud market evolve.
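
    The approach itself rests on software product line principles and ontology reasoning; the Python sketch below only illustrates the general idea of the selection step, matching each microservice's required features against provider feature sets. All provider, service, and feature names are invented, and this is not the authors' tool [4].

        # Minimal illustration (not the authors' implementation): pick, for each
        # microservice, the cloud providers whose feature set covers its requirements.
        # Provider and feature names are hypothetical.

        PROVIDERS = {
            "cloud_a": {"docker", "postgresql", "eu_region", "autoscaling"},
            "cloud_b": {"docker", "mysql", "us_region"},
            "cloud_c": {"kubernetes", "postgresql", "eu_region"},
        }

        SERVICES = {
            "orders":   {"docker", "postgresql", "eu_region"},  # required features
            "frontend": {"docker", "autoscaling"},
        }

        def select_providers(services, providers):
            """Return, per service, the providers whose features satisfy its requirements."""
            selection = {}
            for name, required in services.items():
                candidates = [p for p, feats in providers.items() if required <= feats]
                if not candidates:
                    raise ValueError(f"no provider satisfies {name}: {required}")
                selection[name] = candidates
            return selection

        if __name__ == "__main__":
            print(select_providers(SERVICES, PROVIDERS))
            # {'orders': ['cloud_a'], 'frontend': ['cloud_a']}

    A real configuration additionally has to respect cross-provider constraints and keep up as providers introduce or retire features, which is where the product-line and ontology machinery comes in.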

    A highly-available and scalable microservice architecture for access management

    Access management is a key aspect of providing secure services and applications in information technology. Ensuring secure access is particularly challenging in a cloud environment wherein resources are scaled dynamically. In fact, keeping track of dynamic cloud instances and administering access to them requires careful coordination and mechanisms to ensure reliable operations. PrivX is a commercial offering from SSH Communications Security Oyj that automatically scans and keeps track of cloud instances and manages access to them. PrivX is currently built on the microservices approach, wherein the application is structured as a collection of loosely coupled services. However, PrivX requires external modules with specific capabilities to ensure high availability. Moreover, complex scripts are required to monitor the whole system. The goal of this thesis is to make PrivX highly available and scalable by using a container orchestration framework. To this end, we first conduct a detailed study of the most widely used container orchestration frameworks: Kubernetes, Docker Swarm and Nomad. We then select Kubernetes based on an evaluation of features relevant to the considered scenario. We package the individual components of PrivX, including its database, into Docker containers and deploy them on a Kubernetes cluster. We also build a prototype system to demonstrate how microservices can be managed on a Kubernetes cluster. Additionally, an auto-scaling tool is created to scale specific services based on predefined rules. Finally, we evaluate the service recovery time for each of the services in PrivX, both in the RPM deployment model and in the prototype Kubernetes deployment model. We find that there is no significant difference in service recovery time between the two models. However, Kubernetes ensured high availability of the services. We find that Kubernetes is the preferred mode for deploying PrivX and that it makes PrivX highly available and scalable.
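
    As a rough illustration of what a rule-based auto-scaling tool for such a deployment might look like (not the thesis' actual code), the sketch below uses the official kubernetes Python client to raise the replica count of a deployment when a load threshold is exceeded; the "privx-api" deployment name, the "privx" namespace, and the 50 requests/s rule are invented examples.

        # Hedged sketch: scale a Kubernetes deployment when a predefined load rule fires.
        # Assumes the official `kubernetes` Python client and access to a cluster.
        from kubernetes import client, config

        def scale_if_needed(namespace, deployment, current_load, load_per_replica, max_replicas=10):
            """Raise the replica count when observed load exceeds what the replicas can handle."""
            config.load_kube_config()          # use load_incluster_config() when running in-cluster
            apps = client.AppsV1Api()
            dep = apps.read_namespaced_deployment(deployment, namespace)
            replicas = dep.spec.replicas or 1
            desired = min(max_replicas, -(-current_load // load_per_replica))  # ceiling division
            if desired > replicas:
                apps.patch_namespaced_deployment(
                    deployment, namespace, {"spec": {"replicas": desired}})

        # Example rule: allow at most 50 requests/s per replica of a hypothetical service.
        # scale_if_needed("privx", "privx-api", current_load=180, load_per_replica=50)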

    Microservice Transition and its Granularity Problem: A Systematic Mapping Study

    Microservices have gained wide recognition and acceptance in the software industry as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been highly motivated by the need for better alignment of technical design decisions with improving value potentials of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and consensus on the principles and activities underlying "micro-ing" architectures. In this paper, we report on a systematic mapping study that consolidates various views, approaches and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and the technical activities underlying it. We term the transition and the technical activities leading to microservice architectures microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class entity. This study reviews the state of the art and practice related to reasoning about microservice granularity; it reviews modelling approaches, aspects considered, guidelines and processes used to reason about microservice granularity. This study identifies opportunities for future research and development related to reasoning about microservice granularity. Comment: 36 pages including references, 6 figures, and 3 tables.
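
    Purely as an illustration of treating granularity adaptation as a first-class entity (not a model taken from the surveyed studies), a decision about splitting or merging a service can be represented explicitly so that it can be stored, compared, and revisited; all attribute names below are invented.

        # Illustrative only: a granularity decision as an explicit, comparable object.
        from dataclasses import dataclass, field

        @dataclass
        class GranularityDecision:
            service: str
            action: str                                   # e.g. "split" or "merge"
            drivers: list = field(default_factory=list)   # e.g. ["independent scaling"]
            expected_cost: float = 0.0                    # added coordination/communication overhead
            expected_benefit: float = 0.0                 # expected scalability/maintainability gain

            def worthwhile(self) -> bool:
                return self.expected_benefit > self.expected_cost

        decision = GranularityDecision(
            service="checkout", action="split",
            drivers=["independent scaling", "separate release cadence"],
            expected_cost=2.0, expected_benefit=3.5)
        print(decision.worthwhile())  # True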

    Adapting Microservices in the Cloud with FaaS

    This project benchmarks microservices and Function-as-a-Service (FaaS) across the dimensions of performance and cost. To enable this comparison, the paper proposes a benchmark framework.
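
    A minimal sketch of the kind of measurement such a benchmark framework could collect is shown below; the two URLs are placeholders standing in for a containerized microservice and a FaaS function exposing the same operation, and the sketch is not the project's framework.

        # Measure request latency against two hypothetical endpoints and compare them.
        import statistics
        import time
        import urllib.request

        TARGETS = {
            "microservice": "http://localhost:8080/resize",   # placeholder URL
            "faas":         "https://example.com/fn/resize",  # placeholder URL
        }

        def measure(url, runs=50):
            """Return median and worst-case latency in seconds over `runs` requests."""
            latencies = []
            for _ in range(runs):
                start = time.perf_counter()
                urllib.request.urlopen(url).read()
                latencies.append(time.perf_counter() - start)
            return statistics.median(latencies), max(latencies)

        for name, url in TARGETS.items():
            median, worst = measure(url)
            print(f"{name}: median {median * 1000:.1f} ms, worst {worst * 1000:.1f} ms")

    Cost would be compared separately, e.g. per-invocation FaaS pricing against the price of the instances that keep the microservice running.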

    RobotKube: Orchestrating Large-Scale Cooperative Multi-Robot Systems with Kubernetes and ROS

    Modern cyber-physical systems (CPS) such as Cooperative Intelligent Transport Systems (C-ITS) are increasingly defined by the software which operates these systems. In practice, microservice architectures can be employed, which may consist of containerized microservices running in a cluster of robots and supporting infrastructure. These microservices need to be orchestrated dynamically according to ever-changing requirements posed on the system. Additionally, these systems are embedded in DevOps processes aiming at continually updating and upgrading both the capabilities of CPS components and of the system as a whole. In this paper, we present RobotKube, an approach to orchestrating containerized microservices for large-scale cooperative multi-robot CPS based on Kubernetes. We describe how to automate the orchestration of software across a CPS, including the possibility to monitor and selectively store relevant accruing data. In this context, we present two main components of such a system: an event detector capable of, e.g., requesting the deployment of additional applications, and an application manager capable of automatically configuring the required changes in the Kubernetes cluster. By combining the widely adopted Kubernetes platform with the Robot Operating System (ROS), we enable the use of standard tools and practices for developing, deploying, scaling, and monitoring microservices in C-ITS. We demonstrate and evaluate RobotKube in an exemplary and reproducible use case that we make publicly available at https://github.com/ika-rwth-aachen/robotkube. Comment: 7 pages, 2 figures, 2 tables; accepted for publication as part of the 26th IEEE International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, September 24-28, 2023.
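
    To make the event-detector/application-manager interplay more concrete, the following heavily simplified Python sketch (not RobotKube's actual code) reacts to a detected event by asking the Kubernetes API to deploy an additional application; the event type, application name, and image are invented, and the official kubernetes client is assumed.

        # Toy event detector plus application manager deploying an extra app on demand.
        from kubernetes import client, config

        def on_event(event_type: str) -> bool:
            """Toy detector: request an additional recording application for this event."""
            return event_type == "near_collision"          # hypothetical event type

        def deploy_application(name: str, image: str, namespace: str = "default"):
            """Toy application manager: apply the requested change to the cluster."""
            config.load_kube_config()
            deployment = client.V1Deployment(
                metadata=client.V1ObjectMeta(name=name),
                spec=client.V1DeploymentSpec(
                    replicas=1,
                    selector=client.V1LabelSelector(match_labels={"app": name}),
                    template=client.V1PodTemplateSpec(
                        metadata=client.V1ObjectMeta(labels={"app": name}),
                        spec=client.V1PodSpec(containers=[
                            client.V1Container(name=name, image=image)]))))
            client.AppsV1Api().create_namespaced_deployment(namespace, deployment)

        if on_event("near_collision"):
            deploy_application("rosbag-recorder", "example/rosbag-recorder:latest")  # hypothetical image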

    Achieving Continuous Delivery of Immutable Containerized Microservices with Mesos/Marathon

    In recent years, DevOps methodologies have been introduced to extend traditional agile principles, bringing about a paradigm shift in migrating applications towards a cloud-native architecture. Today, microservices, containers, and Continuous Integration/Continuous Delivery have become critical to any organization’s transformation journey towards developing lean artifacts and dealing with the growing demand to push new features and iterate rapidly to keep customers happy. Traditionally, applications have been packaged and delivered in virtual machines, but with the adoption of microservices architectures, containerized applications are becoming the standard way to deploy services to production. Thanks to container orchestration tools like Marathon, containers can now be deployed and monitored at scale with ease. Microservices and containers, along with container orchestration tools, disrupt and redefine DevOps, especially the delivery pipeline. This Master’s thesis project focuses on deploying highly scalable microservices packaged as immutable containers onto a Mesos cluster using a container orchestration framework called Marathon. This is achieved by implementing a CI/CD pipeline and bringing into play some of the latest practices and tools, such as Docker, Terraform, Jenkins, Consul, Vault, and Prometheus. The thesis aims to showcase why we need to design systems around a microservices architecture, package cloud-native applications into containers, and adopt service discovery and other recent trends within the DevOps realm that contribute to the continuous delivery pipeline. At BetterDoctor Inc., it was observed that this project improved the average release cycle, increased team members’ productivity and collaboration, and reduced infrastructure costs and deployment failure rates. With the CD pipeline and container orchestration tools in place, it has been observed that the organisation can achieve hyperscale computing as and when business demands.
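
    As a rough sketch of the deployment step only (not the thesis’ pipeline), the snippet below pushes a new immutable image tag to Marathon over its REST API; the Marathon endpoint, app id, and image are placeholders, and in the setup described above such a call would typically be triggered from Jenkins.

        # Create or update a Marathon app so that it runs a given immutable image tag.
        import json
        import urllib.request

        MARATHON = "http://marathon.example.com:8080"   # placeholder Marathon endpoint

        def deploy(app_id: str, image: str, instances: int = 2):
            app = {
                "id": app_id,
                "instances": instances,
                "cpus": 0.5,
                "mem": 256,
                "container": {"type": "DOCKER",
                              "docker": {"image": image, "network": "BRIDGE"}},
            }
            req = urllib.request.Request(
                f"{MARATHON}/v2/apps/{app_id.lstrip('/')}",
                data=json.dumps(app).encode(),
                headers={"Content-Type": "application/json"},
                method="PUT")                           # PUT /v2/apps/{id} updates (or creates) the app
            with urllib.request.urlopen(req) as resp:
                print(resp.status, resp.read().decode())

        # Each CI build produces a new immutable tag, so a release is just a new image reference:
        # deploy("/doctor-search", "registry.example.com/doctor-search:1.4.2")

    Because each container image is immutable, a rollback amounts to pointing the app back at a previous tag.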