35 research outputs found

    Towards quantifiable boundaries for elastic horizontal scaling of microservices

    One of the most useful features of a microservices architecture is its ability to scale horizontally. However, not all services scale in or out uniformly. The performance of an application composed of microservices depends largely on a suitable combination of replica count and resource capacity. In practice, this limits the efficiency of autoscalers, which often overscale based on an isolated consideration of single-service metrics. Consequently, application providers pay more than necessary with no gain in overall performance. Solving this issue requires an application-specific determination of scaling limits, because an application-agnostic solution is generally infeasible. In this paper, we study microservices scalability, the auto-scaling of containers as microservice implementations, and the relation between the number of replicas and the resulting application task performance. We contribute a mathematical approach for determining suitable replica counts. Furthermore, we offer a calibration software tool which places scalability boundaries into declarative composition descriptions of applications, ready to be consumed by cloud platforms.
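
    The paper's own replica-count model is not reproduced here; as a minimal sketch of the idea, the hypothetical helper below combines a calibrated per-replica throughput with a scalability boundary so that an autoscaler stops adding replicas once they no longer improve application performance. All names and numbers are illustrative assumptions.

        import math

        def replica_count(target_rps: float,
                          per_replica_rps: float,
                          max_useful_replicas: int) -> int:
            """Illustrative replica-count determination (not the paper's model).

            target_rps          -- request rate the application must sustain
            per_replica_rps     -- calibrated throughput of a single replica
            max_useful_replicas -- calibrated boundary beyond which additional
                                   replicas yield no further performance gain
            """
            needed = math.ceil(target_rps / per_replica_rps)
            # Clamp at the boundary so the autoscaler does not overscale.
            return min(needed, max_useful_replicas)

        # Example: 1200 req/s expected, one replica handles ~180 req/s, and
        # calibration found no gain beyond 5 replicas for this service.
        print(replica_count(1200, 180, 5))  # -> 5 instead of 7

    In the spirit of the paper's calibration tool, such a boundary could then be written into the application's declarative composition description for the cloud platform to enforce.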

    Arquitectura basada en Microservicios y DevOps para una ingeniería de software continua

    Microservices are conceived as an architectural style focused on developing applications as a set of services that are independent, scalable, collaborative, evolutionary, and capable of adapting to complex ecosystems. DevOps, in turn, is a paradigm built on a set of principles focused on continuous software delivery and integration; it implies a new culture for developing and deploying software in highly collaborative and agile contexts aimed at reducing the gap between development and operations. In this context, the present work proposes an architecture based on microservices and DevOps for continuous software engineering and applies the proposal in a case study involving development teams formed by students of the Software and Systems Construction Workshop courses in the academic semesters 2018-1, 2018-2, 2019-1, and 2019-2, led by the authors of this research. The resulting software product consists of a set of apps implemented with leading technology stacks under a disruptive approach.

    Adoption of microservices in industrial information systems: a systematic literature review

    The internet, digitalization and globalization have transformed customer expectations and the way business is done. Product life cycles have shortened, products need to be customizable, and production needs to be scalable. These changes are also reflected in industrial operations. Rapid technological advancements have increased the role of software in industrial facilities, and the software in use has to enable untraditional flexibility, interoperability and scalability. Microservices-based architecture has been seen as the state-of-the-art way to develop flexible, interoperable and scalable software, and microservices have been applied to cloud-native consumer applications with enormous success. The goal of this thesis is to analyze how to adopt microservices in industrial information systems. General information and characteristics of microservices are provided as background, and a systematic literature review is conducted to answer the research problem. Material for the systematic literature review was found in multiple digital libraries, and 17 scientific papers matched the set inclusion criteria. The material was then analyzed with an extensively documented method. The thesis brings together the available publications on the topic, and guidelines for adopting microservices in industrial information systems were derived from the analysis. Real-time applications need special attention when using a microservices architecture, developers need to use proper tools for the tasks, and developers and users need to be properly introduced to service-oriented systems. Based on this thesis, microservices seem like a suitable approach for developing flexible industrial information systems that satisfy the new business requirements.

    Adaptation-Aware Architecture Modeling and Analysis of Energy Efficiency for Software Systems

    This thesis presents an approach for the design-time analysis of energy efficiency for static and self-adaptive software systems. The quality characteristics of a software system, such as performance and operating costs, strongly depend upon its architecture. Software architecture is a high-level view of software artifacts that reflects essential quality characteristics of a system under design. Design decisions made on an architectural level have a decisive impact on the quality of a system. Revising architectural design decisions late in development requires significant effort. Architectural analyses allow software architects to reason about the impact of design decisions on quality, based on an architectural description of the system. An essential quality goal is the reduction of cost while maintaining other quality goals. Power consumption accounts for a significant part of the Total Cost of Ownership (TCO) of data centers. In 2010, data centers contributed 1.3% of the worldwide power consumption. However, reasoning about the energy efficiency of software systems is excluded from the systematic analysis of software architectures at design time. Energy efficiency can only be evaluated once the system is deployed and operational. One approach to reducing power consumption or cost is the introduction of self-adaptivity to a software system. Self-adaptive software systems execute adaptations to provision costly resources depending on user load. The execution of reconfigurations can increase energy efficiency and reduce cost. If performed improperly, however, the additional resources required to execute a reconfiguration may outweigh its positive effect. Existing architecture-level energy analysis approaches offer limited accuracy or only consider a limited set of system features, e.g., the communication style used. Predictive approaches from the embedded systems and Cloud Computing domains operate on an abstraction that is not suited for architectural analysis. The execution of adaptations can consume additional resources, and this additional consumption can reduce performance and energy efficiency. Design-time quality analyses for self-adaptive software systems ignore this transient effect of adaptations. This thesis makes the following contributions to enable the systematic consideration of energy efficiency in the architectural design of self-adaptive software systems: First, it presents a modeling language that captures power consumption characteristics on an architectural abstraction level. Second, it introduces an energy efficiency analysis approach that uses instances of our power consumption modeling language in combination with existing performance analyses for architecture models. The developed analysis supports reasoning about energy efficiency for static and self-adaptive software systems. Third, to ease the specification of power consumption characteristics, we provide a method for extracting power models for server environments. The method encompasses an automated profiling of servers based on a set of restrictions defined by the user. A model training framework extracts a set of power models specified in our modeling language from the resulting profile. The method ranks the trained power models based on their predicted accuracy. Lastly, this thesis introduces a systematic modeling and analysis approach for considering transient effects in design-time quality analyses. The approach explicitly models inter-dependencies between reconfigurations, performance and power consumption.
We provide a formalization of the execution semantics of the model. Additionally, we discuss how our approach can be integrated with existing quality analyses of self-adaptive software systems. We validated the accuracy, applicability, and appropriateness of our approach in a variety of case studies. The first two case studies investigated the accuracy and appropriateness of our modeling and analysis approach. The first study evaluated the impact of design decisions on the energy efficiency of a media hosting application. The energy consumption predictions achieved an absolute error lower than 5.5% across different user loads. Our approach predicted the relative impact of the design decisions on energy efficiency with an error of less than 18.94%. The second case study used two variants of the Spring-based community case study system PetClinic and complements the accuracy and appropriateness evaluation of our modeling and analysis approach. We were able to predict the energy consumption of both variants with an absolute error of no more than 2.38%. In contrast to the first case study, we derived all models automatically, using our power model extraction framework as well as an extraction framework for performance models. The third case study applied our model-based prediction to evaluate the effect of different self-adaptation algorithms on energy efficiency. It involved scientific workloads executed in a virtualized environment. Our approach predicted the energy consumption with an error below 7.1%, even though we used coarse-grained measurement data of low accuracy to train the input models. The fourth case study evaluated the appropriateness and accuracy of the automated model extraction method using a set of Big Data and enterprise workloads. Our method produced power models with prediction errors below 5.9%. A secondary study evaluated the accuracy of extracted power models for different Virtual Machine (VM) migration scenarios. The results of the fifth case study showed that our approach for modeling transient effects improved the prediction accuracy for a horizontally scaling application. Leveraging the improved accuracy, we were able to identify design deficiencies of the application that otherwise would have remained unnoticed.
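
    The thesis's modeling language and extraction framework are not reproduced here; as a rough sketch under simplifying assumptions, the snippet below fits a commonly used linear utilization-based server power model, P(u) = P_idle + (P_busy - P_idle) * u, to profiled measurements by least squares. Sample values and names are hypothetical.

        import numpy as np

        def fit_linear_power_model(utilization, power_watts):
            """Fit P(u) = P_idle + (P_busy - P_idle) * u by least squares.

            utilization -- CPU utilization samples in [0, 1]
            power_watts -- measured wall power (W) for each sample
            """
            u = np.asarray(utilization, dtype=float)
            p = np.asarray(power_watts, dtype=float)
            # Design matrix [1, u]: intercept is idle power, slope is the dynamic range.
            design = np.column_stack([np.ones_like(u), u])
            (p_idle, dyn_range), *_ = np.linalg.lstsq(design, p, rcond=None)
            return p_idle, p_idle + dyn_range  # (P_idle, P_busy)

        # Hypothetical utilization/power profile of a single server
        p_idle, p_busy = fit_linear_power_model([0.1, 0.4, 0.7, 0.9], [95, 130, 165, 190])
        print(round(p_idle + (p_busy - p_idle) * 0.5, 1))  # predicted power at 50% load

    A two-parameter model like this is only one candidate; a ranking step in the spirit of the thesis would compare several fitted models by their prediction accuracy on held-out profile data.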

    Automated deployment of machine learning applications to the cloud

    The use of machine learning (ML) as a key technology in artificial intelligence (AI) is becoming increasingly important in the ongoing digitalization of business processes. However, the majority of the development effort of ML applications is not related to the programming of the ML model, but to the creation of the server structure, which is responsible for a highly available and error-free productive operation of the ML application. The creation of such a server structure by the developers is time-consuming and complicated, because extensive configurations have to be made. Besides the creation of the server structure, it is also useful not to put new ML application versions directly into production, but to observe the behavior of the ML application with respect to unknown data for quality assurance. For example, the error rate as well as the CPU and RAM consumption should be checked. The goal of this thesis is to collect requirements for a suitable server structure and an automation mechanism that generates this server structure, deploys the ML application and allows developers to observe the behavior of a new ML application version based on real-time user data. For this purpose, a systematic literature review is conducted to investigate how the behavior of ML applications can be analyzed under the influence of real-time user data before their productive operation. Subsequently, in the context of the requirements analysis, a target-performance analysis is carried out in the department of a management consulting company in the automotive sector. Together with the results of the literature research, a list of user stories for the automation tool is determined and prioritized. The automation tool is implemented in the form of a Python console application that enables the desired functionality by using IaC (Infrastructure as Code) and the AWS (Amazon Web Services) SDK in the cloud. The automation tool is finally evaluated in the department. The ten participants independently carry out predefined usage scenarios and then evaluate the tool using a questionnaire developed on the basis of the TAM model. The results of the evaluation are predominantly positive, and the constructive feedback of the participants includes numerous interesting comments on possible adaptations and extensions of the automation tool.
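
    The thesis's concrete server structure and metrics are not detailed in the abstract; as one hedged illustration of observing a candidate version via the AWS SDK, the snippet below pulls average CPU utilization for a containerized service from CloudWatch. The namespace, dimensions and names are assumptions for illustration, not the tool's actual configuration.

        from datetime import datetime, timedelta
        import boto3

        def average_cpu_utilization(service_name: str, cluster: str, hours: int = 1) -> float:
            """Average CPU utilization of a candidate service over the last `hours` hours.

            Assumes the new ML application version runs as an ECS service; the
            thesis's actual server structure may differ.
            """
            cloudwatch = boto3.client("cloudwatch")
            end = datetime.utcnow()
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/ECS",
                MetricName="CPUUtilization",
                Dimensions=[
                    {"Name": "ClusterName", "Value": cluster},
                    {"Name": "ServiceName", "Value": service_name},
                ],
                StartTime=end - timedelta(hours=hours),
                EndTime=end,
                Period=300,
                Statistics=["Average"],
            )
            points = stats.get("Datapoints", [])
            return sum(p["Average"] for p in points) / len(points) if points else 0.0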

    The Essence of Software Engineering

    Software Engineering; Software Development; Software Processes; Software Architectures; Software Management

    Adaptation-Aware Architecture Modeling and Analysis of Energy Efficiency for Software Systems

    This work presents an approach for the architectural analysis of energy efficiency for static and self-adaptive software systems. It introduces a modeling language that captures power consumption characteristics on an architectural level. The outlined analysis predicts the energy efficiency of systems described with this language. Lastly, this work introduces an approach for considering transient effects in design-time architecture analyses.

    Using Workload Prediction and Federation to Increase Cloud Utilization

    The widespread adoption of cloud computing has changed how large-scale computing infrastructure is built and managed. Infrastructure-as-a-Service (IaaS) clouds consolidate separate workloads onto a shared platform and provide a consistent quality of service by overprovisioning capacity. This additional capacity, however, remains idle for extended periods of time and represents a drag on system efficiency. The smaller scale of private IaaS clouds compared to public clouds exacerbates overprovisioning inefficiencies, as opportunities for workload consolidation in private clouds are limited. Federation and cycle harvesting capabilities from computational grids help to improve efficiency, but to date have seen only limited adoption in the cloud due to a fundamental mismatch between the usage models of grids and clouds. Computational grids provide high throughput of queued batch jobs on a best-effort basis and enforce user priorities through dynamic job preemption, while IaaS clouds provide immediate feedback to user requests and make ahead-of-time guarantees about resource availability. We present a novel method to enable workload federation across IaaS clouds that overcomes this mismatch between grid and cloud usage models and improves system efficiency while also offering availability guarantees. We develop a new method for faster-than-real-time simulation of IaaS clouds to make predictions about system utilization and leverage this method to estimate the future availability of preemptible resources in the cloud. We then use these estimates to perform careful admission control and provide ahead-of-time bounds on the preemption probability of federated jobs executing on preemptible resources. Finally, we build an end-to-end prototype that addresses practical issues of workload federation and evaluate the prototype's efficacy using real-world traces from big data and compute-intensive production workloads.
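
    The dissertation's simulator is not described in enough detail in the abstract to reproduce; as a toy sketch of faster-than-real-time trace replay, the function below steps through VM requests event by event to estimate peak utilization and admission failures, from which idle, preemptible headroom could be inferred. The trace format and numbers are illustrative assumptions.

        import heapq

        def simulate_peak_usage(requests, capacity):
            """Replay a VM request trace faster than real time.

            requests -- list of (arrival_time, duration, cores), sorted by arrival_time
            capacity -- total cores available in the cloud

            Returns (peak_cores_used, rejected_requests). Idle headroom
            (capacity - peak) is what could be offered to preemptible federated jobs.
            """
            releases = []          # min-heap of (finish_time, cores)
            used = peak = rejected = 0
            for arrival, duration, cores in requests:
                # Free every VM that finished before this arrival.
                while releases and releases[0][0] <= arrival:
                    _, freed = heapq.heappop(releases)
                    used -= freed
                if used + cores > capacity:
                    rejected += 1  # request could not be admitted
                    continue
                used += cores
                peak = max(peak, used)
                heapq.heappush(releases, (arrival + duration, cores))
            return peak, rejected

        # Example with a tiny synthetic trace on a 16-core cloud
        trace = [(0, 10, 4), (1, 5, 8), (2, 8, 6), (6, 4, 6)]
        print(simulate_peak_usage(trace, capacity=16))  # -> (12, 1)

    Because the replay only touches discrete arrival and completion events rather than advancing wall-clock time, long traces can be evaluated far faster than real time, which is the property the admission-control estimates rely on.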