
    The OpenDC Microservice Simulator: Design, Implementation, and Experimentation

    Microservices is an architectural style that structures an application as a collection of loosely coupled services, making it easy for developers to build and scale their applications. The microservices approach differs from the traditional monolithic style of treating software development as a single entity. Microservice architecture is being adopted more and more widely. However, microservice systems can be complex due to dependencies between the microservices, resulting in unpredictable performance at large scale. Simulation is a cheap and fast way to investigate the performance of microservices in more detail. This study aims to build a microservice simulator for evaluating and comparing microservices-based applications. A microservices reference architecture is designed and used as the basis for the simulator. The simulator implementation uses statistical models to generate the workload. The compelling features added to the simulator include concurrent execution of microservices, configurable request depth, three load-balancing policies, and four request execution order policies. This paper contains two experiments to demonstrate the simulator's usage. The first experiment covers request execution order policies at the microservice instance. The second experiment compares load-balancing policies across microservice instances. Comment: Bachelor's thesis
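    To make the policy comparison in the abstract concrete, the sketch below shows, in Python, how two load-balancing policies for picking a microservice instance might look in a toy simulator. The class and policy names and the simple queue-length model are illustrative assumptions, not the OpenDC simulator's actual API.

```python
# Minimal sketch (not the OpenDC API): two illustrative load-balancing
# policies for choosing which microservice instance serves the next request.
import itertools
import random


class Instance:
    def __init__(self, name):
        self.name = name
        self.queue_len = 0          # outstanding requests on this instance

    def submit(self, service_time):
        self.queue_len += 1         # a real simulator would also schedule completion events


def round_robin(instances):
    """Cycle through instances regardless of their current load."""
    cycle = itertools.cycle(instances)
    return lambda: next(cycle)


def least_loaded(instances):
    """Pick the instance with the shortest outstanding queue."""
    return lambda: min(instances, key=lambda i: i.queue_len)


if __name__ == "__main__":
    instances = [Instance(f"ms-{i}") for i in range(3)]
    pick = least_loaded(instances)          # swap in round_robin(instances) to compare
    for _ in range(10):
        pick().submit(service_time=random.expovariate(1.0))
    print({i.name: i.queue_len for i in instances})
```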

    Adaptive microservice scaling for elastic applications


    Microservice Transition and its Granularity Problem: A Systematic Mapping Study

    Microservices have gained wide recognition and acceptance in software industries as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been highly motivated by the need for better alignment of technical design decisions with the improving value potentials of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and consensus on the principles and activities underlying the "micro-ing" of architectures. In this paper, we report on a systematic mapping study that consolidates the various views, approaches, and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and the technical activities underlying it. We term the transition and the technical activities leading to microservice architectures microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class entity. This study reviews the state of the art and practice related to reasoning about microservice granularity; it reviews the modelling approaches, aspects considered, guidelines, and processes used to reason about microservice granularity. This study identifies opportunities for future research and development related to reasoning about microservice granularity. Comment: 36 pages including references, 6 figures, and 3 tables

    Characterizing Service Level Objectives for Cloud Services: Motivation of Short-Term Cache Allocation Performance Modeling

    Get PDF
    Service level objectives (SLOs) stipulate performance goals for cloud applications, microservices, and infrastructure. SLOs are widely used, in part, because system managers can tailor goals to their products, companies, and workloads. Systems research intended to support strong SLOs should target realistic performance goals used by system managers in the field; evaluations conducted with uncommon SLO goals may not translate to real systems. Some textbooks discuss the structure of SLOs, but (1) they only sketch SLO goals and (2) they use outdated examples. We mined real SLOs published on the web, extracted their goals, and characterized them. Many web documents discuss SLOs loosely, but few provide details and reflect real settings. Our systematic literature review (SLR) prunes results and reduces bias by (1) modeling the expected SLO structure and (2) detecting and removing outliers. We collected 75 SLOs in which response time, query percentile, and reporting period were specified, and used these SLOs to confirm and refute common perceptions. For example, we found few SLOs with response-time guarantees below 10 ms for 90% or more of queries. This reality bolsters perceptions that single-digit SLOs face fundamental research challenges. This work was funded by NSF Grants 1749501 and 1350941.
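    The SLO structure the study mines has three parts: a response-time goal, a query percentile, and a reporting period. The sketch below, in Python, shows how such an SLO might be represented and how compliance over one reporting period could be checked; the dataclass and the compliance function are assumptions for illustration, not the paper's tooling.

```python
# Illustrative SLO record and compliance check, assuming the three fields
# named in the abstract: response time, query percentile, reporting period.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class SLO:
    response_time_ms: float   # e.g. 10 ms
    percentile: float         # e.g. 0.90 -> 90% of queries must meet the goal
    reporting_period: str     # e.g. "monthly"


def complies(slo: SLO, latencies_ms: Sequence[float]) -> bool:
    """True if the required fraction of queries met the response-time goal."""
    within = sum(1 for latency in latencies_ms if latency <= slo.response_time_ms)
    return within / len(latencies_ms) >= slo.percentile


if __name__ == "__main__":
    slo = SLO(response_time_ms=10.0, percentile=0.90, reporting_period="monthly")
    measured = [3.2, 7.8, 9.9, 12.4, 4.1, 8.8, 6.0, 15.2, 2.9, 5.5]
    print(complies(slo, measured))   # 8/10 = 80% within 10 ms -> False
```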

    Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices

    Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth Generation (5G) mobile networks. MEC provides distributed cloud computing capabilities and an information technology service environment for applications and services at the edges of mobile networks. This architectural modification serves to reduce congestion and latency and to improve the performance of edge-colocated applications and devices. In this paper, we demonstrate how reactive service migration can be orchestrated for low-power MEC-enabled Internet of Things (IoT) devices, using the open-source Kubernetes container orchestration system. Our demo is based on a traditional client-server system running from the user equipment (UE) over Long Term Evolution (LTE) to the MEC server. As the use-case scenario, we post-process live video received over web real-time communication (WebRTC). We then integrate Kubernetes orchestration with S1 handovers, demonstrating a MEC-based software-defined network (SDN) in which edge applications reactively follow the UE within the radio access network (RAN) to preserve low latency. The collected data are used to analyze the benefits of the low-power MEC-enabled IoT device scheme, in which the end-to-end (E2E) latency and power requirements of the UE are improved. We further discuss the challenges of implementing such schemes and future research directions.
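    One way to realize the reactive migration described above is to re-schedule the edge application's Kubernetes Deployment onto a node at the UE's new serving site after a handover. The sketch below uses the official Kubernetes Python client; the label key ("edge-site"), the deployment name, and the handover hook are hypothetical, and this is a minimal illustration rather than the demo's actual implementation.

```python
# Hedged sketch: after an S1 handover, move the edge application toward the
# UE's new serving site by patching its Deployment's nodeSelector.
from kubernetes import client, config


def migrate_edge_app(deployment: str, namespace: str, target_site: str) -> None:
    config.load_kube_config()        # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {"nodeSelector": {"edge-site": target_site}}  # assumed node label
            }
        }
    }
    # Changing the pod template triggers a rolling update, so the container is
    # recreated on a node at the UE's new edge site.
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)


# e.g. invoked from an (assumed) handover notification:
# migrate_edge_app("webrtc-postprocess", "mec", target_site="enb-2")
```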

    ClouNS - A Cloud-native Application Reference Model for Enterprise Architects

    The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations, which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used to codify cloud technologies. It can guide technology identification, classification, adoption, research, and development processes for cloud-native applications and for vendor-lock-in-aware enterprise architecture engineering methodologies.