
    ClouNS - A Cloud-native Application Reference Model for Enterprise Architects

    The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations, which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research, and development processes for cloud-native applications and for vendor-lock-in-aware enterprise architecture engineering methodologies.

    Scalability Benchmarking of Cloud-Native Applications Applied to Event-Driven Microservices

    Cloud-native applications constitute a recent trend for designing large-scale software systems. This thesis introduces the Theodolite benchmarking method, allowing researchers and practitioners to conduct empirical scalability evaluations of cloud-native applications, their frameworks, configurations, and deployments. The benchmarking method is applied to event-driven microservices, a specific type of cloud-native application that employs distributed stream processing frameworks to scale with massive data volumes. Extensive experimental evaluations benchmark and compare the scalability of various stream processing frameworks under different configurations and deployments, including different public and private cloud environments. These experiments show that the presented benchmarking method provides statistically sound results in an adequate amount of time. In addition, three case studies demonstrate that the Theodolite benchmarking method can be applied to a wide range of applications beyond stream processing.
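    To make the idea of an empirical scalability evaluation concrete, the sketch below shows a naive benchmarking loop that, for each tested load intensity, searches for the smallest resource amount whose measured metrics still satisfy a service-level objective. It is only an illustration of the general approach, not Theodolite's actual implementation; run_experiment and slo_is_met are hypothetical hooks standing in for deploying the system under test and evaluating its metrics.

```python
# Minimal sketch of a scalability benchmarking loop in the spirit of the
# method described above; NOT the actual Theodolite code.
# run_experiment() and slo_is_met() are hypothetical placeholders for
# deploying the system under test and checking its service-level objective.
from typing import Callable, Dict, List


def scalability_profile(
    load_intensities: List[int],
    resource_amounts: List[int],
    run_experiment: Callable[[int, int], Dict[str, float]],
    slo_is_met: Callable[[Dict[str, float]], bool],
) -> Dict[int, int]:
    """For each load intensity, find the smallest resource amount whose
    measured metrics still satisfy the SLO (a simple demand mapping)."""
    demand: Dict[int, int] = {}
    for load in load_intensities:
        for resources in sorted(resource_amounts):
            metrics = run_experiment(load, resources)  # deploy, generate load, collect metrics
            if slo_is_met(metrics):
                demand[load] = resources
                break
    return demand


if __name__ == "__main__":
    # Toy stand-in: assume each resource instance handles 50k messages/s.
    fake_run = lambda load, res: {"utilization": load / (res * 50_000)}
    fake_slo = lambda m: m["utilization"] <= 1.0
    print(scalability_profile([50_000, 100_000, 200_000], [1, 2, 3, 4, 5], fake_run, fake_slo))
```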

    Self-managing cloud-native applications : design, implementation and experience

    Running applications in the cloud efficiently requires much more than deploying software in virtual machines. Cloud applications have to be continuously managed: (1) to adjust their resources to the incoming load and (2) to face transient failures by replicating and restarting components, providing resiliency on unreliable infrastructure. Continuous management monitors application and infrastructural metrics to provide automated and responsive reactions to failures (health management) and to changing environmental conditions (auto-scaling), minimizing human intervention. In current practice, management functionalities are provided as infrastructural or third-party services, and in both cases they are external to the application deployment. We claim that this approach has intrinsic limits: separating management functionalities from the application prevents them from naturally scaling with the application and requires additional management code and human intervention. Moreover, using infrastructure provider services for management functionalities results in vendor lock-in, effectively preventing cloud applications from adapting and running on the most effective cloud for the job. In this paper we discuss the main characteristics of cloud-native applications, propose a novel architecture that enables scalable and resilient self-managing applications in the cloud, and report on our experience in porting a legacy application to the cloud by applying cloud-native principles.
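    The abstract's central argument concerns where the health-management and auto-scaling loops live; the sketch below illustrates the kind of control loop meant, packaged so it could ship and scale with the application itself rather than as an external service. It is a minimal illustration under assumed hooks (get_metrics, restart, set_replicas), not the architecture proposed in the paper.

```python
# Minimal sketch of a health-management / auto-scaling control loop of the
# kind the paper argues should live inside the application deployment.
# Purely illustrative: get_metrics(), restart() and set_replicas() are
# hypothetical hooks, not an API from the paper.
import time


def management_loop(get_metrics, restart, set_replicas,
                    high_load=0.8, low_load=0.3, interval_s=10):
    replicas = 1
    while True:
        metrics = get_metrics()          # e.g. per-component liveness and overall load
        for component, healthy in metrics["health"].items():
            if not healthy:
                restart(component)       # health management: replace failed components
        load = metrics["load"]
        if load > high_load:             # auto-scaling: grow under high load
            replicas += 1
            set_replicas(replicas)
        elif load < low_load and replicas > 1:
            replicas -= 1                # ...and shrink when load drops
            set_replicas(replicas)
        time.sleep(interval_s)
```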

    CloudEval-YAML: A Practical Benchmark for Cloud Configuration Generation

    Amid the thriving ecosystem of cloud computing and the proliferation of Large Language Model (LLM)-based code generation tools, there is a lack of benchmarking for code generation in cloud-native applications. In response to this need, we present CloudEval-YAML, a practical benchmark for cloud configuration generation. CloudEval-YAML tackles the diversity challenge by focusing on YAML, the de facto standard of numerous cloud-native tools. We develop the CloudEval-YAML benchmark with practicality in mind: the dataset consists of hand-written problems with unit tests targeting practical scenarios. We further enhance the dataset to meet practical needs by rephrasing questions in a concise, abbreviated, and bilingual manner. The dataset consists of 1011 problems that took more than 1200 human hours to complete. To improve practicality during evaluation, we build a scalable evaluation platform for CloudEval-YAML that achieves a 20-fold speedup over a single machine. To the best of our knowledge, the CloudEval-YAML dataset is the first hand-written dataset targeting cloud-native applications. We present an in-depth evaluation of 12 LLMs, leading to a deeper understanding of the problems and LLMs, as well as effective methods to improve task performance and reduce cost.
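    As a rough illustration of how a YAML-generation problem with a unit test could be scored, the sketch below applies a generated manifest to a test cluster and then runs the problem's test script, treating a zero exit status as a pass. The file names and the pass/fail protocol are assumptions for illustration only; they are not CloudEval-YAML's actual evaluation harness.

```python
# Hedged sketch of scoring one problem in a YAML-generation benchmark:
# apply the generated manifest, then run the problem's unit-test script.
# The file names and protocol below are assumptions, not the benchmark's
# real harness.
import subprocess


def evaluate_generated_yaml(yaml_path: str, unit_test_script: str,
                            timeout_s: int = 120) -> bool:
    """Return True if the generated manifest applies cleanly and the
    problem-specific unit test exits with status 0."""
    apply = subprocess.run(
        ["kubectl", "apply", "-f", yaml_path],
        capture_output=True, timeout=timeout_s,
    )
    if apply.returncode != 0:
        return False  # invalid or non-applicable YAML fails immediately
    test = subprocess.run(
        ["bash", unit_test_script],
        capture_output=True, timeout=timeout_s,
    )
    return test.returncode == 0


# Example usage (assumes a reachable test cluster and these hypothetical files):
# passed = evaluate_generated_yaml("answer.yaml", "problem_042_test.sh")
```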

    Machine Learning Interference Modelling for Cloud-native Applications

    Modern cloud-native applications use microservice architecture patterns, where fine-grained software components are deployed in lightweight containers that run inside cloud virtual machines. To utilize resources more efficiently, containers belonging to different applications are often co-located on the same virtual machine. Co-location can result in software performance degradation due to interference among components competing for resources. In this thesis, we propose techniques to detect and model performance interference. To detect interference at runtime, we train Machine Learning (ML) models prior to deployment using interfering benchmarks and show that the models can be generalized to detect runtime interference from different types of applications. Experimental results in public clouds show that our approach outperforms existing interference detection techniques by 1.35%-66.69%. To quantify the interference impact, we further propose an ML interference quantification technique. The technique constructs ML models for response time prediction and can dynamically account for changing runtime conditions through the use of a sliding window method. Our technique outperforms baseline and competing techniques by 1.45%-92.04%. These contributions can be beneficial to software architects and software operators when designing, deploying, and operating cloud-native applications.
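    The sliding-window idea mentioned above can be illustrated with a small sketch: keep only the most recent observations and periodically refit a response-time model so predictions follow changing runtime conditions. The features and the choice of regressor below are assumptions for illustration, not the models or feature sets used in the thesis.

```python
# Minimal sketch of sliding-window response-time prediction: retain only the
# newest samples and refit the model on them, so predictions adapt to
# changing runtime conditions. Illustrative only; the thesis's features and
# model choices may differ.
from collections import deque

import numpy as np
from sklearn.ensemble import RandomForestRegressor


class SlidingWindowPredictor:
    def __init__(self, window_size: int = 500):
        self.window = deque(maxlen=window_size)   # drops oldest samples automatically
        self.model = RandomForestRegressor(n_estimators=50)

    def observe(self, features, response_time_ms):
        """Add one runtime sample, e.g. (cpu, memory, co-located load) -> latency."""
        self.window.append((features, response_time_ms))

    def refit(self):
        if not self.window:
            return
        X = np.array([f for f, _ in self.window])
        y = np.array([t for _, t in self.window])
        self.model.fit(X, y)

    def predict(self, features):
        return float(self.model.predict(np.array([features]))[0])
```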

    Model-based Resource Management for Fine-grained Services

    Brief Biography: Alim Ul Gias is currently a Research Associate at the Centre for Parallel Computing (CPC), University of Westminster. He completed his PhD at Imperial College London in 2022. Before starting his PhD, Alim was a lecturer at the Institute of Information Technology (IIT), University of Dhaka (DU), where he also completed his bachelor's and master's programs. His current research focuses on different Quality of Service (QoS) aspects of cloud-native applications, e.g., microservices. In particular, he aims to address the performance and resource management challenges concerning the microservices architecture.