1,376 research outputs found

    A study on performance measures for auto-scaling CPU-intensive containerized applications

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure to trigger auto-scaling actions aimed at guaranteeing QoS constraints. First, the correlation between absolute and relative usage measures, and how resource allocation decisions are influenced by each, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share of its allotted resources that each container uses. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
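    As a rough illustration of how the two kinds of measures can diverge and drive opposite scaling decisions, the Python sketch below computes an absolute usage value (relative to the host's capacity) and a relative one (relative to the container's quota) from the same sample and feeds both into a Kubernetes-HPA-style proportional rule. The sample values, target, and helper names are illustrative assumptions, not the measures or thresholds used in the paper.

```python
import math

def desired_replicas(current_replicas: int, current_usage: float, target_usage: float) -> int:
    """Kubernetes-HPA-style proportional rule: scale the replica count so that the
    observed usage per replica approaches the target (illustrative sketch)."""
    return max(1, math.ceil(current_replicas * current_usage / target_usage))

# Illustrative readings for one container (values are made up):
container_cpu_seconds = 1.6      # CPU time consumed by the container in the sampling window
window_seconds = 2.0             # length of the sampling window
host_cores = 4                   # cores available on the host
container_quota_cores = 0.5      # CPU share allotted to the container

# Absolute measure: fraction of the *host's* CPU capacity actually used.
absolute_usage = container_cpu_seconds / (window_seconds * host_cores)             # 0.20

# Relative measure: fraction of the container's *own* quota that is used.
relative_usage = container_cpu_seconds / (window_seconds * container_quota_cores)  # 1.60

print(desired_replicas(3, relative_usage, target_usage=0.8))   # 6 -> scale out
print(desired_replicas(3, absolute_usage, target_usage=0.8))   # 1 -> scale in
```

    With these numbers the relative measure calls for scaling out while the absolute one would allow scaling in, which is the kind of divergence between the two measures that the study analyzes.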

    ClouNS - A Cloud-native Application Reference Model for Enterprise Architects

    The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations, which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research, and development processes for cloud-native applications and for vendor-lock-in-aware enterprise architecture engineering methodologies.

    funcX: A Federated Function Serving Fabric for Science

    Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers. (Accepted to the ACM Symposium on High-Performance Parallel and Distributed Computing, HPDC 2020.)
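    The register/submit/result workflow that such a federated FaaS fabric exposes can be sketched in a few lines of Python. The toy class below is purely illustrative: it is not the funcX SDK, and each "endpoint" is modelled by a local thread pool standing in for a remote cluster, cloud, or supercomputer worker pool.

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

class ToyFaaS:
    """Toy sketch of a FaaS register/submit/result pattern.
    Not the funcX API; all names here are illustrative."""

    def __init__(self):
        self._functions = {}
        self._endpoints = {}

    def register_function(self, func):
        # Assign an opaque identifier, as a FaaS service would.
        func_id = str(uuid.uuid4())
        self._functions[func_id] = func
        return func_id

    def register_endpoint(self, name, workers=4):
        # A real endpoint would be remote; here it is a local worker pool.
        self._endpoints[name] = ThreadPoolExecutor(max_workers=workers)
        return name

    def submit(self, func_id, endpoint, *args, **kwargs):
        # In a federated service the function would be serialized and routed
        # to the chosen endpoint; here it simply runs in the local pool.
        return self._endpoints[endpoint].submit(self._functions[func_id], *args, **kwargs)

faas = ToyFaaS()
ep = faas.register_endpoint("cluster-a", workers=8)
fid = faas.register_function(lambda x: x * x)
future = faas.submit(fid, ep, 12)
print(future.result())  # 144
```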

    Elasticity Measurement in CaaS Environments - Extending the Existing BUNGEE Elasticity Benchmark to AWS's Elastic Container Service

    Rapid elasticity and automatic scaling are core concepts of most current cloud computing systems. Elasticity describes how well and how fast cloud systems adapt to increases and decreases in workload. In parallel, software architectures are moving towards containerised microservices running on systems managed by container orchestration platforms. Cloud users who employ such container-based systems may want to compare the elasticity of different systems or system settings to ensure rapid elasticity and maintain service level objectives while avoiding over-provisioning. Previous research has established a variety of metrics to measure elasticity, and some existing benchmark tools are designed to measure elasticity in "Infrastructure as a Service" (IaaS) systems, but no research exists to date on measuring elasticity in systems based on containers and container orchestration. In this dissertation, an existing benchmark designed for IaaS systems, the BUNGEE benchmark developed at the University of Würzburg, was extended to be applicable to Amazon's Elastic Container Service, a container-based cloud system. An experiment was conducted to test whether the extension of the BUNGEE benchmark described in this dissertation delivers reproducible results and is therefore valid. For validation, the crucial phase of the benchmark, the system analysis phase, was run 32 times, and statistical tests were used to establish whether the results vary by more than an acceptable level. Results indicate that there is some variability, but it does not exceed the acceptable level and is consistent with the amount of performance variability encountered by other researchers in Amazon's cloud systems. Therefore, it is concluded that the BUNGEE benchmark is likely applicable to container-based cloud systems. However, some parameters and configuration settings specific to container orchestration systems were identified that could impede reproducibility of results and should be considered in future experiments.
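    A simple way to judge the reproducibility of repeated benchmark runs, in the spirit of the validation described above, is to compare the spread of the measured results against an acceptance threshold. The sketch below applies a coefficient-of-variation check to hypothetical sustained-throughput numbers; the data, the metric, and the 5% threshold are illustrative assumptions, not the dissertation's actual statistical procedure.

```python
import statistics

# Hypothetical results of repeated system-analysis runs: the maximum request
# rate (requests/s) the system sustained with one container instance.
runs = [212.0, 208.5, 215.1, 210.3, 209.8, 213.4, 207.9, 211.6]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
cv = stdev / mean  # coefficient of variation as a simple variability measure

ACCEPTABLE_CV = 0.05  # illustrative threshold
print(f"mean={mean:.1f} req/s, CV={cv:.2%}")
print("reproducible" if cv <= ACCEPTABLE_CV else "too much variability")
```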

    A Self-managed Mesos Cluster for Data Analytics with QoS Guarantees

    This article describes the development of an automated configuration of a software platform for Data Analytics that supports horizontal and vertical elasticity to guarantee meeting a specific deadline. It specifies all the components, software dependencies and configurations required to build up the cluster, and analyses the deployment times of different instances, as well as the horizontal and vertical elasticity. The approach followed builds up self-managed hybrid clusters that can deal with different workloads and network requirements. The article describes the structure of the recipes, points to the public repositories where the code is available, and discusses the limitations of the approach as well as the results of several experiments.

    The work presented in this article has been partially funded by a research grant from the regional government of the Comunitat Valenciana (Spain), co-funded by the European Union ERDF funds (European Regional Development Fund) of the Comunitat Valenciana 2014-2020, with reference IDIFEDER/2018/032 (High-Performance Algorithms for the Modelling, Simulation and early Detection of diseases in Personalized Medicine). The authors would also like to thank the Spanish "Ministerio de Economía, Industria y Competitividad" for the project "BigCLOE" with reference number TIN2016-79951-R.

    López-Huguet, S.; Pérez-González, A. M.; Calatrava Arroyo, A.; Alfonso Laguna, C. D.; Caballer Fernández, M.; Moltó, G.; Blanquer Espert, I. (2019). A Self-managed Mesos Cluster for Data Analytics with QoS Guarantees. Future Generation Computer Systems, 96, 449-461. https://doi.org/10.1016/j.future.2019.02.047
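    A deadline-driven elasticity decision of the kind described above can be sketched as a small planning function: estimate the parallelism needed to finish the pending work before the deadline, grow each node's allocation first (vertical elasticity), and add nodes only when that is not enough (horizontal elasticity). The rule, names, and numbers below are illustrative assumptions, not the policy implemented in the article.

```python
import math

def plan_scaling(pending_tasks, seconds_per_task, deadline_s,
                 nodes, cores_per_node, max_cores_per_node):
    """Rough deadline-driven elasticity rule (illustrative sketch):
    prefer vertical scaling up to the per-node limit, then scale out."""
    # Cores required to finish the remaining work within the deadline,
    # assuming each task occupies one core.
    required_cores = math.ceil(pending_tasks * seconds_per_task / deadline_s)
    current_cores = nodes * cores_per_node

    if required_cores <= current_cores:
        return {"action": "none", "nodes": nodes, "cores_per_node": cores_per_node}

    # Vertical first: grow each node's allocation toward its maximum.
    cores_per_node = min(max_cores_per_node, math.ceil(required_cores / nodes))
    if nodes * cores_per_node >= required_cores:
        return {"action": "vertical", "nodes": nodes, "cores_per_node": cores_per_node}

    # Otherwise scale horizontally with maxed-out nodes.
    nodes = math.ceil(required_cores / max_cores_per_node)
    return {"action": "horizontal", "nodes": nodes, "cores_per_node": max_cores_per_node}

print(plan_scaling(pending_tasks=600, seconds_per_task=4.0, deadline_s=300,
                   nodes=2, cores_per_node=2, max_cores_per_node=4))
# -> {'action': 'vertical', 'nodes': 2, 'cores_per_node': 4}
```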