4,379 research outputs found

    Elasticity Measurement in CaaS Environments - Extending the Existing BUNGEE Elasticity Benchmark to AWS's Elastic Container Service

    Rapid elasticity and automatic scaling are core concepts of most current cloud computing systems. Elasticity describes how well and how fast cloud systems adapt to increases and decreases in workload. In parallel, software architectures are moving towards containerised microservices running on systems managed by container orchestration platforms. Cloud users who employ such container-based systems may want to compare the elasticity of different systems or system settings to ensure rapid elasticity and maintain service level objectives while avoiding over-provisioning. Previous research has established a variety of metrics to measure elasticity, and some existing benchmark tools measure elasticity in “Infrastructure as a Service” (IaaS) systems, but no research exists to date on measuring elasticity in systems based on containers and container orchestration. In this dissertation, an existing benchmark designed for IaaS systems, the BUNGEE benchmark developed at the University of Würzburg, was extended to be applicable to Amazon’s Elastic Container Service, a container-based cloud system. An experiment was conducted to test whether the extended benchmark delivers reproducible results and is therefore valid. For validation, the crucial phase of the benchmark, the system analysis phase, was run 32 times, and statistical tests were used to establish whether the results vary by more than an acceptable level. The results indicate some variability, but it does not exceed the acceptable level and is consistent with the performance variability other researchers have encountered in Amazon’s cloud systems. It is therefore concluded that the BUNGEE benchmark is likely applicable to container-based cloud systems. However, some parameters and configuration settings specific to container orchestration systems were identified that could impede reproducibility of results and should be considered in future experiments.
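
    The reproducibility check described in this abstract can be made concrete with a short sketch. The following Python snippet is not part of BUNGEE; it assumes the per-run results of the system analysis phase are available as a list of measured capacity values and uses the coefficient of variation as a simple stand-in for the statistical tests, with a hypothetical 5% threshold:

```python
# Sketch of a reproducibility check over repeated benchmark runs.
# The data, the 5% threshold, and the function name are illustrative
# assumptions, not taken from the BUNGEE benchmark itself.
import statistics

def within_acceptable_variability(run_results, max_cv=0.05):
    """True if the coefficient of variation (stdev / mean) of the
    per-run measurements does not exceed the acceptable level max_cv."""
    cv = statistics.stdev(run_results) / statistics.mean(run_results)
    return cv <= max_cv

# Hypothetical capacity measurements (requests/s) from repeated
# system-analysis runs.
runs = [412.0, 408.5, 415.2, 409.9, 411.3, 413.7]
print(within_acceptable_variability(runs))  # True: the runs agree closely
```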

    CyberGuarder: a virtualization security assurance architecture for green cloud computing

    Cloud Computing, Green Computing, Virtualization, Virtual Security Appliance, Security Isolation

    Performance Evaluation Metrics for Cloud, Fog and Edge Computing: A Review, Taxonomy, Benchmarks and Standards for Future Research

    Optimization is an inseparable part of Cloud computing, particularly with the emergence of the Fog and Edge paradigms. Not only do these emerging paradigms demand re-evaluating cloud-native optimizations and exploring Fog- and Edge-based solutions, but the optimization objectives also require a significant shift from considering latency alone to including energy, security, reliability and cost. Hence, optimization objectives have become diverse, and newly emerging Internet of Things (IoT)-specific objectives must also come into play. This is critical, as an incorrect selection of metrics can mislead developers about real performance. For instance, a latency-aware auto-scaler must be evaluated through latency-related metrics such as response time or tail latency; otherwise the resource manager is not properly evaluated, even if it reduces cost. Given such challenges, researchers and developers struggle to identify and use the right metrics to evaluate the performance of optimization techniques such as task scheduling, resource provisioning, resource allocation, resource scheduling and resource execution. This is challenging due to (1) the novel and multi-layered computing paradigms, e.g., Cloud, Fog and Edge; (2) IoT applications with different requirements, e.g., latency or privacy; and (3) the lack of a benchmark and standard for evaluation metrics. In this paper, by exploring the literature, (1) we present a taxonomy of the various real-world metrics used to evaluate the performance of cloud, fog and edge computing; (2) we survey the literature to identify common metrics and their applications; and (3) we outline open issues for future research. This comprehensive benchmark study can significantly assist developers and researchers in evaluating performance under realistic metrics and standards, ensuring their objectives are achieved in production environments.
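
    To make the point about metric selection concrete, the sketch below computes two of the latency-related metrics named in the abstract, mean response time and tail latency, from raw latency samples. The sample values and the choice of the 99th percentile are assumptions for illustration only:

```python
# Sketch: deriving latency-related metrics from raw measurements.
# The sample data and the p99 choice are illustrative assumptions.
import statistics

def tail_latency(samples_ms, percentile=99):
    """Return the given percentile of the latency samples in ms."""
    # quantiles(..., n=100) yields the 1st..99th percentile cut points.
    return statistics.quantiles(samples_ms, n=100)[percentile - 1]

# Hypothetical per-request response times in milliseconds; note the
# single outlier that the mean hides but the tail metric exposes.
latencies_ms = [12.1, 13.4, 11.8, 95.0, 12.9, 14.2, 13.1, 12.5, 13.8, 12.2]
print(f"mean response time: {statistics.mean(latencies_ms):.1f} ms")
print(f"p99 tail latency:   {tail_latency(latencies_ms):.1f} ms")
```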

    Proactive VNF Provisioning with Multi-timescale Cloud Resources: Fusing Online Learning and Online Optimization

    postprint