428 research outputs found

    A study on performance measures for auto-scaling CPU-intensive containerized applications

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for triggering auto-scaling actions that aim to guarantee QoS constraints. First, the correlation between absolute and relative usage measures, and how resource-allocation decisions are influenced by them, is analyzed under different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for each container's share of the resources used. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
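    The scaling rule the abstract refers to can be sketched as follows. This is a minimal, hypothetical illustration of a Kubernetes-HPA-style target-tracking decision, where utilization is a relative measure (usage divided by the container's CPU request); the paper's variant substitutes absolute usage measures. Function and parameter names are assumptions, not the paper's implementation.

    ```python
    import math

    def desired_replicas(current_replicas, cpu_usage_cores,
                         cpu_request_cores, target_utilization):
        """HPA-style rule: scale replicas by the ratio of observed to
        target utilization, where utilization is usage relative to the
        per-container CPU request (a relative measure)."""
        utilization = cpu_usage_cores / cpu_request_cores
        return max(1, math.ceil(current_replicas * utilization / target_utilization))
    ```

    For example, three replicas each using 0.9 of a requested 1.0 core against a 60% target yields ceil(3 × 0.9 / 0.6) = 5 replicas; swapping the relative `utilization` for an absolute host-level measure is what changes the decision in the scenarios the paper studies.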

    Performance Analysis of Microservices Behavior in Cloud vs Containerized Domain based on CPU Utilization

    Enterprise application development is rapidly moving towards a microservices-based approach. Microservices make application deployment more reliable and responsive, depending on the architecture and the way they are deployed. Still, the performance of microservices differs across environments, depending on the resources provided by the respective cloud and on backend services such as auto-scaling, load balancing, and the available monitoring parameters, so it is strenuous to identify the best-performing environment. Scaling and monitoring microservice-based applications is quick compared to monolithic applications [1]. In this paper, we deployed microservice applications in cloud and containerized environments to analyze their CPU utilization under multiple network input requests. Monolithic applications are tightly coupled, while microservices are loosely coupled, which helps the API gateway interact easily with each service module. With reference to the monitoring parameters, CPU utilization is 23 percent in the cloud environment. Additionally, we deployed the equivalent microservice in a containerized environment with extended resources, reducing CPU utilization to 17 percent. Furthermore, we have shown the performance of the application under "Network In" and "Network Out" requests.
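    Utilization figures like the 23% and 17% above are typically obtained by sampling the container's cumulative CPU time and dividing the delta by elapsed wall-clock time. The sketch below is a hypothetical illustration of that measurement against the cgroup v2 accounting file, not the tooling used in the paper; the path and function names are assumptions.

    ```python
    import time

    # cgroup v2 CPU accounting file as seen from inside a container
    CPU_STAT = "/sys/fs/cgroup/cpu.stat"

    def read_usage_usec(path=CPU_STAT):
        """Return cumulative CPU time consumed by the cgroup, in microseconds."""
        with open(path) as f:
            for line in f:
                key, value = line.split()
                if key == "usage_usec":
                    return int(value)
        raise RuntimeError("usage_usec not found in " + path)

    def cpu_utilization(interval_s=1.0, path=CPU_STAT):
        """Fraction of one CPU used over the interval (1.0 == one core busy)."""
        before = read_usage_usec(path)
        time.sleep(interval_s)
        after = read_usage_usec(path)
        return (after - before) / (interval_s * 1e6)
    ```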

    ClouNS - A Cloud-native Application Reference Model for Enterprise Architects

    The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations, which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research, and development processes for cloud-native applications and for vendor-lock-in-aware enterprise architecture engineering methodologies.

    ANALYZING THE SYSTEM FEATURES, USABILITY, AND PERFORMANCE OF A CONTAINERIZED APPLICATION ON CLOUD COMPUTING SYSTEMS

    This study analyzed the system features, usability, and performance of three serverless cloud computing platforms: Google Cloud's Cloud Run, Amazon Web Service's App Runner, and Microsoft Azure's Container Apps. The analysis was conducted on a containerized mobile application designed to track real-time bus locations for San Antonio public buses on specific routes and provide estimated arrival times for selected bus stops. The study evaluated various system-related features, including service configuration, pricing, and memory & CPU capacity, along with performance metrics such as container latency, Distance Matrix API response time, and CPU utilization for each service. Usability was also evaluated by assessing the quality of documentation, the learning curve for beginner users, and a scale-to-zero factor. The results of the analysis revealed that Google's Cloud Run demonstrated better performance and usability when compared to AWS's App Runner and Microsoft Azure's Container Apps. Cloud Run exhibited lower latency and faster response time for distance matrix queries. These findings provide valuable insights for selecting an appropriate serverless cloud service for similar containerized web applications.
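    A latency comparison like the one above can be reduced to timing repeated requests per endpoint and comparing medians. The harness below is a minimal sketch of that methodology, with the request injected as a callable so the same code can target a Cloud Run, App Runner, or Container Apps URL; it is an assumed setup, not the study's actual benchmark.

    ```python
    import statistics
    import time

    def median_latency_ms(request_fn, samples=20):
        """Time `samples` calls of request_fn; return the median in ms."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            request_fn()  # e.g. lambda: urllib.request.urlopen(url).read()
            timings.append((time.perf_counter() - start) * 1000)
        return statistics.median(timings)
    ```

    The median (rather than the mean) damps cold-start outliers, which matters for scale-to-zero services where the first request after idling pays a container start-up cost.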

    Adaptive microservice scaling for elastic applications


    Amazon Web Services (AWS) Cloud Platform for Satellite Data Processing

    As part of NOAA's Environmental Satellite Processing and Distribution System (ESPDS) program, Solers created a cloud platform for satellite data management and processing. It consists of Enterprise Data Management (EDM) and Enterprise Product Generation (EPG) services, hosted in an Amazon Web Services (AWS) cloud environment, leveraging AWS cloud services and existing NOAA product generation algorithms. While this cloud platform was developed in the context of NOAA/NESDIS satellite data management and processing requirements, it also has tremendous applicability and cost effectiveness for small satellite data management and processing needs. An attractive method for ingesting data from small satellites is the AWS Ground Station. This can help small satellite operators save on costs of real estate, hardware/software, and labor to deploy and operate their own ground stations. The data is ingested via AWS-managed antennas and made available for further processing in the AWS cloud using COTS RF/baseband over IP transport services. Once this data has been ingested and made available, the flexible REST APIs of the EDM and EPG services in the AWS cloud make it easy and cost-effective for small satellite operators to catalog and process the data into consumable products, and to make them available for access by end users.

    Burst-aware predictive autoscaling for containerized microservices

    Autoscaling methods are used for cloud-hosted applications to dynamically scale the allocated resources for guaranteeing Quality-of-Service (QoS). Public-facing applications serve dynamic workloads, which contain bursts and pose challenges for autoscaling methods to ensure application performance. Existing state-of-the-art autoscaling methods are burst-oblivious in determining and provisioning the appropriate resources. For dynamic workloads, it is hard to detect and handle bursts online while maintaining application performance. In this article, we propose a novel burst-aware autoscaling method which detects bursts in dynamic workloads using workload forecasting, resource prediction, and scaling decision making, while minimizing response-time service-level objective (SLO) violations. We evaluated our approach through a trace-driven simulation, using multiple synthetic and realistic bursty workloads for containerized microservices, improving performance when compared against existing state-of-the-art autoscaling methods. The experiments show an increase of 1.09× in total processed requests, a 5.17× reduction in SLO violations, and a cost increase of 0.767× compared to the baseline method. This work was partially supported by the European Research Council (ERC) under the EU Horizon 2020 programme (GA 639595), the Spanish Ministry of Economy, Industry and Competitiveness (TIN2015-65316-P and IJCI2016-27485), and the Generalitat de Catalunya (2014-SGR-1051).
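    The forecast-then-decide loop described in the abstract can be sketched as a simple pipeline: flag a burst when the forecast next sample deviates sharply from recent history, then over-provision while the burst lasts. The threshold rule, headroom factor, and all names below are illustrative assumptions, not the paper's actual predictor.

    ```python
    import math
    import statistics

    def detect_burst(history, forecast, k=3.0):
        """Flag a burst when the forecast exceeds mean + k * stdev of history."""
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # guard constant history
        return forecast > mean + k * stdev

    def plan_replicas(history, forecast, per_replica_rps, headroom=1.5):
        """Replica count for the forecast load; add headroom under a burst."""
        demand = forecast * (headroom if detect_burst(history, forecast) else 1.0)
        return max(1, math.ceil(demand / per_replica_rps))
    ```

    With a calm history around 100 req/s and 50 req/s per replica, a forecast of 100 yields 2 replicas, while a bursty forecast of 300 trips the detector and provisions 9; the extra headroom is what trades a small cost increase for fewer SLO violations.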