
    ARM Wrestling with Big Data: A Study of Commodity ARM64 Server for Big Data Workloads

    ARM processors have dominated the mobile device market over the last decade due to their favorable computing-to-energy ratio. In this age of Cloud data centers and Big Data analytics, the focus is increasingly on power-efficient processing rather than just high-throughput computing. ARM's first commodity server-grade processor is the recent AMD A1100-series processor, based on the 64-bit ARM Cortex-A57 architecture. In this paper, we study the performance and energy efficiency of a server based on this ARM64 CPU, relative to a comparable server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads. Specifically, we study these for Intel's HiBench suite of web, query and machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed setup, for data sizes up to 20 GB files, 5M web pages and 500M tuples. Our results show that the ARM64 server's runtime performance is comparable to the x64 server for integer-based workloads like Sort and Hive queries, and only lags behind for floating-point-intensive benchmarks like PageRank when they do not exploit data parallelism adequately. We also see that the ARM64 server takes one-third the energy, and has an Energy Delay Product (EDP) that is 50-71% lower than the x64 server. These results hold promise for ARM64 data centers hosting Big Data workloads to reduce their operational costs, while opening up opportunities for further analysis. Comment: Accepted for publication in the Proceedings of the 24th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), 2017
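    As background for the reported figures, the Energy Delay Product combines energy and runtime into a single efficiency metric; the derivation below is a sketch based only on the numbers quoted in the abstract, not taken from the paper itself:

    ```latex
    % Energy Delay Product (EDP): lower is better.
    % E = energy consumed (J), D = runtime / delay (s).
    \[
      \mathrm{EDP} = E \times D
    \]
    % If the ARM64 server uses roughly one-third the energy of the x64
    % server, a 50% EDP reduction implies
    %   (E_{x64}/3) \cdot D_{arm} = 0.5 \, E_{x64} \cdot D_{x64}
    %   \Rightarrow D_{arm} = 1.5 \, D_{x64},
    % and a 71% reduction implies D_{arm} \approx 0.87 \, D_{x64},
    % both consistent with the "comparable runtime" result above.
    ```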

    Elasticity Measurement in CaaS Environments - Extending the Existing BUNGEE Elasticity Benchmark to AWS's Elastic Container Service

    Rapid elasticity and automatic scaling are core concepts of most current cloud computing systems. Elasticity describes how well and how fast cloud systems adapt to increases and decreases in workload. In parallel, software architectures are moving towards containerised microservices running on systems managed by container orchestration platforms. Cloud users who employ such container-based systems may want to compare the elasticity of different systems or system settings to ensure rapid elasticity and maintain service level objectives while avoiding over-provisioning. Previous research has established a variety of metrics to measure elasticity, and some existing benchmark tools measure elasticity in “Infrastructure as a Service” (IaaS) systems, but no research exists to date on measuring elasticity in systems based on containers and container orchestration. In this dissertation, an existing benchmark designed for IaaS systems, the BUNGEE benchmark developed at the University of Würzburg, was extended to be applicable to Amazon’s Elastic Container Service, a container-based cloud system. An experiment was conducted to test whether the extension of the BUNGEE benchmark described in this dissertation delivers reproducible results and is therefore valid. For validation, the crucial phase of the benchmark, the system analysis phase, was run 32 times, and statistical tests were used to establish whether the results vary by more than an acceptable level. Results indicate that there is some variability, but it does not exceed the acceptable level and is consistent with the performance variability encountered by other researchers in Amazon’s cloud systems. Therefore, it is concluded that the BUNGEE benchmark is likely applicable to container-based cloud systems. However, some parameters and configuration settings specific to container orchestration systems were identified that could impede reproducibility of results and should be considered in future experiments.
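    The validation procedure (repeated runs checked against an acceptable level of variability) can be illustrated with a minimal sketch; the coefficient-of-variation test and the 5% threshold are illustrative assumptions, not the dissertation's exact statistical method:

    ```python
    # Illustrative reproducibility check over repeated benchmark runs.
    # Assumes `runs` holds one scalar result (e.g., instances provisioned
    # at a given load step) per system-analysis run.
    import statistics

    def is_reproducible(runs: list[float], max_cv: float = 0.05) -> bool:
        """Accept the benchmark as reproducible if the coefficient of
        variation (stdev / mean) across runs stays below max_cv."""
        mean = statistics.mean(runs)
        cv = statistics.stdev(runs) / mean
        return cv < max_cv

    # Example: 32 runs of the system analysis phase (placeholder values).
    runs = [10.1, 9.8, 10.3, 10.0] * 8
    print(is_reproducible(runs))  # True if variability stays acceptable
    ```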

    Cross-layer multi-cloud real-time application QoS monitoring and benchmarking as-a-service framework

    Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers and data processing frameworks) platforms with features such as elasticity, pay-per-use, low upfront investment and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications hosted on single or multiple cloud platforms have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). The process of monitoring and benchmarking cloud applications remains a critical issue to be further studied and addressed. Current monitoring and benchmarking approaches do not provide a holistic view of performance QoS for distributed applications across cloud layers in multi-cloud environments. Furthermore, current monitoring frameworks are limited to monitoring tasks and do not incorporate benchmarking abilities; in other words, there is no unified framework that combines monitoring and benchmarking functionalities. Having both monitoring and benchmarking under one framework empowers the cloud user to gain more in-depth control and awareness of cloud services. The thesis identifies and discusses the major research dimensions and design issues related to developing techniques that can monitor and benchmark an application’s components across layers on multi-clouds. Furthermore, the thesis discusses to what extent such research dimensions and design issues are handled by current academic research papers as well as by existing commercial monitoring tools. Moreover, the thesis addresses an important research challenge: how to undertake cross-layer cloud monitoring and benchmarking in multi-cloud environments to provide essential information for effective QoS management of cloud applications. It proposes, develops, implements and validates CLAMBS: Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service Framework. The core contributions of this thesis are the CLAMBS framework and the underlying monitoring and benchmarking techniques, which are capable of: i) performing QoS monitoring of application components (e.g., database, web server, application server) that may be deployed across multiple cloud platforms (e.g., Amazon EC2 and Microsoft Azure); and ii) giving visibility into the QoS of individual application components, which is not supported by current monitoring and benchmarking frameworks. Experiments are conducted on real-world multi-cloud platforms to empirically evaluate the framework, and the results validate that CLAMBS can effectively monitor and benchmark applications running across layers on multi-clouds. The thesis presents implementation and evaluation details of the proposed CLAMBS framework. It demonstrates the feasibility and scalability of the framework in real-world environments by implementing a proof-of-concept prototype on multi-cloud platforms. Finally, it presents a model for analysing the communication overheads introduced by various components (e.g., agents and manager) of CLAMBS in multi-cloud environments.
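    The communication-overhead model mentioned at the end lends itself to a simple back-of-the-envelope sketch; the shape of the model and all parameter names below are assumptions for illustration, not the thesis's actual formulation:

    ```python
    # Back-of-the-envelope model of monitoring traffic between
    # CLAMBS-style agents and a central manager. Parameters illustrative.

    def monitoring_overhead_bps(num_agents: int,
                                metrics_per_report: int,
                                bytes_per_metric: int,
                                report_interval_s: float) -> float:
        """Aggregate agent-to-manager bandwidth in bytes per second."""
        payload = metrics_per_report * bytes_per_metric
        return num_agents * payload / report_interval_s

    # Example: 20 agents across two clouds, 15 metrics of ~64 bytes each,
    # reported every 10 seconds.
    print(monitoring_overhead_bps(20, 15, 64, 10.0))  # 1920.0 B/s
    ```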

    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed to realise the potential of next-generation cloud systems. Comment: Accepted to Future Generation Computer Systems, 07 September 2017

    Applying hybrid cloud systems to solve challenges posed by the big data problem

    The problem of Big Data poses challenges to traditional compute systems used for Machine Learning (ML) techniques that extract, analyze and visualize important information. New and creative solutions for processing data must be explored to overcome the hurdles imposed by Big Data as data generation grows. These solutions include introducing hybrid cloud systems to aid in the storage and processing of data. However, this introduces additional problems relating to data security, as data travels outside localized systems to rely on public storage and processing resources. Current research has relied primarily on data classification as a mechanism to address security concerns of data traversing external resources. This technique can be limited, as it assumes data is accurately classified and that an appropriate amount of data is cleared for external use. Leveraging a flexible key store for data encryption can help overcome these limitations by treating all data the same and mitigating risk depending on the public provider. This is shown by introducing a Data Key Store (DKS) and a public cloud storage offering into a Big Data analytics network topology. Findings show that introducing the Data Key Store into a Big Data analytics network topology successfully allows the topology to be extended to handle the large amounts of data associated with Big Data while preserving appropriate data security. Introducing a public cloud storage solution also provides additional benefits to the Big Data network topology: intentional time delay in data processing, efficient use of system resources when data ebbs occur, and the extension of traditional data storage resiliency techniques to Big Data storage.
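    The Data Key Store idea (encrypt all outbound data uniformly, with risk mitigated per provider) can be sketched as follows; the class name, provider names, and the choice of Fernet as the symmetric cipher are illustrative assumptions, not details from the study:

    ```python
    # Minimal sketch of a Data Key Store (DKS): encrypt everything the
    # same way, but keep a separate key per public cloud provider so
    # exposure can be managed provider by provider. Names illustrative.
    from cryptography.fernet import Fernet

    class DataKeyStore:
        def __init__(self, providers):
            # One symmetric key per public cloud provider.
            self._keys = {p: Fernet(Fernet.generate_key()) for p in providers}

        def encrypt(self, provider: str, data: bytes) -> bytes:
            return self._keys[provider].encrypt(data)

        def decrypt(self, provider: str, token: bytes) -> bytes:
            return self._keys[provider].decrypt(token)

    dks = DataKeyStore(["aws_s3", "azure_blob"])
    blob = dks.encrypt("aws_s3", b"sensitive analytics record")
    assert dks.decrypt("aws_s3", blob) == b"sensitive analytics record"
    ```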

    Performance Evaluation of Serverless Applications and Infrastructures

    Context. Cloud computing has become the de facto standard for deploying modern web-based software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new serverless services, such as Function-as-a-Service (FaaS), have led to an unprecedented diversity of cloud services with different performance characteristics. Measuring these characteristics is difficult in dynamic cloud environments due to performance variability in large-scale distributed systems with limited observability.
    Objective. This thesis aims to enable reproducible performance evaluation of serverless applications and their underlying cloud infrastructure.
    Method. A combination of literature review and empirical research established a consolidated view of serverless applications and their performance. New solutions were developed through engineering research and used to conduct performance benchmarking field experiments in cloud environments.
    Findings. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks, and discovered that most studies do not follow reproducibility principles for cloud experimentation. Characterizing 89 serverless applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. A novel trace-based serverless application benchmark shows that external service calls often dominate the median end-to-end latency and cause long tail latency. The latency breakdown analysis further identifies performance challenges of serverless applications, such as long delays through asynchronous function triggers, substantial runtime initialization for cold starts, increased performance variability under bursty workloads, and heavily provider-dependent performance characteristics. The evaluation of different cloud benchmarking methodologies showed that only selected micro-benchmarks are suitable for estimating application performance, that performance variability depends on the resource type, and that batch testing on the same instance with repetitions should be used for reliable performance testing.
    Conclusions. The insights of this thesis can guide practitioners in building performance-optimized serverless applications and researchers in reproducibly evaluating cloud performance using suitable execution methodologies and different benchmark types.
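    The latency-breakdown findings (external calls dominating the median, cold starts driving the tail) can be made concrete with a small analysis sketch; the trace fields and numbers below are invented for illustration, not data from the thesis's benchmark:

    ```python
    # Sketch of a latency breakdown over request traces. Each trace is
    # assumed to record per-component latencies in milliseconds.
    import statistics

    traces = [
        {"trigger": 12, "cold_start": 0,   "function": 35, "external_call": 180},
        {"trigger": 15, "cold_start": 850, "function": 40, "external_call": 210},
        {"trigger": 11, "cold_start": 0,   "function": 33, "external_call": 175},
    ]

    def percentile(values, p):
        """Nearest-rank percentile over a small sample."""
        ordered = sorted(values)
        idx = min(int(p / 100 * len(ordered)), len(ordered) - 1)
        return ordered[idx]

    end_to_end = [sum(t.values()) for t in traces]
    print("median e2e:", statistics.median(end_to_end))  # external call dominates
    print("p99 e2e:", percentile(end_to_end, 99))        # cold start drives the tail
    for part in traces[0]:
        print(f"median {part}:", statistics.median(t[part] for t in traces), "ms")
    ```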