
    Empirical Evaluation of Cloud IAAS Platforms using System-level Benchmarks

    Cloud Computing is an emerging paradigm in which scalable IT-enabled capabilities are delivered ‘as-a-service’ using Internet technology. The Cloud industry has adopted three basic computing service models based on the level of software abstraction: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Infrastructure-as-a-Service allows customers to outsource fundamental computing resources such as servers, networking, and storage, as well as related services, while the provider owns and manages the entire infrastructure; customers pay only for the resources they consume. In a fast-growing IaaS market with multiple cloud platforms offering IaaS services, selecting the best IaaS platform is challenging for users. It is therefore important for organizations to evaluate and compare the performance of different IaaS cloud platforms in order to minimize cost and maximize performance. Using a vendor-neutral approach, this research focused on four of the top IaaS cloud platforms: Amazon EC2, Microsoft Azure, Google Compute Engine, and Rackspace cloud services. It compared the performance of these platforms using system-level parameters covering server, file I/O, and network. System-level benchmarking provides an objective comparison of the IaaS cloud platforms from a performance perspective. Unixbench, Dbench, and Iperf are the system-level benchmarks chosen to test the performance of the server, file I/O, and network, respectively. To capture performance variability, the benchmark tests were performed at different time periods on weekdays and weekends. Each IaaS platform's performance was also tested using various parameters. The benchmark tests conducted on different virtual machine (VM) configurations should help cloud users select the best IaaS platform for their needs and, based on their applications' requirements, give them a clearer picture of which VM configuration to choose. In addition to the performance evaluation, the price-per-performance value of all the IaaS cloud platforms was also examined.
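    As an illustration of the price-per-performance idea mentioned above, the sketch below (not taken from the thesis) shows one way such a value could be computed from a benchmark score and an hourly price; the VM names, Unixbench index scores, and prices are hypothetical placeholders.

```python
# Hypothetical price-per-performance comparison for IaaS VM types.
# All scores and prices below are placeholders, not measured values.

# Unixbench system index (higher is better) and on-demand hourly price (USD).
vms = {
    "provider_a.small": {"unixbench_index": 1450.0, "price_per_hour": 0.046},
    "provider_b.small": {"unixbench_index": 1320.0, "price_per_hour": 0.041},
    "provider_c.small": {"unixbench_index": 1510.0, "price_per_hour": 0.050},
}

def price_per_performance(price_per_hour: float, score: float) -> float:
    """Cost of one benchmark 'point' per hour; lower is better."""
    return price_per_hour / score

# Rank the VM types from best to worst price-per-performance.
for name, vm in sorted(
    vms.items(),
    key=lambda kv: price_per_performance(kv[1]["price_per_hour"], kv[1]["unixbench_index"]),
):
    ppp = price_per_performance(vm["price_per_hour"], vm["unixbench_index"])
    print(f"{name}: {ppp * 1000:.3f} USD per 1000 index points per hour")
```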

    The state of SQL-on-Hadoop in the cloud

    Managed Hadoop in the cloud, especially SQL-on-Hadoop, has been gaining attention recently. On Platform-as-a-Service (PaaS), analytical services like Hive and Spark come preconfigured for general-purpose use and ready to run, giving companies a quick entry point and on-demand deployment of ready SQL-like solutions for their big data needs. This study evaluates cloud services from an end-user perspective, comparing providers including Microsoft Azure, Amazon Web Services, Google Cloud, and Rackspace. The study focuses on the performance, readiness, scalability, and cost-effectiveness of the different solutions at entry/test-level cluster sizes. Results are based on over 15,000 Hive queries derived from the industry-standard TPC-H benchmark. The study is framed within the ALOJA research project, which features an open source benchmarking and analysis platform that has recently been extended to support SQL-on-Hadoop engines. The ALOJA project aims to lower the total cost of ownership (TCO) of big data deployments and study their performance characteristics for optimization. The study benchmarks cloud providers across a diverse range of instance types and uses input data scales from 1GB to 1TB in order to survey the popular entry-level PaaS SQL-on-Hadoop solutions, thereby establishing a common results base upon which subsequent research can be carried out by the project. Initial results already show the main performance trends relating to hardware and software configuration and pricing, as well as the similarities and architectural differences of the evaluated PaaS solutions. Whereas some providers focus on decoupling storage and computing resources while offering network-based elastic storage, others choose to keep the local processing model from Hadoop for high performance, at the cost of reduced flexibility. Results also show the importance of application-level tuning and how keeping up-to-date hardware and software stacks can influence performance even more than replicating the on-premises model in the cloud. This work is partially supported by the Microsoft Azure for Research program, the European Research Council (ERC) under the EU's Horizon 2020 programme (GA 639595), the Spanish Ministry of Education (TIN2015-65316-P), and the Generalitat de Catalunya (2014-SGR-1051).
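    For readers unfamiliar with how such Hive query runs are driven, the sketch below shows one possible way to time a single TPC-H-derived query against HiveServer2 using the PyHive client; the host, port, database name, and simplified Q1-style query are assumptions for illustration and do not reproduce the ALOJA query set or cluster configuration.

```python
# Minimal sketch: timing one TPC-H-derived Hive query end to end.
# Host, port, database, and table layout are illustrative assumptions.
import time
from pyhive import hive  # requires the PyHive package and a reachable HiveServer2

# Simplified variant of TPC-H Q1 (pricing summary report) over the lineitem table.
QUERY = """
SELECT l_returnflag,
       l_linestatus,
       SUM(l_quantity)                          AS sum_qty,
       SUM(l_extendedprice * (1 - l_discount))  AS sum_disc_price,
       AVG(l_extendedprice)                     AS avg_price,
       COUNT(*)                                 AS count_order
FROM lineitem
WHERE l_shipdate <= '1998-09-02'
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus
"""

conn = hive.connect(host="hiveserver2.example.org", port=10000, database="tpch_1gb")
cursor = conn.cursor()

start = time.monotonic()
cursor.execute(QUERY)        # executes on the cluster; may take seconds to minutes
rows = cursor.fetchall()
elapsed = time.monotonic() - start

print(f"returned {len(rows)} groups in {elapsed:.1f} s")
```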

    Performance evaluation of a distributed storage service in community network clouds

    Community networks are self-organized and decentralized communication networks built and operated by citizens, for citizens. The consolidation of today's cloud technologies now offers community networks the possibility of collectively developing community clouds, building upon user-provided networks and extending them toward cloud services. Cloud storage, and in particular secure and reliable cloud storage, could become a key community cloud service enabling end-user applications. In this paper, we evaluate, in a real deployment, the performance of the Tahoe least-authority file system (Tahoe-LAFS), a decentralized storage system with provider-independent security that guarantees privacy to its users. We evaluate how Tahoe-LAFS performs when deployed over distributed community cloud nodes in a real community network such as Guifi.net. Furthermore, we evaluate Tahoe-LAFS on the Microsoft Azure commercial cloud platform, to compare and understand the impact of homogeneous network and hardware resources on its performance. We observed that the write operation of Tahoe-LAFS achieved similar performance on the community network cloud and on the commercial cloud. However, the read operation achieved better performance in the Azure cloud, where reading from multiple Tahoe-LAFS nodes benefited from the homogeneity of the network and nodes. Our results suggest that Tahoe-LAFS can run on community network clouds with performance suitable for the needed end-user experience.
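    As a rough illustration of the write/read measurements discussed above, the sketch below times one upload and one download through a Tahoe-LAFS web gateway; the gateway URL, payload size, and measurement approach are assumptions for illustration and do not reproduce the paper's Guifi.net or Azure deployments.

```python
# Minimal sketch: timing one write and one read through a Tahoe-LAFS web gateway.
# Gateway URL and payload size are illustrative assumptions.
import os
import time
import requests

GATEWAY = "http://127.0.0.1:3456"       # assumed local Tahoe-LAFS gateway
PAYLOAD = os.urandom(4 * 1024 * 1024)   # 4 MiB of random data

# Write: uploading via PUT /uri returns a capability string for the new file.
start = time.monotonic()
resp = requests.put(f"{GATEWAY}/uri", data=PAYLOAD)
resp.raise_for_status()
read_cap = resp.text.strip()
write_s = time.monotonic() - start

# Read: fetch the file back through the gateway using its capability.
start = time.monotonic()
data = requests.get(f"{GATEWAY}/uri/{read_cap}").content
read_s = time.monotonic() - start

assert data == PAYLOAD
print(f"write: {len(PAYLOAD) / write_s / 2**20:.2f} MiB/s, "
      f"read: {len(data) / read_s / 2**20:.2f} MiB/s")
```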

    Achieving Reproducibility in Cloud Benchmarking: A Focus on FaaS Services

    The cloud computing industry has witnessed rapid growth in recent years, providing businesses with an opportunity to scale their operations dynamically. With the emergence of multiple cloud providers, it has become increasingly challenging to determine which provider offers the most scalable services for a particular workload. This master thesis aims to compare the scalability of three major cloud providers: Amazon Web Services (AWS), Google Cloud, and Microsoft Azure. The study focuses on benchmarking the scalability of their compute, storage, and database services. To achieve this, a set of well-defined benchmarks will be used to evaluate the performance of each provider. The benchmarks will be designed to simulate a range of workloads, from small to large scale, to assess how each provider's services perform under different load conditions. The results will be analyzed and compared to identify the strengths and weaknesses of each provider's services. This study will provide valuable insights into which cloud provider offers the most scalable services, and will help businesses make informed decisions when choosing a cloud provider for their specific needs. The findings of this study will contribute to the ongoing discussion on the performance of cloud services, and will offer guidance to businesses on selecting the most appropriate cloud provider to meet their scalability requirements.
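    As a concrete example of the kind of load-driven scalability benchmark described above, the sketch below sweeps a few concurrency levels against a single HTTP endpoint (for instance a deployed FaaS function) and reports latency percentiles; the endpoint URL, concurrency levels, and request counts are hypothetical and not the thesis' actual benchmark design.

```python
# Minimal sketch: closed-loop load benchmark at increasing concurrency levels.
# Endpoint, concurrency levels, and request counts are illustrative placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "https://example.com/api/benchmark"   # hypothetical function URL
LEVELS = [1, 4, 16, 64]                          # small- to large-scale workloads
REQUESTS_PER_LEVEL = 200

def timed_call(_):
    """Issue one request and return its end-to-end latency in seconds."""
    start = time.monotonic()
    with urlopen(ENDPOINT, timeout=30) as resp:
        resp.read()
    return time.monotonic() - start

for concurrency in LEVELS:
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(REQUESTS_PER_LEVEL)))
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    print(f"concurrency={concurrency:3d}  "
          f"median={statistics.median(latencies) * 1000:.0f} ms  "
          f"p95={p95 * 1000:.0f} ms")
```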

    Fuzzy Self-Learning Controllers for Elasticity Management in Dynamic Cloud Architectures

    Cloud controllers support the operation and quality management of dynamic cloud architectures by automatically scaling compute resources to meet performance guarantees and minimize resource costs. Existing cloud controllers often resort to scaling strategies that are codified as a set of architecture adaptation rules. However, for a cloud provider, deployed application architectures are black boxes, making it difficult to define optimal or pre-emptive adaptation rules at design time. Thus, the burden of making adaptation decisions is often delegated to the cloud application. We propose the dynamic learning of adaptation rules for deployed application architectures in the cloud. We introduce FQL4KE, a self-learning fuzzy controller that learns and modifies fuzzy rules at runtime. The benefit is that we do not have to rely solely on precise design-time knowledge, which may be difficult to acquire. FQL4KE empowers users to configure cloud controllers by simply adjusting weights representing priorities for architecture quality instead of defining complex rules. FQL4KE has been experimentally validated using the cloud application framework ElasticBench in Azure and OpenStack. The experimental results demonstrate that FQL4KE outperforms both a fuzzy controller without learning and the native Azure auto-scaling.
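    To make the fuzzy Q-learning idea behind such controllers more concrete, the toy sketch below (not the FQL4KE implementation) fuzzifies workload and response time into coarse states, picks a scaling action epsilon-greedily from a Q-table, and updates the table from a simple reward that penalizes SLO violations and VM cost; all thresholds and parameters are made up for illustration.

```python
# Toy sketch of fuzzy Q-learning for scaling decisions (not the FQL4KE code).
# States are coarse fuzzy labels of workload and response time; actions add or
# remove VMs; the reward trades an SLO violation penalty against VM cost.
import random
from collections import defaultdict

ACTIONS = [-1, 0, +1]            # remove a VM, do nothing, add a VM
ALPHA, GAMMA, EPSILON = 0.5, 0.8, 0.1
Q = defaultdict(float)           # Q[(state, action)] -> estimated value

def fuzzify(workload_rps: float, resp_time_ms: float) -> tuple:
    """Map raw metrics to coarse linguistic labels (a crude stand-in for
    real membership functions)."""
    load = "low" if workload_rps < 100 else "medium" if workload_rps < 500 else "high"
    rt = "ok" if resp_time_ms < 200 else "slow"
    return (load, rt)

def choose_action(state: tuple) -> int:
    if random.random() < EPSILON:                      # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative control step with made-up observations.
s = fuzzify(workload_rps=650, resp_time_ms=320)
a = choose_action(s)
reward = -1.0 if s[1] == "slow" else 0.0              # penalize SLO violations
reward -= 0.1 * max(a, 0)                             # small cost for adding a VM
s_next = fuzzify(workload_rps=650, resp_time_ms=180)  # metrics after scaling
update(s, a, reward, s_next)
print("chosen action:", a, " updated Q:", Q[(s, a)])
```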

    Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges

    In only a decade, cloud computing has emerged from the pursuit of service-driven information and communication technology (ICT) to become a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, permit the cost-benefit analysis of cloud-based systems, and enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new ones, possibly leading to the re-design of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested; (ii) performance isolation between the tenants of shared cloud systems and the resulting performance variability; (iii) availability of cloud services and systems; and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state of the art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation for upcoming industry-standard cloud benchmarks.
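    As a small, self-contained example of the classic throughput and latency metrics that the survey takes as its starting point, the sketch below computes them from a made-up request trace; the paper's proposed elasticity, isolation, availability, and operational-risk metrics are not reproduced here.

```python
# Minimal sketch: classic performance metrics from a request trace.
# The trace values are made up; real traces would come from benchmark logs.
import statistics

# (start_time_s, duration_s) for each completed request in a measurement window.
trace = [(0.0, 0.120), (0.1, 0.090), (0.2, 0.310), (0.4, 0.105),
         (0.5, 0.098), (0.7, 0.450), (0.9, 0.110), (1.1, 0.095)]

window = max(start + dur for start, dur in trace) - min(start for start, _ in trace)
latencies = sorted(dur for _, dur in trace)

throughput = len(trace) / window                       # completed requests per second
mean_latency = statistics.mean(latencies)
p95_latency = latencies[max(0, int(0.95 * len(latencies)) - 1)]

print(f"throughput: {throughput:.2f} req/s")
print(f"latency: mean {mean_latency * 1000:.0f} ms, p95 {p95_latency * 1000:.0f} ms")
```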

    On a Catalogue of Metrics for Evaluating Commercial Cloud Services

    Given the continually increasing number of commercial Cloud services in the market, evaluating different services plays a significant role in cost-benefit analysis and decision making for adopting Cloud Computing. In particular, employing suitable metrics is essential in evaluation implementations. However, to the best of our knowledge, there is no systematic discussion of metrics for evaluating Cloud services. Using the method of Systematic Literature Review (SLR), we have collected the de facto metrics adopted in existing Cloud services evaluation work. The collected metrics were arranged according to the different Cloud service features to be evaluated, which essentially constitutes an evaluation metrics catalogue, as shown in this paper. This metrics catalogue can be used to facilitate future practice and research in the area of Cloud services evaluation. Moreover, considering that metrics selection is a prerequisite of benchmark selection in evaluation implementations, this work also supplements the existing research on benchmarking commercial Cloud services. Comment: 10 pages, Proceedings of the 13th ACM/IEEE International Conference on Grid Computing (Grid 2012), pp. 164-173, Beijing, China, September 20-23, 2012.
