Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing
High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of the Amazon Elastic Compute Cloud (EC2) environment across several HPC benchmarks; however, an extensive HPC benchmark study and a comparison between Amazon EC2 and Windows Azure (Microsoft's cloud computing platform), with metrics such as memory bandwidth, Input/Output (I/O) performance, and communication and computational performance, are largely absent. The purpose of this study is to perform an exhaustive HPC benchmark comparison on the EC2 and Windows Azure platforms.
We implement existing benchmarks to evaluate and analyze the performance of two public clouds spanning both IaaS and PaaS types. We use Amazon EC2 and Windows Azure as platforms for hosting HPC benchmarks, with variations in instance type, number of nodes, hardware, and software. This is accomplished by running benchmarks including STREAM, IOR, and NPB on these platforms on a varied number of nodes for small and medium instance types. These benchmarks measure memory bandwidth, I/O performance, and communication and computational performance. Benchmarking cloud platforms provides useful objective measures of their worthiness for HPC applications, in addition to assessing their consistency and predictability in supporting them.
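As a flavor of what the memory-bandwidth component measures, here is a minimal STREAM-like "triad" probe. It is a sketch only: a single-threaded NumPy approximation, whereas the actual STREAM benchmark is a compiled C/OpenMP program.

```python
# A minimal STREAM-like "triad" probe (a = b + s*c) -- a sketch only.
# The real STREAM benchmark is a compiled C/OpenMP program; NumPy's
# intermediate temporaries make this figure approximate.
import time
import numpy as np

N = 50_000_000                       # ~400 MB per float64 array
b = np.random.rand(N)
c = np.random.rand(N)
a = np.empty_like(b)
scalar = 3.0

start = time.perf_counter()
a[:] = b + scalar * c                # triad kernel
elapsed = time.perf_counter() - start

# STREAM counts triad traffic as three arrays: read b, read c, write a.
bytes_moved = 3 * N * 8
print(f"triad bandwidth: {bytes_moved / elapsed / 1e9:.2f} GB/s")
```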
Early Observations on Performance of Google Compute Engine for Scientific Computing
Although Cloud computing emerged for business applications in industry, public Cloud services have been widely accepted and encouraged for scientific computing in academia. The recently available Google Compute Engine (GCE) is claimed to support high-performance and computationally intensive tasks, yet few evaluation studies exist that reveal GCE's scientific capabilities. Considering that fundamental performance benchmarking is the standard strategy for early-stage evaluation of new Cloud services, we followed the Cloud Evaluation Experiment Methodology (CEEM) to benchmark GCE and compare it with Amazon EC2, to help understand GCE's elementary capability for dealing with scientific problems. The experimental results and analyses show both potential advantages of, and possible threats to, applying GCE to scientific computing. For example, compared to Amazon's EC2 service, GCE may better suit applications that require frequent disk operations, while it may not yet be ready for single-VM-based parallel computing. Following the same evaluation methodology, different evaluators can replicate and/or supplement this fundamental evaluation of GCE. Based on the fundamental evaluation results, suitable GCE environments can be further established for case studies of solving real science problems.
Comment: Proceedings of the 5th International Conference on Cloud Computing Technologies and Science (CloudCom 2013), pp. 1-8, Bristol, UK, December 2-5, 2013
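The disk-operation gap reported above is the kind of result a simple sequential-write probe can surface. Below is a minimal sketch of such a probe; the scratch path and sizes are illustrative, and serious evaluations would use an established tool such as Bonnie++ or fio.

```python
# A crude sequential disk-write probe, in the spirit of the disk
# benchmarks used for VM evaluation (illustrative sketch only).
import os
import time

PATH = "/tmp/disk_probe.bin"        # hypothetical scratch file
SIZE = 512 * 1024 * 1024            # 512 MB total
CHUNK = 4 * 1024 * 1024             # written in 4 MB chunks

buf = os.urandom(CHUNK)
start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())            # force data to disk, not page cache
write_s = time.perf_counter() - start
print(f"sequential write: {SIZE / write_s / 1e6:.1f} MB/s")
os.remove(PATH)
```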
On a Catalogue of Metrics for Evaluating Commercial Cloud Services
Given the continually increasing number of commercial Cloud services in the market, evaluation of different services plays a significant role in cost-benefit analysis and decision making when choosing Cloud Computing. In particular, employing suitable metrics is essential in evaluation implementations. However, to the best of our knowledge, there is no systematic discussion of metrics for evaluating Cloud services. Using the method of Systematic Literature Review (SLR), we have collected the de facto metrics adopted in existing Cloud services evaluation work. The collected metrics were arranged according to the different Cloud service features to be evaluated, which essentially constitutes an evaluation metrics catalogue, as shown in this paper. This metrics catalogue can be used to facilitate future practice and research in the area of Cloud services evaluation. Moreover, considering that metrics selection is a prerequisite of benchmark selection in evaluation implementations, this work also supplements the existing research on benchmarking commercial Cloud services.
Comment: 10 pages, Proceedings of the 13th ACM/IEEE International Conference on Grid Computing (Grid 2012), pp. 164-173, Beijing, China, September 20-23, 2012
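To make the idea of a feature-indexed catalogue concrete, a toy rendering follows; the feature names, metrics, and units are common examples, not the paper's actual catalogue entries.

```python
# A toy rendering of a metrics catalogue keyed by Cloud service feature
# (feature and metric names are illustrative examples only).
catalogue = {
    "Communication": [("latency", "ms"), ("throughput", "Mbit/s")],
    "Computation":   [("FLOPS", "GFLOPS"), ("benchmark runtime", "s")],
    "Memory":        [("bandwidth", "GB/s"), ("access latency", "ns")],
    "Storage":       [("sequential I/O", "MB/s"), ("IOPS", "ops/s")],
    "Economics":     [("cost", "USD/hour"), ("cost-performance", "USD/GFLOPS")],
}

def metrics_for(feature: str):
    """Look up candidate metrics for evaluating the given service feature."""
    return catalogue.get(feature, [])

print(metrics_for("Storage"))
```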
On Evaluating Commercial Cloud Services: A Systematic Review
Background: Cloud Computing is booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic, and there is tremendous confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve this chaos, this work aims to synthesize the existing evaluation implementations, outline the state of the practice, and identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The overall data collected from these studies represent the current practical landscape of Cloud services evaluation, and in turn can be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some findings of this SLR identify research gaps in the area of Cloud services evaluation (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while other findings suggest trends in applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study itself also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.
MongoDB Performance In The Cloud
Web applications are growing at a staggering rate, and as they become more complex, their data storage requirements tend to grow exponentially. Databases play an important role in how web applications store their information. MongoDB is a document store database that does not have the strict schemas that RDBMSs require and can grow horizontally without performance degradation. MongoDB opens up possibilities for different storage scenarios and allows programmers to use the database as storage that fits their needs, not the other way around. Scaling MongoDB horizontally can require tens to hundreds of servers, making such a setup very difficult to afford on dedicated hardware. Moving the database into the cloud opens up the possibility of using virtual machine instances at reasonable prices. There are many cloud services to choose from, and without performance tests of each one, very little information is available. This paper provides benchmarks on the performance of MongoDB in the cloud.
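The paper's own benchmarks are not reproduced here, but a throughput probe of the sort involved can be sketched with pymongo. The host, database name, and document shape below are assumptions, and dedicated harnesses such as YCSB are more commonly used for this kind of measurement.

```python
# A minimal MongoDB insert/query throughput probe using pymongo
# (hostname, database, and document shape are illustrative assumptions).
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical instance
coll = client.benchdb.docs
coll.drop()

docs = [{"seq": i, "payload": "x" * 256} for i in range(100_000)]

start = time.perf_counter()
coll.insert_many(docs)
insert_s = time.perf_counter() - start
print(f"inserts/s: {len(docs) / insert_s:,.0f}")

start = time.perf_counter()
count = coll.count_documents({"seq": {"$lt": 50_000}})
print(f"range query matched {count} docs in "
      f"{time.perf_counter() - start:.3f}s")
```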
Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking
To optimally deploy their applications, users of Infrastructure-as-a-Service clouds need to evaluate the costs and performance of different combinations of cloud configurations to find out which combination provides the best service level for their specific application. Unfortunately, benchmarking cloud services is cumbersome and error-prone. In this paper, we propose an architecture and concrete implementation of a cloud benchmarking Web service, which fosters the definition of reusable and representative benchmarks. In contrast to existing work, our system is based on the notion of Infrastructure-as-Code, a state-of-the-art concept for defining IT infrastructure in a reproducible, well-defined, and testable way. We demonstrate our system with an illustrative case study, in which we measure and compare the disk I/O speeds of different instance and storage types in Amazon EC2.
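Cloud WorkBench's own tooling is not shown in the abstract; as a rough illustration of the Infrastructure-as-Code idea (the provisioning steps and the benchmark live together in versionable code), here is a sketch using boto3. The AMI ID, region, instance type, and benchmark command are placeholders.

```python
# An Infrastructure-as-Code flavored benchmark run: provision a VM,
# execute a disk benchmark at boot, then terminate. Sketch only -- the
# AMI ID, region, instance type, and benchmark command are placeholders,
# and the actual Cloud WorkBench tooling is not reproduced here.
import boto3

USER_DATA = """#!/bin/bash
dd if=/dev/zero of=/tmp/probe bs=1M count=1024 oflag=direct 2>> /tmp/result
"""

ec2 = boto3.resource("ec2", region_name="us-east-1")
(instance,) = ec2.create_instances(
    ImageId="ami-00000000",          # placeholder AMI
    InstanceType="m1.small",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,              # the benchmark is part of the definition
)
instance.wait_until_running()
print("benchmarking on", instance.id)
# ... collect /tmp/result (e.g., via SSH or an S3 upload), then clean up:
instance.terminate()
```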
Cloud Benchmarking for Performance
How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. We address it by proposing a six-step benchmarking methodology in which a user provides four weights indicating how important each of the following groups is to the application to be executed on the cloud: memory, processor, computation, and storage. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications: the first is a financial risk application and the second a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
Comment: 6 pages, 6th IEEE International Conference on Cloud Computing Technology and Science (IEEE CloudCom) 2014, Singapore
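The ranking step can be pictured with a small weighted-scoring sketch. The VM names, benchmark scores, weights, and the max-normalisation used below are illustrative assumptions, not the paper's exact procedure.

```python
# A sketch of weight-based VM ranking: normalise each benchmark group's
# score, then rank VMs by the weighted sum. All numbers are made up.
GROUPS = ("memory", "processor", "computation", "storage")

# Hypothetical benchmark results per VM type (higher = better).
vm_scores = {
    "m3.medium": {"memory": 4.1, "processor": 2.0, "computation": 1.9, "storage": 3.0},
    "c3.xlarge": {"memory": 6.5, "processor": 7.8, "computation": 8.1, "storage": 4.2},
    "i2.xlarge": {"memory": 6.0, "processor": 6.1, "computation": 5.9, "storage": 9.3},
}

def rank_vms(weights: dict) -> list:
    """Rank VMs by weighted, max-normalised group scores."""
    best = {g: max(s[g] for s in vm_scores.values()) for g in GROUPS}
    scored = {
        vm: sum(weights[g] * s[g] / best[g] for g in GROUPS)
        for vm, s in vm_scores.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# An I/O-heavy application weights storage most heavily:
print(rank_vms({"memory": 0.2, "processor": 0.1, "computation": 0.1, "storage": 0.6}))
```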
THE FEASIBILITY STUDY OF RUNNING HPC WORKLOADS ON COMPUTATIONAL CLOUDS
High-performance computing (HPC) applications require high-end computing systems, but not all scientists have access to such powerful systems. Cloud computing provides an opportunity to run these applications without investing in high-end parallel computing systems. We can analyze the performance of HPC applications on private as well as public clouds. The performance of a workload on the cloud can be measured using benchmarking tools such as the NAS Parallel Benchmarks (NPB) and Rally. Running HPC workloads on a physical setup requires many parallel computing systems, whereas a cloud computing environment provides this capacity without the need to invest in physical machines. We aim to analyze the ability of the cloud to perform well when running HPC workloads, as sketched below. We obtain detailed performance results from running these applications on a private cloud and identify the pros and cons of running HPC workloads in a cloud environment.
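Invoking an NPB kernel and scraping its reported throughput typically looks like the sketch below; the binary path, kernel, problem class, and rank count are placeholders for whatever build is installed.

```python
# Launching an NPB kernel (here the CG class-B kernel on 4 MPI ranks)
# and capturing its reported Mop/s. Binary path and parameters are
# placeholders for a locally built NPB installation.
import re
import subprocess

result = subprocess.run(
    ["mpirun", "-np", "4", "./NPB3.4-MPI/bin/cg.B.x"],
    capture_output=True, text=True, check=True,
)
match = re.search(r"Mop/s total\s*=\s*([\d.]+)", result.stdout)
if match:
    print(f"CG class B: {match.group(1)} Mop/s")
```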