Towards a Taxonomy of Performance Evaluation of Commercial Cloud Services
Cloud Computing, as one of the most promising computing paradigms, has become
increasingly accepted in industry. Numerous commercial providers have started
to supply public Cloud services, and performance evaluation is consequently
required for Cloud provider selection and cost-benefit analysis.
Unfortunately, inaccurate and confusing evaluation implementations can often
be seen in the context of commercial Cloud Computing, which severely hinders
evaluation-related comprehension and communication. This
paper introduces a taxonomy to help profile and standardize the details of
performance evaluation of commercial Cloud services. Through a systematic
literature review, we constructed the taxonomy along two dimensions by
arranging the atomic elements of Cloud-related performance evaluation. As such,
this proposed taxonomy can be employed both to analyze existing evaluation
practices through decomposition into elements and to design new experiments
through composing elements for evaluating performance of commercial Cloud
services. Moreover, the taxonomy can be smoothly expanded and continually
adapted to the more general area of Cloud Computing evaluation.
Comment: 8 pages, Proceedings of the 5th International Conference on Cloud Computing (IEEE CLOUD 2012), pp. 344-351, Honolulu, Hawaii, USA, June 24-29, 2012
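Since the abstract describes analyzing experiments by decomposing them into atomic elements and designing new ones by composing elements, a minimal Python sketch of that compose/decompose idea follows; the two dimension labels and element names are hypothetical illustrations, not the paper's actual taxonomy.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Element:
    # Hypothetical dimensions and names; the paper's actual taxonomy differs.
    dimension: str  # e.g. "performance feature" or "experiment setup"
    name: str       # e.g. "communication latency" or "workload generator"

@dataclass
class EvaluationExperiment:
    service: str
    elements: set = field(default_factory=set)

    def compose(self, element: Element) -> None:
        """Design an experiment by adding a taxonomy element."""
        self.elements.add(element)

    def decompose(self) -> list:
        """Analyze an experiment by listing its atomic elements."""
        return sorted(self.elements, key=lambda e: (e.dimension, e.name))

# Profiling a hypothetical evaluation of a commercial Cloud service.
exp = EvaluationExperiment(service="ExampleCloud VM")
exp.compose(Element("performance feature", "communication latency"))
exp.compose(Element("experiment setup", "workload generator"))
for e in exp.decompose():
    print(f"{e.dimension}: {e.name}")
```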
A Factor Framework for Experimental Design for Performance Evaluation of Commercial Cloud Services
Given the diversity of commercial Cloud services, performance evaluations of
candidate services would be crucial and beneficial for both service customers
(e.g. cost-benefit analysis) and providers (e.g. direction of service
improvement). Before an evaluation implementation, the selection of suitable
factors (also called parameters or variables) plays a prerequisite role in
designing evaluation experiments. However, there seems to be a lack of
systematic approaches to factor selection for Cloud services performance
evaluation; in other words, in most of the existing evaluation studies,
evaluators selected experimental factors in an ad hoc and intuitive manner.
Based on our previous taxonomy and
modeling work, this paper proposes a factor framework for experimental design
for performance evaluation of commercial Cloud services. This framework
encapsulates the state of the practice of performance evaluation factors
currently taken into account in the Cloud Computing domain, and in turn can
help facilitate the design of new experiments for evaluating Cloud services.
Comment: 8 pages, Proceedings of the 4th International Conference on Cloud Computing Technology and Science (CloudCom 2012), pp. 169-176, Taipei, Taiwan, December 03-06, 2012
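As a rough illustration of how a factor framework can drive experimental design, the sketch below enumerates a full-factorial design over a handful of factors; the factor names and levels are made-up assumptions, not the framework's actual contents.

```python
from itertools import product

# Made-up factors and levels standing in for the framework's entries.
factors = {
    "workload intensity": ["light", "heavy"],
    "VM type": ["small", "large"],
    "region": ["us-east", "eu-west"],
}

def full_factorial(factors: dict) -> list:
    """Enumerate every factor-level combination as one experimental trial."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

for trial in full_factorial(factors):
    print(trial)  # each dict is one run to execute against the Cloud service
```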
On a Catalogue of Metrics for Evaluating Commercial Cloud Services
Given the continually increasing number of commercial Cloud services in the
market, evaluation of different services plays a significant role in
cost-benefit analysis or decision making for choosing Cloud Computing. In
particular, employing suitable metrics is essential in evaluation
implementations. However, to the best of our knowledge, there is no
systematic discussion of metrics for evaluating Cloud services. Using the
method of Systematic Literature Review (SLR), we have collected the de facto
metrics adopted in the existing Cloud services evaluation work. The collected
metrics were arranged according to the different Cloud service features to be
evaluated, which essentially constitutes an evaluation metrics catalogue, as
shown in this paper. This metrics catalogue can be used to facilitate the
future practice and research in the area of Cloud services evaluation.
Moreover, considering that metrics selection is a prerequisite of benchmark
selection in evaluation implementations, this work also supplements the
existing research on benchmarking commercial Cloud services.
Comment: 10 pages, Proceedings of the 13th ACM/IEEE International Conference on Grid Computing (Grid 2012), pp. 164-173, Beijing, China, September 20-23, 2012
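To show how a feature-indexed metrics catalogue might be consulted when planning an evaluation, here is a minimal sketch; the feature names and metrics are illustrative assumptions, not the catalogue's actual entries.

```python
# Illustrative catalogue keyed by the Cloud service feature to be evaluated.
metrics_catalogue = {
    "communication": ["latency (ms)", "throughput (Mb/s)"],
    "computation": ["FLOPS", "benchmark runtime (s)"],
    "storage": ["IOPS", "read/write bandwidth (MB/s)"],
}

def metrics_for(features: list) -> dict:
    """Look up candidate metrics for the features under evaluation."""
    return {f: metrics_catalogue.get(f, []) for f in features}

# Planning an evaluation that covers storage and communication.
print(metrics_for(["storage", "communication"]))
```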
On Evaluating Commercial Cloud Services: A Systematic Review
Background: Cloud Computing is booming in industry, with many competing
providers and services. Accordingly, evaluation of commercial Cloud services
is necessary. However, the existing evaluation studies are relatively
chaotic: there is a tremendous gap, and much confusion, between practice and
theory in Cloud services evaluation. Aim: To help relieve the aforementioned
chaos, this work synthesizes the existing evaluation implementations to
outline the state of the practice and identify research
opportunities in Cloud services evaluation. Method: Based on a conceptual
evaluation model comprising six steps, the Systematic Literature Review (SLR)
method was employed to collect relevant evidence and investigate Cloud
services evaluation step by step. Results: This SLR identified 82 relevant
evaluation studies. The overall data collected from these studies essentially
represent the current practical landscape of implementing Cloud services
evaluation, and in turn can be reused to facilitate future evaluation work.
Conclusions: Evaluation of commercial Cloud services has become a worldwide
research topic. Some of the findings of this SLR identify several research gaps
in the area of Cloud services evaluation (e.g., the Elasticity and Security
evaluation of commercial Cloud services could be a long-term challenge), while
some other findings suggest the trend of applying commercial Cloud services
(e.g., compared with PaaS, IaaS seems more suitable for customers and is
particularly important in industry). This SLR study itself also confirms some
previous experiences and reveals new Evidence-Based Software Engineering (EBSE)
lessons.
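The abstract refers to a six-step conceptual evaluation model without naming the steps; purely as a sketch of how evidence from the 82 studies could be organized step by step, the snippet below uses hypothetical step labels that should not be read as the model's actual steps.

```python
# Hypothetical step labels; the paper's six-step model is not named here.
EVALUATION_STEPS = [
    "requirement recognition",
    "service feature identification",
    "metrics and benchmarks listing",
    "metrics and benchmarks selection",
    "experiment design",
    "experiment implementation and analysis",
]

def bucket_by_step(studies: list) -> dict:
    """Group reviewed studies under the evaluation step they evidence."""
    buckets = {step: [] for step in EVALUATION_STEPS}
    for study_id, step in studies:
        buckets[step].append(study_id)
    return buckets

# Two made-up study records standing in for SLR data extraction results.
print(bucket_by_step([("S01", "experiment design"),
                      ("S02", "metrics and benchmarks selection")]))
```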
Early Observations on Performance of Google Compute Engine for Scientific Computing
Although Cloud computing emerged for business applications in industry,
public Cloud services have been widely accepted and encouraged for scientific
computing in academia. The recently available Google Compute Engine (GCE) is
claimed to support high-performance and computationally intensive tasks, yet
few evaluation studies can be found that reveal GCE's scientific capabilities.
Considering that fundamental performance benchmarking is a common strategy for
early-stage evaluation of new Cloud services, we followed the Cloud Evaluation
Experiment Methodology (CEEM) to benchmark GCE and also compare it with Amazon
EC2, to help understand the elementary capability of GCE for dealing with
scientific problems. The experimental results and analyses show both potential
advantages of, and possible threats to, applying GCE to scientific computing.
For example, compared to Amazon's EC2 service, GCE may better suit applications
that require frequent disk operations, while it may not be ready yet for single
VM-based parallel computing. Following the same evaluation methodology,
different evaluators can replicate and/or supplement this fundamental
evaluation of GCE. Based on the fundamental evaluation results, suitable GCE
environments can be further established for case studies of solving real
science problems.
Comment: Proceedings of the 5th International Conference on Cloud Computing Technology and Science (CloudCom 2013), pp. 1-8, Bristol, UK, December 2-5, 2013
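Because the abstract singles out frequent disk operations as a workload where GCE may have an edge over EC2, here is a minimal sketch of the kind of sequential-write micro-benchmark an evaluator might run identically on both services; the file size, block size, and path are arbitrary assumptions, and this is not the paper's CEEM tooling.

```python
import os
import time

def sequential_write_bandwidth(path="bench.tmp", total_mb=256, block_kb=1024):
    """Time a sequential write and report bandwidth in MB/s.

    A crude sketch: a real disk evaluation would also bypass or flush
    caches, repeat runs, and report variance across repetitions.
    """
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

# Run the same benchmark on a GCE VM and an EC2 VM, then compare.
print(f"{sequential_write_bandwidth():.1f} MB/s")
```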
Computing server power modeling in a data center: survey, taxonomy and performance evaluation
Data centers are large-scale, energy-hungry infrastructures serving
increasing computational demands as the world becomes more connected through
smart cities. The emergence of advanced technologies such as cloud-based
services, the Internet of Things (IoT), and big data analytics has augmented
the growth of global data centers, leading to high energy consumption. This
upsurge in data center energy consumption not only incurs surging operational
and maintenance costs but also has an adverse effect on the environment.
Dynamic power management in a data center environment requires awareness of
the correlation between system- and hardware-level performance counters and
power consumption. Power consumption modeling captures this
correlation and is crucial in designing energy-efficient optimization
strategies based on resource utilization. Several power models have been
proposed and used in the literature. However, these models have been
evaluated using different benchmarking applications, power measurement
techniques, and error calculation formulas on different machines. In this work,
we present a taxonomy and evaluation of 24 software-based power models using a
unified environment, benchmarking applications, power measurement technique and
error formula, with the aim of achieving an objective comparison. We use
different server architectures to assess the impact of heterogeneity on the
models' comparison. The performance analysis of these models is elaborated
in the paper.
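To make the idea of a software-based power model concrete, the sketch below implements one classic form, a linear model mapping CPU utilization to power draw, scored with the mean absolute percentage error (MAPE); the idle/peak wattages and samples are made-up illustrations, and the 24 surveyed models are generally more elaborate.

```python
def linear_power_model(cpu_util, p_idle=100.0, p_max=250.0):
    """Classic utilization-based model: P(u) = P_idle + (P_max - P_idle) * u.

    The 100 W idle and 250 W peak figures are illustrative assumptions.
    """
    return p_idle + (p_max - p_idle) * cpu_util

def mape(measured, predicted):
    """Mean absolute percentage error, a common score for power models."""
    return 100.0 * sum(abs(m - p) / m for m, p in zip(measured, predicted)) / len(measured)

# Made-up (CPU utilization, measured watts) pairs standing in for wattmeter readings.
samples = [(0.1, 118.0), (0.5, 176.0), (0.9, 239.0)]
predicted = [linear_power_model(u) for u, _ in samples]
measured = [w for _, w in samples]
print(f"MAPE = {mape(measured, predicted):.1f}%")
```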