
    Measuring the Scalability of Cloud-based Software Services

    Measuring and testing the performance of cloud-based software services is critically important in the context of the rapid growth of cloud computing. Scalability, elasticity and efficiency are interrelated aspects of the performance of cloud-based software services. Here we present work focused on measuring the scalability of cloud-based software services in technical terms. We introduce technical scalability metrics inspired by earlier technical metrics of elasticity.
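
    The excerpt does not spell out the metric definitions, so the following is only a minimal sketch of the general idea of a technical scalability metric: relate the throughput gained to the resources added. The function name, the formula, and the sample numbers are assumptions for illustration, not the authors' metrics.

    # Hypothetical illustration only: a simple throughput-based scalability
    # ratio computed from (instance_count, throughput) measurements.
    # The metric name and formula are assumptions, not the paper's definitions.

    def scalability_ratio(measurements):
        """measurements: list of (instances, throughput_req_per_s) pairs,
        ordered by increasing instance count."""
        base_n, base_tp = measurements[0]
        ratios = []
        for n, tp in measurements[1:]:
            ideal_gain = n / base_n          # linear scaling would give this
            actual_gain = tp / base_tp       # what the service actually delivered
            ratios.append(actual_gain / ideal_gain)
        # 1.0 = perfectly linear scaling, <1.0 = sub-linear, >1.0 = super-linear
        return sum(ratios) / len(ratios)

    if __name__ == "__main__":
        samples = [(1, 480.0), (2, 910.0), (4, 1700.0), (8, 3050.0)]  # illustrative data
        print(f"mean scalability ratio: {scalability_ratio(samples):.2f}")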

    Cloud based testing of business applications and web services

    This paper deals with testing of applications based on the principles of cloud computing. It aims to describe options for testing business software in clouds (cloud testing). It identifies the needs for cloud testing tools, including multi-layer testing, service level agreement (SLA) based testing, large-scale simulation, and on-demand test environments. In a cloud-based model, ICT services are distributed and accessed over networks such as an intranet or the internet; large data centers deliver on-demand resources as a service, eliminating the need for investments in specific hardware, software, or data center infrastructure. Businesses can apply these new technologies in the context of intellectual capital management to lower costs and increase competitiveness and earnings. Based on a comparison of testing tools and techniques, the paper further investigates future trends in cloud-based testing tool research and development. Notably, this comparison and classification of testing tools covers a new area and has not been done before.

    On a Catalogue of Metrics for Evaluating Commercial Cloud Services

    Given the continually increasing number of commercial Cloud services in the market, evaluation of different services plays a significant role in cost-benefit analysis and decision making when choosing Cloud Computing. In particular, employing suitable metrics is essential in evaluation implementations. However, to the best of our knowledge, there has been no systematic discussion of metrics for evaluating Cloud services. Using the method of Systematic Literature Review (SLR), we have collected the de facto metrics adopted in existing Cloud services evaluation work. The collected metrics were arranged according to the different Cloud service features to be evaluated, which essentially constitutes an evaluation metrics catalogue, as shown in this paper. This metrics catalogue can be used to facilitate future practice and research in the area of Cloud services evaluation. Moreover, since metrics selection is a prerequisite of benchmark selection in evaluation implementations, this work also supplements the existing research on benchmarking commercial Cloud services. (Comment: 10 pages, Proceedings of the 13th ACM/IEEE International Conference on Grid Computing (Grid 2012), pp. 164-173, Beijing, China, September 20-23, 2012)

    Towards a Taxonomy of Performance Evaluation of Commercial Cloud Services

    Cloud Computing, as one of the most promising computing paradigms, has become increasingly accepted in industry. Numerous commercial providers have started to supply public Cloud services, and corresponding performance evaluation is then inevitably required for Cloud provider selection or cost-benefit analysis. Unfortunately, inaccurate and confusing evaluation implementations can often be seen in the context of commercial Cloud Computing, which can severely interfere with and spoil evaluation-related comprehension and communication. This paper introduces a taxonomy to help profile and standardize the details of performance evaluation of commercial Cloud services. Through a systematic literature review, we constructed the taxonomy along two dimensions by arranging the atomic elements of Cloud-related performance evaluation. As such, the proposed taxonomy can be employed both to analyze existing evaluation practices by decomposing them into elements and to design new experiments by composing elements for evaluating the performance of commercial Cloud services. Moreover, through smooth expansion, this taxonomy can be continually adapted to the more general area of evaluation of Cloud Computing. (Comment: 8 pages, Proceedings of the 5th International Conference on Cloud Computing (IEEE CLOUD 2012), pp. 344-351, Honolulu, Hawaii, USA, June 24-29, 2012)

    How blockchain impacts cloud-based system performance: a case study for a groupware communication application

    This paper examines the performance trade-off when implementing a blockchain architecture for a cloud-based groupware communication application. We measure the additional cloud-based resources and performance costs of the overhead required to implement a groupware collaboration system over a blockchain architecture. To evaluate our groupware application, we develop measuring instruments for testing the scalability and performance of computer systems deployed as cloud computing applications. While some details of our groupware collaboration application have been published in earlier work, in this paper we reflect on a generalized measuring method for blockchain-enabled applications, which may in turn lead to a general methodology for testing cloud-based system performance and scalability using blockchain. Response time and transaction throughput metrics are collected for the blockchain implementation and the non-blockchain implementation, and some conclusions are drawn about the additional resources that a blockchain architecture imposes on a groupware collaboration application.
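
    The paper's own measuring instruments are not detailed in this excerpt. The sketch below is a minimal, hedged illustration of how response time and transaction throughput could be collected for a blockchain and a non-blockchain deployment; the endpoint URLs, request count, and concurrency level are illustrative placeholders, not the authors' setup.

    # Minimal load-measurement sketch (assumed endpoints, not the paper's setup):
    # sends N concurrent requests to each deployment and reports mean response
    # time and achieved throughput so the two variants can be compared.
    import time
    import concurrent.futures
    import urllib.request

    def timed_request(url):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    def measure(url, requests=200, concurrency=20):
        start = time.perf_counter()
        with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = list(pool.map(timed_request, [url] * requests))
        elapsed = time.perf_counter() - start
        return {
            "mean_response_s": sum(latencies) / len(latencies),
            "throughput_tps": requests / elapsed,
        }

    if __name__ == "__main__":
        # Hypothetical deployment URLs for the two implementations under test.
        for name, url in [("baseline", "http://groupware-baseline.example/api/message"),
                          ("blockchain", "http://groupware-chain.example/api/message")]:
            print(name, measure(url))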