
    An Empirical Study on the Discrepancy between Performance Testing Results from Virtual and Physical Environments

    Large software systems often undergo performance tests to ensure their capability to handle expected loads. These performance tests often consume large amounts of computing resources and time in order to exercise the system extensively and build confidence in the results. Making matters worse, ever-evolving field environments require frequent updates to the performance testing environment. In practice, virtual machines (VMs) are widely used to provide flexible and less costly environments for performance tests. However, the use of VMs may introduce confounding overhead (e.g., higher than expected memory utilization with unstable I/O traffic) into the testing environment and lead to unrealistic performance testing results. Yet, little research has studied the impact of using VMs on the results of performance testing activities. In this thesis, we evaluate the discrepancy between performance testing results from virtual and physical environments. We perform a case study on two open source systems, namely Dell DVD Store (DS2) and CloudStore. We conduct the same performance tests in both virtual and physical environments and compare the results based on the three aspects that are typically examined for performance testing results: 1) a single performance metric (e.g., CPU usage from the virtual environment vs. CPU usage from the physical environment), 2) the relationship between two performance metrics (e.g., the correlation between CPU usage and I/O traffic), and 3) statistical performance models built to predict system performance. Our results show that 1) a single metric from virtual and physical environments does not follow the same distribution, hence practitioners cannot simply use a scaling factor to compare performance between environments, 2) correlations among performance metrics in virtual environments differ from those in physical environments, and 3) statistical models built on performance metrics from virtual environments differ from the models built from physical environments, suggesting that practitioners cannot reuse performance testing results across virtual and physical environments. To assist practitioners in leveraging performance testing results in both environments, we investigate ways to reduce the discrepancy. We find that such discrepancy may be reduced by normalizing performance metrics based on deviance. Overall, we suggest that practitioners should not use performance testing results from virtual environments under the simple assumption of a straightforward performance overhead. Instead, practitioners and future research should investigate normalization techniques to reduce the discrepancy before examining performance testing results from virtual and physical environments.
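    As a rough illustration of the three kinds of comparisons described above, the Python sketch below compares the distribution of a single metric across environments, contrasts the CPU-to-I/O correlation per environment, and applies a deviance-style normalization before re-testing. The sample arrays (cpu_virtual, cpu_physical, io_virtual, io_physical) are synthetic placeholders, and the normalization shown (centering on the median and scaling by the median absolute deviation) is one plausible reading of "normalizing based on deviance", not necessarily the thesis's exact formula.

# Minimal sketch (not the thesis's exact method): compare one performance
# metric across virtual and physical environments, compare metric
# correlations, and apply a deviance-style normalization.
import numpy as np
from scipy import stats

# Hypothetical samples of the same metrics collected in both environments.
cpu_virtual = np.random.default_rng(0).normal(55, 8, 500)   # % CPU, virtual
cpu_physical = np.random.default_rng(1).normal(40, 5, 500)  # % CPU, physical
io_virtual = 0.6 * cpu_virtual + np.random.default_rng(2).normal(0, 4, 500)
io_physical = 0.3 * cpu_physical + np.random.default_rng(3).normal(0, 4, 500)

# 1) Single metric: do the two samples follow the same distribution?
ks_stat, ks_p = stats.ks_2samp(cpu_virtual, cpu_physical)
print(f"KS test: stat={ks_stat:.3f}, p={ks_p:.3g}")  # small p -> distributions differ

# 2) Relationship between two metrics: compare correlations per environment.
rho_v, _ = stats.spearmanr(cpu_virtual, io_virtual)
rho_p, _ = stats.spearmanr(cpu_physical, io_physical)
print(f"CPU~I/O correlation: virtual={rho_v:.2f}, physical={rho_p:.2f}")

# 3) Deviance-style normalization (assumption: center on the median and scale
#    by the median absolute deviation) before comparing across environments.
def normalize(metric):
    med = np.median(metric)
    mad = np.median(np.abs(metric - med))
    return (metric - med) / mad

ks_norm, p_norm = stats.ks_2samp(normalize(cpu_virtual), normalize(cpu_physical))
print(f"KS test after normalization: stat={ks_norm:.3f}, p={p_norm:.3g}")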

    Scalability performance measurement and testing of cloud-based software services

    Cloud-based software services have become more popular and dependable and are ideal for businesses with growing or changing workload demands. These services are increasing rapidly due to reduced hosting costs and the increased availability and efficiency of computing resources. The delivery of cloud-based software services rests on the underlying cloud infrastructure supplied by cloud providers, which offers the potential for scalability under the pay-as-you-go model. Performance and scalability testing and measurement of these services are necessary for future optimisation and growth of cloud computing, to support Service Level Agreement (SLA) compliant quality of cloud services, especially in the context of a rapidly expanding volume of service delivery. This thesis addresses an important issue: understanding the scalability of cloud-based software services from a technical perspective, which is increasingly important as more software solutions are migrated to the cloud. A novel approach for testing and quantifying the scalability performance of cloud-based software services is described. Two technical scalability metrics for software services deployed and distributed in cloud environments have been formulated: a volume scalability metric and a quality scalability metric, based on the number of software instances and the average response time. The experimental analysis comprises three stages. The first stage demonstrates the approach and the metrics using a real-world cloud-based software service running on the Amazon EC2 cloud under three demand scenarios. The second stage extends the practicality of the metrics with experiments on two public cloud environments (Amazon EC2 and Microsoft Azure) and two cloud-based software services to demonstrate the use of these metrics. The experimental analysis considers three sets of comparisons to establish the metrics as a basis that can be used effectively to compare the scalability of software on cloud environments, consequently supporting deployment decisions with technical arguments. Moreover, the work integrates the technical scalability metrics with an earlier utility-oriented scalability metric. The third stage is a case study of application-level fault injection using real-world cloud-based software services running on the Amazon EC2 cloud to demonstrate the effect of fault scenarios on scalability behaviour. The results show that the technical metrics explicitly quantify the technical scalability performance of cloud-based software services, and that they allow clear assessment of the impact of demand scenarios, cloud platform and fault injection on the software services' scalability behaviour. The studies undertaken in this thesis have provided valuable insight into the scalability of cloud-based software service delivery.
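    The abstract names volume and quality scalability metrics built from instance counts and average response time but does not give their formulas, so the short Python sketch below only illustrates how such metrics could be derived from test observations. The Observation fields, the two ratio-based formulas, and the numbers are assumptions for demonstration, not the thesis's definitions.

# Illustrative sketch only: the exact metric definitions are not given in the
# abstract, so the formulas below are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Observation:
    demand: float           # offered load (e.g., requests per second)
    instances: int          # software instances provisioned at that load
    avg_response_ms: float  # average response time measured at that load

# Hypothetical measurements from a scalability test on a cloud platform.
baseline = Observation(demand=100, instances=2, avg_response_ms=120)
scaled   = Observation(demand=400, instances=6, avg_response_ms=150)

# Assumed "volume" notion: how demand growth compares with instance growth
# (closer to 1.0 means instances scale roughly in proportion to load).
volume_scalability = (scaled.demand / baseline.demand) / (scaled.instances / baseline.instances)

# Assumed "quality" notion: how well average response time is preserved as
# demand grows (1.0 means no degradation, lower means degradation).
quality_scalability = baseline.avg_response_ms / scaled.avg_response_ms

print(f"volume scalability  ~ {volume_scalability:.2f}")
print(f"quality scalability ~ {quality_scalability:.2f}")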