
    Apache Axis2 Web Services

    Create secure, reliable, and easy-to-use web services using Apache Axis2

    Performance Overhead Among Three Hypervisors: An Experimental Study using Hadoop Benchmarks

    Abstract—Hypervisors are widely used in cloud environments, and their impact on application performance has been a topic of significant research and practical interest. We conduct experimental measurements of several benchmarks using Hadoop MapReduce to evaluate and compare the performance impact of three popular hypervisors: a commercial hypervisor, Xen, and KVM. We found that differences in workload type (CPU- or I/O-intensive), workload size, and VM placement yielded significant performance differences among the hypervisors. In our study, we used the three hypervisors to run several MapReduce benchmarks such as Word Count, TestDFSIO, and TeraSort, and we further validated our observations using microbenchmarks. We observed that for the CPU-bound benchmark the performance difference among the three hypervisors was negligible; however, significant performance variations were seen for the I/O-bound benchmarks. Moreover, adding more virtual machines on the same physical host degraded performance on all three hypervisors, yet we observed different degradation trends among them. Concretely, the commercial hypervisor is 46% faster at TestDFSIO Write than KVM, but 49% slower in the TeraSort benchmark. In addition, increasing the workload size for TeraSort yielded completion times for the commercial hypervisor that were two times those of Xen and KVM. The performance differences observed among the hypervisors suggest that further analysis and consideration of hypervisors is needed when deploying applications to cloud environments.
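
    The abstract names standard Hadoop benchmarks (Word Count, TestDFSIO, TeraSort). As a point of reference, the sketch below shows how such MapReduce benchmarks are commonly launched from a driver script; the jar names, HDFS paths, and data sizes are illustrative assumptions rather than the paper's actual configuration.

        # Hedged sketch: driving the Hadoop benchmarks mentioned above via the
        # standard example and job-client jars (paths and sizes are assumptions).
        import subprocess

        EXAMPLES_JAR = "hadoop-mapreduce-examples.jar"                 # teragen, terasort, wordcount
        JOBCLIENT_JAR = "hadoop-mapreduce-client-jobclient-tests.jar"  # ships TestDFSIO

        def run(job_args):
            """Launch a Hadoop job and fail loudly on a non-zero exit code."""
            subprocess.run(["hadoop", "jar"] + job_args, check=True)

        # TeraSort over data produced by TeraGen (row count is illustrative).
        run([EXAMPLES_JAR, "teragen", "100000000", "/bench/tera-in"])
        run([EXAMPLES_JAR, "terasort", "/bench/tera-in", "/bench/tera-out"])

        # TestDFSIO write phase (file count and per-file size in MB are illustrative).
        run([JOBCLIENT_JAR, "TestDFSIO", "-write", "-nrFiles", "10", "-fileSize", "1000"])

        # Word Count over a text input directory in HDFS.
        run([EXAMPLES_JAR, "wordcount", "/bench/text-in", "/bench/wc-out"])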

    Variations in Performance and Scalability when Migrating n-Tier Applications to Different Clouds

    Abstract—The increasing popularity of computing clouds continues to drive both industry and research to provide answers to a large variety of new and challenging questions. We aim to answer some of these questions by evaluating performance and scalability when an n-tier application is migrated from a traditional datacenter environment to an IaaS cloud. We used a representative n-tier macro-benchmark (RUBBoS) and compared its performance and scalability across three different cloud environments.

    Response Time Reliability in Cloud Environments: An Empirical Study of n-Tier Applications at High Resource Utilization

    Abstract—When running mission-critical web-facing applications (e.g., electronic commerce) in cloud environments, predictable response time, e.g., as specified in service level agreements (SLAs), is a major performance reliability requirement. Through extensive measurements of n-tier application benchmarks in a cloud environment, we study three factors that significantly impact the predictability of application response time: bursty workloads (typical of web-facing applications), soft resource management strategies (e.g., global thread pool or local thread pool), and bursts in system software consumption of hardware resources (e.g., Java Virtual Machine garbage collection). Using a set of profit-based performance criteria derived from typical SLAs, we show that response time reliability is brittle, with large response time variations (on the order of several seconds) depending on each of these factors. For example, for the same workload and hardware platform, modest increases in workload burstiness may result in profit drops of more than 50%. Our results show that profit-based performance criteria may contribute significantly to the successful delimitation of performance unreliability boundaries and thus support effective management of clouds. Keywords—performance reliability; response time prediction; n-tier; web application; profit model
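
    To make the profit-based criterion concrete, the following sketch computes profit from a trace of per-request response times under a single SLA threshold; the threshold, per-request revenue, and penalty values are illustrative assumptions, not the paper's parameters.

        # Hedged sketch of a profit-based SLA criterion: revenue for requests that
        # meet the response-time threshold, a penalty for those that miss it.
        def profit(response_times_ms, sla_ms=1000.0, revenue=0.01, penalty=0.05):
            """Total profit over a trace of per-request response times (milliseconds)."""
            return sum(revenue if rt <= sla_ms else -penalty for rt in response_times_ms)

        # A modest shift of requests past the threshold cuts profit sharply,
        # mirroring the kind of drop the abstract reports under bursty workloads.
        baseline = [200] * 950 + [1500] * 50    # 5% of requests violate the SLA
        bursty   = [200] * 850 + [1500] * 150   # 15% violate after a workload burst
        print(profit(baseline), profit(bursty))  # 7.0 vs. 1.0 with these assumed values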

    Economical and Robust Provisioning of N-Tier Cloud Workloads: A Multi-level Control Approach

    Abstract—Resource provisioning for N-tier web applications in clouds is non-trivial for at least two reasons. First, there is an inherent optimization conflict between the cost of resources and Service Level Agreement (SLA) compliance. Second, the resource demands of the multiple tiers can differ from each other and vary over time. Resources have to be allocated to multiple (virtual) containers to minimize the total amount of resources while meeting the end-to-end performance requirements of the application. In this paper, we address these two challenges by combining resource controllers at both the application and container levels. At the application level, a decision maker (i.e., an adaptive feedback controller) determines the total budget of resources required for the application to meet its SLA requirements as the workload varies. At the container level, a second controller allocates the budgeted resources among the individual containers.
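
    The two-level idea can be illustrated with a single control step: an application-level controller adjusts the total resource budget from the observed SLA error, and a container-level step divides that budget among the tiers. The proportional update rule, gain, and tier demands below are illustrative assumptions, not the adaptive controllers from the paper.

        # Hedged sketch of a multi-level provisioning step (all numbers are assumptions).
        def application_level_budget(current_budget, measured_rt, sla_rt, gain=0.5):
            """Grow the budget when response time exceeds the SLA target, shrink it otherwise."""
            error = (measured_rt - sla_rt) / sla_rt
            return max(0.0, current_budget * (1.0 + gain * error))

        def container_level_allocation(total_budget, tier_demands):
            """Split the total budget across tiers in proportion to their measured demands."""
            total_demand = sum(tier_demands.values()) or 1.0
            return {tier: total_budget * d / total_demand for tier, d in tier_demands.items()}

        # One control step: response time is 20% above target, so the budget grows,
        # and the most demanding tier receives the largest share.
        budget = application_level_budget(current_budget=8.0, measured_rt=1.2, sla_rt=1.0)
        print(container_level_allocation(budget, {"web": 1.0, "app": 2.0, "db": 3.0}))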