8 research outputs found

    Automatic, load-independent detection of performance regressions by transaction profiles

    Performance regression testing is an important step in the production process of enterprise applications. Yet, the analysis of this type of testing data is mainly conducted manually and depends on the load applied during the test. To ease this manual task, we present an automated, load-independent technique that detects performance regression anomalies by analysing performance testing data using a concept known as the Transaction Profile. The approach can be automated, and it uses data already available to the performance test along with a queueing network model of the system under test. The presented "Transaction Profile Run Report" was able to automatically catch performance regression anomalies caused by software changes and isolate them from those caused by load variations, with a precision of 80% in a case study conducted against an open source application. Hence, by deploying our system, testing teams can detect performance regression anomalies while avoiding the manual approach and eliminating the need for extra runs at varying loads.
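
    The abstract does not spell out the computation, but the Transaction Profile idea rests on the Service Demand Law, D_k = U_k / X, which is independent of the applied load. Below is a minimal sketch of a load-independent check built on that law; the function names and the 15% tolerance are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a load-independent regression check based on the
# Service Demand Law (D_k = U_k / X): service demand per transaction
# does not change with the applied load, so it can be compared across
# test runs made at different load levels.

def service_demands(utilization: dict, throughput: float) -> dict:
    """Per-resource service demand (seconds of resource time per
    transaction) for one test run: D_k = U_k / X."""
    return {res: u / throughput for res, u in utilization.items()}

def regression_detected(baseline: dict, candidate: dict,
                        tolerance: float = 0.15) -> bool:
    """Flag a regression when any resource's demand grows beyond the
    tolerance, regardless of the load used in either run."""
    return any(candidate[res] > baseline[res] * (1 + tolerance)
               for res in baseline)

# Baseline run at 50 req/s vs. candidate run at 80 req/s: the demands,
# not the raw utilizations, are what we compare.
base = service_demands({"cpu": 0.40, "disk": 0.20}, throughput=50.0)
cand = service_demands({"cpu": 0.80, "disk": 0.33}, throughput=80.0)
print(regression_detected(base, cand))  # True: CPU demand rose 8 ms -> 10 ms
```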

    Transaction profile estimation of queueing network models for IT systems using a search-based technique

    The software and hardware systems required to deliver modern Web-based services are becoming increasingly complex. Management and evolution of these systems require periodic analysis of performance and capacity to maintain quality of service and maximise efficient use of resources. In this work we present a method that uses a repeated local search technique to improve the accuracy of modelling such systems while also reducing the complexity and time required to perform this task. The accuracy of the model derived from the search-based approach is validated by extrapolating performance to multiple load levels, which enables system capacity and performance to be planned and managed more efficiently.
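
    As a rough illustration of the search-based idea, the sketch below fits the service demands of a closed queueing network by exact Mean Value Analysis plus a repeated (random-restart) local search. The error function, the perturbation scheme and all parameters are assumptions for illustration, not the paper's algorithm.

```python
import random

def mva_response_time(demands, n_users):
    """Exact single-class MVA: mean response time with n_users customers."""
    queue = [0.0] * len(demands)
    r = sum(demands)
    for n in range(1, n_users + 1):
        resid = [d * (1 + q) for d, q in zip(demands, queue)]
        r = sum(resid)                 # response time at population n
        x = n / r                      # throughput (no think time)
        queue = [x * ri for ri in resid]
    return r

def fit_demands(measured, n_stations=2, restarts=20, steps=200):
    """measured: list of (n_users, response_time) pairs from test runs."""
    def error(d):
        return sum((mva_response_time(d, n) - rt) ** 2 for n, rt in measured)
    best, best_err = None, float("inf")
    for _ in range(restarts):          # repeated local search with restarts
        d = [random.uniform(0.001, 0.1) for _ in range(n_stations)]
        for _ in range(steps):         # hill-climb: keep improving neighbours
            cand = [max(1e-6, di * random.uniform(0.9, 1.1)) for di in d]
            if error(cand) < error(d):
                d = cand
        err = error(d)
        if err < best_err:
            best, best_err = d, err
    return best

# Synthetic measurements generated from "true" demands of 20 ms and 50 ms;
# the search should recover values close to these (possibly permuted).
truth = [0.02, 0.05]
data = [(n, mva_response_time(truth, n)) for n in (1, 5, 10, 20)]
print(fit_demands(data))
```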

    Software contention aware queueing network model of three-tier Web systems (Work-In-Progress)

    Using modelling to predict the performance characteristics of software applications typically relies on Queueing Network Models that represent the various hardware resources of the system. Leaving software resources, such as a limited number of threads, out of such models reduces prediction accuracy. Accounting for software contention is a challenging task, as existing techniques to model software components are complex and require deep knowledge of the software architecture. Furthermore, they require complex measurement processes to obtain the model's service demands, and solving the resultant model usually requires simulation solvers, which are often time consuming. In this work, we aim to provide a simpler model for three-tier web software systems that accounts for software contention and can be solved by time-efficient analytical solvers. We achieve this by extending the existing "Two-Level Iterative Queuing Modelling of Software Contention" method to handle the number of threads at the Application Server tier and the number of Data Sources at the Database Server tier. This is done in a generic manner to allow the solution to be extended to other software components such as memory and critical sections. Initial results show that our technique clearly outperforms existing techniques.
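
    A much-simplified, one-pass rendering of the two-level idea: the hardware level is solved by exact MVA with concurrency capped at the thread pool size, and surplus clients wait for a thread. The paper's actual method iterates between the levels and covers more software resources; the waiting-time estimate below is an illustrative assumption.

```python
def mva(demands, n):
    """Exact single-class MVA; returns (throughput, response_time)."""
    q = [0.0] * len(demands)
    x, r = 0.0, sum(demands)
    for k in range(1, n + 1):
        resid = [d * (1 + qi) for d, qi in zip(demands, q)]
        r = sum(resid)
        x = k / r
        q = [x * ri for ri in resid]
    return x, r

def response_with_thread_pool(demands, n_clients, threads):
    """Hardware level is solved with concurrency capped at the pool size;
    clients beyond that queue for a thread (Little's-law-style estimate)."""
    in_service = min(n_clients, threads)
    x, r_hw = mva(demands, in_service)       # hardware (resource) level
    waiting = max(0, n_clients - threads)    # software level: queued clients
    return r_hw + waiting / x

demands = [0.02, 0.05]                       # CPU and disk demands (seconds)
for n in (5, 20, 50):
    print(n, round(response_with_thread_pool(demands, n, threads=10), 3))
```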

    Profile-based, load-independent anomaly detection and analysis in performance regression testing of software systems

    Performance evaluation through regression testing is an important step in the software production process. It aims to ensure that the performance of new releases does not regress under a field-like load. The main outputs of regression tests are metrics that represent the response time of various transactions as well as resource utilization (CPU, disk I/O and network). In this paper, we propose to use a concept known as the Transaction Profile, which provides a detailed, load-independent representation of a transaction, to detect anomalies across performance test runs. The approach uses data readily available in performance regression tests and a queueing network model of the system under test to infer the Transaction Profiles. Our initial results show that Transaction Profiles calculated from load regression test data uncover the performance impact of any update to the software. We therefore conclude that using Transaction Profiles is an effective approach to allow testing teams to easily ensure that each new software release does not suffer performance regression.
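
    The property the detection relies on can be shown in a few lines: under a pure load change the Transaction Profile stays approximately constant, while a software change shifts it. All numbers and the 10% tolerance below are made up for illustration.

```python
# A Transaction Profile here is the per-resource service demand vector
# (D_k = U_k / X); it is invariant to load, so a shift between two runs
# points to a software change rather than a heavier load.

def transaction_profile(utilizations, throughput):
    """D_k = U_k / X (Service Demand Law), per resource."""
    return {k: u / throughput for k, u in utilizations.items()}

def classify_run(baseline_tp, run_utils, run_throughput, tol=0.10):
    tp = transaction_profile(run_utils, run_throughput)
    shifted = any(abs(tp[k] - baseline_tp[k]) > tol * baseline_tp[k]
                  for k in tp)
    return "software regression" if shifted else "load variation only"

baseline = transaction_profile({"cpu": 0.30, "disk": 0.15}, 30.0)
# Heavier load, same code: utilizations scale with throughput.
print(classify_run(baseline, {"cpu": 0.60, "disk": 0.30}, 60.0))
# Same load, slower code path: CPU demand per transaction grew.
print(classify_run(baseline, {"cpu": 0.45, "disk": 0.15}, 30.0))
```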

    CQE: an approach to automatically estimate the code quality using an objective metric from an empirical study

    Bugs in a project, at any stage of the software development life cycle, are costly and difficult to find and fix; moreover, the later a bug is found, the more expensive it is to fix. Static analysis tools ease the process of finding bugs, but filtering the critical errors out of their results is hard and time consuming. To solve this problem we take two steps: first, we refine the severity ranking of the reported bugs; second, we estimate code quality with a Weighted Error Code Density metric. In an experiment on 10 widely used open-source Java applications, our objective metric automatically estimates their code quality. We also enhance the error ranking of FindBugs, providing a clear view of the critical errors to fix as well as the low-priority ones that can potentially be ignored.
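
    A minimal sketch of what a Weighted Error Code Density style metric could look like: weight each static-analysis finding by severity and normalise by code size. The severity weights and the per-KLOC normalisation are assumptions for illustration, not the paper's definition.

```python
# Illustrative severity weights; a real deployment would calibrate
# these (e.g. from the re-ranked FindBugs output the paper describes).
SEVERITY_WEIGHTS = {"critical": 10, "major": 5, "minor": 2, "info": 1}

def weighted_error_density(findings, lines_of_code):
    """findings: iterable of severity labels; returns weighted errors
    per 1000 lines of code (lower is better)."""
    weighted = sum(SEVERITY_WEIGHTS[sev] for sev in findings)
    return weighted / (lines_of_code / 1000.0)

# Two hypothetical projects of equal size: many minor issues can still
# score better than a few critical ones.
print(weighted_error_density(["minor"] * 20, 50_000))     # 0.8
print(weighted_error_density(["critical"] * 6, 50_000))   # 1.2
```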

    A cost-capacity analysis for assessing the efficiency of heterogeneous computing assets in an enterprise cloud

    Cloud providers and organizations with a large IT infrastructure manage evolving sets of hardware resources that are subject to continual change. As existing computing assets age, newer, more capable and more efficient ones are generally acquired. Significant variability among hardware components leads to inefficient use of computing assets within the organization. We claim that only a detailed understanding of the whole infrastructure will lead to significant optimizations and savings. In this paper we report results on a dataset of 1,171 assets from two different data centers, presenting a thorough analysis of how the costs of running a computing asset relate to its resource capacity (i.e., CPU and RAM). This analysis is formalized in a cost model that organizations can use to make an optimal decision about which computing assets should migrate their workload away (i.e., be disconnected or discarded) and which ones should receive that workload.
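
    A hedged sketch of the kind of decision such a cost model supports: score each asset by running cost per unit of blended capacity and rank decommissioning candidates. The linear CPU/RAM blend and all figures are illustrative assumptions, not the paper's fitted model.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    annual_cost: float   # e.g. power plus maintenance, in currency units
    cpu_cores: int
    ram_gb: int

def cost_per_capacity(a: Asset, cpu_weight=0.5, ram_weight=0.5):
    """Lower is better: running cost per blended capacity unit."""
    capacity = cpu_weight * a.cpu_cores + ram_weight * a.ram_gb
    return a.annual_cost / capacity

fleet = [
    Asset("old-blade", annual_cost=3200, cpu_cores=8, ram_gb=32),
    Asset("new-node", annual_cost=2800, cpu_cores=32, ram_gb=128),
]
# Rank candidates for workload migration / decommissioning:
# most expensive per unit of capacity first.
for a in sorted(fleet, key=cost_per_capacity, reverse=True):
    print(a.name, round(cost_per_capacity(a), 2))
```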

    Load balancing of Java applications by forecasting garbage collections

    Modern computer applications, especially at enterprise level, are commonly deployed with a large number of clustered instances to achieve higher system performance, since single-machine solutions are less cost-effective at that scale. However, effectively managing these clustered applications has become a new challenge. A common approach is to deploy a front-end load balancer to optimise the workload distribution between the clustered instances, and many research efforts have studied load balancing algorithms that control the workload based on various resource usages such as CPU and memory. The aim of this paper is to propose a new load balancing approach that improves overall distributed system performance by avoiding the potential performance impact of major Java Garbage Collection. The experimental results show that the proposed load balancing algorithm achieves significantly higher throughput and lower response time compared to the round-robin approach. In addition, the proposed solution introduces only a small overhead to the distributed system, leaving resources available for other load balancing algorithms to be combined with it for even better system performance.
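
    A minimal sketch of GC-aware routing in the spirit of the abstract: forecast when each backend's old generation will fill and steer traffic away from nodes about to run a major collection. The linear-growth forecast, the routing horizon and all figures are illustrative assumptions, not the paper's algorithm.

```python
def seconds_until_major_gc(heap_used_mb, heap_limit_mb, growth_mb_per_s):
    """Naive linear forecast of time until the old generation fills."""
    if growth_mb_per_s <= 0:
        return float("inf")
    return (heap_limit_mb - heap_used_mb) / growth_mb_per_s

def pick_backend(backends, horizon_s=5.0):
    """backends: list of (name, used_mb, limit_mb, growth_mb_per_s).
    Prefer nodes not expected to major-GC within the routing horizon."""
    safe = [b for b in backends
            if seconds_until_major_gc(b[1], b[2], b[3]) > horizon_s]
    pool = safe or backends   # fall back if every node is close to GC
    return max(pool, key=lambda b: seconds_until_major_gc(b[1], b[2], b[3]))[0]

nodes = [("app-1", 900, 1024, 40),   # ~3.1 s to major GC: avoid
         ("app-2", 400, 1024, 20)]   # ~31 s to major GC: prefer
print(pick_backend(nodes))           # app-2
```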

    An experimental methodology to evaluate energy efficiency and performance in an enterprise virtualized environment

    Computing servers generally have a narrow dynamic power range: even completely idle servers consume between 50% and 70% of their peak power. Since the usage rate of a server has the main influence on its power consumption, energy efficiency is achieved when the utilization of the servers that are powered on reaches its peak. For this purpose, enterprises generally adopt the following technique: consolidate as many workloads as possible via virtualization onto a minimum number of servers (i.e. maximize utilization) and power down the ones that remain idle (i.e. reduce power consumption). However, such an approach can severely impact servers' performance and reliability. In this paper, we propose a methodology to determine the ideal values of power consumption and utilization for a server without performance degradation. We accomplish this through a series of experiments using two typical types of workloads commonly found in enterprises: the TPC-H and SPECpower_ssj2008 benchmarks. We use the first to measure the number of queries answered successfully per hour for different numbers of users (i.e. Throughput@Size) in the VM, and the latter to measure the power consumption and the number of operations successfully handled by a VM at different target loads. We conducted experiments varying the utilization level and the number of users for different VMs, and the results show that it is possible to reach a server's maximum power consumption without experiencing performance degradation when running individual or mixed workloads.
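
    A small sketch of the analysis the methodology implies: compute operations per watt at each measured target load and pick the highest utilization that still meets its throughput target. All numbers below are invented, not taken from the paper's TPC-H or SPECpower_ssj2008 runs.

```python
# Each tuple: (target_load_%, measured_ops_per_s, target_ops_per_s, power_w)
runs = [
    (40, 40_000, 40_000, 180),
    (70, 70_000, 70_000, 230),
    (90, 89_500, 90_000, 260),
    (100, 93_000, 100_000, 280),   # throughput falls short: degraded
]

def efficient_operating_point(runs, slack=0.01):
    """Highest utilization whose throughput is within `slack` of target;
    also returns its energy efficiency in operations per watt."""
    ok = [(load, ops / watts)
          for load, ops, target, watts in runs
          if ops >= target * (1 - slack)]
    return max(ok)   # tuples compare by load first

load, ops_per_watt = efficient_operating_point(runs)
print(load, round(ops_per_watt, 1))   # 90 344.2
```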