    Improvements to Test Case Prioritisation considering Efficiency and Effectiveness on Real Faults

    Despite the best efforts of programmers and component manufacturers, software does not always work perfectly. To guard against this, developers write test suites that execute parts of the code and compare the expected result with the actual result. Over time, test suites become expensive to run for every change, which has led to optimisation techniques such as test case prioritisation. Test case prioritisation reorders test cases within the test suite with the goal of revealing faults as soon as possible. A substantial body of research has indicated that prioritised test suites can reveal faults faster but, owing to a lack of real fault repositories available for research, prior evaluations have often been conducted on artificial faults. This thesis investigates whether the use of artificial faults represents a threat to the validity of previous studies, and proposes new strategies for test case prioritisation that increase its effectiveness on real faults. An empirical evaluation of existing test case prioritisation strategies on real and artificial faults establishes that artificial faults provide unreliable results for real faults: on four occasions, a strategy would be considered no better than the baseline when using one fault type, but a significant improvement over the baseline when using the other. Moreover, the evaluation reveals that existing test case prioritisation strategies perform poorly on real faults, with no strategy significantly outperforming the baseline. Given the need to improve test case prioritisation for real faults, the thesis turns to techniques that have been shown to be effective on real faults. One such technique is defect prediction, which estimates the likelihood that a class contains a fault. This thesis proposes a test case prioritisation strategy, called G-Clef, that leverages defect prediction estimates to reorder test suites. While the evaluation of G-Clef indicates that it outperforms existing test case prioritisation strategies, the average predicted location of a faulty class is 13% of all classes in a system, which shows potential for improvement. Finally, the thesis conducts an investigative study of whether sentiments expressed in commit messages can be used to improve the defect prediction element of G-Clef. Throughout the course of this PhD, I created Kanonizo, an open-source tool for performing test case prioritisation on Java programs; all of the experiments and strategies used in this thesis were implemented in Kanonizo.
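    The abstract does not give G-Clef's internal details, but the core idea of reordering a suite by defect prediction estimates can be sketched: score each test by the predicted fault-proneness of the classes it covers and run the highest-scoring tests first. All names below (prioritise, coverage, defect_scores) are illustrative, not Kanonizo's API.

```python
# Minimal sketch of defect-prediction-guided prioritisation in the spirit
# of G-Clef (simplified; the thesis's actual algorithm may differ).

def prioritise(tests, coverage, defect_scores):
    """Order tests so those exercising likely-faulty classes run first.

    tests         -- list of test identifiers
    coverage      -- dict: test id -> set of class names it executes
    defect_scores -- dict: class name -> predicted fault probability
    """
    def score(test):
        return max((defect_scores.get(c, 0.0) for c in coverage[test]),
                   default=0.0)
    return sorted(tests, key=score, reverse=True)

# Hypothetical data: Parser is predicted most fault-prone.
coverage = {"testA": {"Parser"}, "testB": {"Lexer"}, "testC": {"Cache"}}
defect_scores = {"Parser": 0.82, "Lexer": 0.40, "Cache": 0.05}
print(prioritise(["testA", "testB", "testC"], coverage, defect_scores))
# -> ['testA', 'testB', 'testC']
```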

    Prioritization of combinatorial test cases by incremental interaction coverage

    Combinatorial testing is a well-recognized testing method and has been widely applied in practice. To facilitate analysis, a common approach is to assume that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, the order in which test cases are executed is critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage: it prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter-value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered by parameter interactions of small strength, we propose a new strategy that prioritizes combinatorial test cases by incrementally adjusting the strength value. Experimental results show that our method performs better than random prioritization and prioritization by test case generation order, and outperforms the interaction-coverage-based prioritization technique in most cases.
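    As a concrete illustration of the strategy described above, the following sketch (our own simplification, with assumed function names) greedily picks the test covering the most uncovered parameter-value combinations at strength 1, then raises the strength once everything at the current level is covered.

```python
from itertools import combinations

def t_way_combos(test, t):
    """All strength-t (parameter index, value) combinations in one test case."""
    return set(combinations(enumerate(test), t))

def incremental_prioritisation(suite, max_strength):
    """Greedy prioritisation with incrementally increasing interaction strength."""
    remaining, ordered = list(suite), []
    for t in range(1, max_strength + 1):
        # Combinations of strength t present in the suite but not yet covered.
        uncovered = set().union(*(t_way_combos(tc, t) for tc in suite))
        for tc in ordered:
            uncovered -= t_way_combos(tc, t)
        while remaining and uncovered:
            best = max(remaining,
                       key=lambda tc: len(t_way_combos(tc, t) & uncovered))
            remaining.remove(best)
            ordered.append(best)
            uncovered -= t_way_combos(best, t)
    return ordered + remaining

# Four test cases over three binary parameters, prioritised up to strength 2.
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(incremental_prioritisation(suite, max_strength=2))
```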

    Applying risk based testing methodology in MasterCard regression testing activity for MasterCard project

    The purpose of this project is to implement a Risk Based Testing methodology in the MasterCard regression suite. The project addresses a concern raised by the MasterCard counterpart over the constant increase in regression fixed cost: the ever-growing regression test suite requires more people to work on testing, which drives the fixed cost up. To overcome this problem, we decided to implement Risk Based Testing as a method for selecting priority test cases from the existing regression suite. The Risk Based Testing approach categorizes test cases by criticality and priority, which are defined using a Risk Exposure Factor. Risk Based Testing analysis is performed on MasterCard regression testing to identify critical and complex scenarios, and then to prioritize those scenarios with appropriate weightages. The criticality and prioritization criteria consider factors such as the frequency of group failures based on defect history, business criticality, and the functionality/services introduced in the specific release, together with identification and review of the functional specifications of Business as Usual (BAU) areas before the release starts. An initial estimate of time and effort is made from the number of scripts in each BAU. Once weightages are assigned by each analyst, a final ranking is produced by multiplying all weightages for a particular requirement. The Risk Exposure Factor is computed by multiplying the Impact, Probability, and Dependency factors under the criteria weightages above. Test cases with a high Risk Exposure Factor are called targeted test cases. The implementation of Risk Based Testing in the MasterCard regression suite proved effective at finding defects while reducing execution time: 100% of the defects found during the 15Q3 release execution were revealed by targeted test cases, with a 25% effort saving, which led to a 25% reduction in cost. In addition, the risk-based regression suite is used whenever there is a time constraint on execution, and as a smoke or sanity test of the system.
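    The Risk Exposure Factor computation described above is a straightforward multiplication; the sketch below (with invented group names and an assumed 1-5 scale for each factor) shows how targeted test cases would be ranked.

```python
def risk_exposure_factor(impact, probability, dependency):
    """Multiply the three analyst-assigned weightages for one test group."""
    return impact * probability * dependency

# Hypothetical regression groups scored 1-5 on each factor.
groups = {
    "clearing":      (5, 4, 3),   # business-critical, fails frequently
    "authorisation": (4, 2, 2),
    "reporting":     (2, 1, 1),
}
ranked = sorted(groups, key=lambda g: risk_exposure_factor(*groups[g]),
                reverse=True)
print(ranked)  # groups with the highest REF supply the targeted test cases
```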

    Leveraging user-session data to support Web application testing

    Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment

    A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general aviation avionics equipment operating in the 1980s. Practical problems of predicting these factors are examined. The usefulness and shortcomings of different approaches for modeling cost and reliability estimates are discussed, together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing CRM implications in the absence of reliable generalized predictive models.

    A Flexible and Non-Intrusive Approach for Computing Complex Structural Coverage Metrics

    Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.
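    The paper's preprocessing and source-to-bytecode mapping are not reproduced here, but the MC/DC obligation itself can be illustrated: a decision satisfies MC/DC when, for each condition, two observed executions differ in that condition alone and produce different decision outcomes. The following simplified (unique-cause) check over runtime observations is our own sketch, not the paper's implementation.

```python
from itertools import combinations

def mcdc_satisfied(observed):
    """Check unique-cause MC/DC over observed executions of one decision.

    observed -- set of (condition_tuple, decision_outcome) pairs collected
    at runtime, e.g. ((True, False), False).
    """
    n = len(next(iter(observed))[0])
    def independent(i):
        # Some pair of runs differs only in condition i with a flipped outcome.
        return any(a[0][i] != b[0][i] and a[1] != b[1]
                   and all(a[0][j] == b[0][j] for j in range(n) if j != i)
                   for a, b in combinations(observed, 2))
    return all(independent(i) for i in range(n))

# Executions of "a and b" seen so far: each condition independently
# flips the outcome, so MC/DC is satisfied.
runs = {((True, True), True), ((False, True), False), ((True, False), False)}
print(mcdc_satisfied(runs))  # True
```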

    COLLABORATIVE TESTING ACROSS SHARED SOFTWARE COMPONENTS

    Get PDF
    Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested, and maintained, these shared components are repeatedly manipulated. As a result, there are often significant overlaps and synergies across and among the test efforts of different component-based systems. In practice, however, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. As a result, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, done properly, testers of shared software components can save effort by avoiding redundant work, and can improve test effectiveness for each component, as well as for each component-based software system, by using information obtained when testing across multiple components. To achieve this goal, I developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying them. The dissertation research is organized in three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured exists. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes implementing different collaborative testing algorithms and applied them to large, actively developed software systems. This dissertation demonstrates the benefits of collaborative testing across component developers who share their components: with collaborative testing, researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
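    The dissertation's infrastructure is not detailed in this abstract; as a purely illustrative sketch of the data-sharing idea, testers could publish results keyed by (component, version, configuration) so that other teams skip configurations already exercised elsewhere. All names below are hypothetical.

```python
class SharedTestRegistry:
    """Toy shared store of test outcomes for component/version/config keys."""

    def __init__(self):
        self._results = {}  # (component, version, config) -> outcome

    def publish(self, component, version, config, outcome):
        self._results[(component, version, config)] = outcome

    def already_tested(self, component, version, config):
        return (component, version, config) in self._results

registry = SharedTestRegistry()
registry.publish("libparser", "2.1", "jdk8", "pass")
if registry.already_tested("libparser", "2.1", "jdk8"):
    print("reuse the published result")   # redundant run avoided
else:
    print("run the suite and publish the outcome")
```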