519 research outputs found

    Prioritization of combinatorial test cases by incremental interaction coverage

    Get PDF
    Combinatorial testing is a well-recognized testing method, and has been widely applied in practice. To facilitate analysis, a common approach is to assume that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, the order of executing the test cases is critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage, which prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered by parameter interactions of small strengths, we propose a new strategy of prioritizing combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method performs better than the random prioritization technique and the technique of prioritizing combinatorial test suites according to test case generation order, and has better performance than the interaction-coverage-based test prioritization technique in most cases.
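
    To make the coverage-based greedy loop concrete, here is a minimal Python sketch of prioritization by incrementally adjusted strength: pick the test covering the most uncovered value combinations at strength t, and raise t once everything at that strength is covered. All names are illustrative, and the incremental rule is a simplification of, not a transcription of, the paper's strategy.

        from itertools import combinations

        def combos(test, t):
            """All t-wise (parameter index, value) combinations one test covers."""
            return {tuple(zip(idx, (test[i] for i in idx)))
                    for idx in combinations(range(len(test)), t)}

        def prioritize_incremental(tests, max_strength=3):
            """Greedy prioritization by uncovered t-wise coverage, raising t
            whenever every t-wise combination present in the suite is covered."""
            remaining, ordered, t = list(tests), [], 1
            covered = set()
            universe = set().union(*(combos(x, t) for x in tests))
            while remaining:
                best = max(remaining, key=lambda x: len(combos(x, t) - covered))
                ordered.append(best)
                remaining.remove(best)
                covered |= combos(best, t)
                if universe <= covered and t < max_strength:
                    t += 1
                    covered = set().union(*(combos(x, t) for x in ordered))
                    universe = set().union(*(combos(x, t) for x in tests))
            return ordered

        # toy model: three binary parameters
        print(prioritize_incremental([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)],
                                     max_strength=2))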

    Search algorithms for regression test case prioritization

    Get PDF
    Regression testing is an expensive, but important, process. Unfortunately, there may be insufficient resources to allow for the re-execution of all test cases during regression testing. In this situation, test case prioritisation techniques aim to improve the effectiveness of regression testing, by ordering the test cases so that the most beneficial are executed first. Previous work on regression test case prioritisation has focused on Greedy Algorithms. However, it is known that these algorithms may produce sub-optimal results, because they may construct results that denote only local minima within the search space. By contrast, meta-heuristic and evolutionary search algorithms aim to avoid such problems. This paper presents results from an empirical study of the application of several greedy, meta-heuristic and evolutionary search algorithms to six programs, ranging from 374 to 11,148 lines of code, for three choices of fitness metric. The paper addresses the problems of choice of fitness metric, characterisation of landscape modality and determination of the most suitable search technique to apply. The empirical results replicate previous results concerning Greedy Algorithms. They shed light on the nature of the regression testing search space, indicating that it is multi-modal. The results also show that Genetic Algorithms perform well, although Greedy approaches are surprisingly effective, given the multi-modal nature of the landscape.
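
    The fitness metrics in such studies are typically averages over the coverage curve of an ordering. As a sketch of the setup (simplified, with illustrative names; the abstract does not name its three metrics, so this is an assumed coverage-based form), the following computes an average-percentage-of-coverage style fitness and hill-climbs over pairwise swaps:

        import random

        def apc(order, coverage):
            """Average-percentage-of-coverage fitness for one ordering.
            coverage[t] is the set of program elements test t covers;
            assumes every element is covered by some test in the ordering."""
            elements = set().union(*coverage.values())
            n, m = len(order), len(elements)
            first = {}
            for pos, t in enumerate(order, start=1):
                for e in coverage[t]:
                    first.setdefault(e, pos)
            return 1 - sum(first[e] for e in elements) / (n * m) + 1 / (2 * n)

        def hill_climb(order, coverage, steps=1000):
            """First-ascent hill climbing over pairwise swaps of the ordering."""
            best, best_fit = list(order), apc(order, coverage)
            for _ in range(steps):
                i, j = random.sample(range(len(best)), 2)
                cand = list(best)
                cand[i], cand[j] = cand[j], cand[i]
                fit = apc(cand, coverage)
                if fit > best_fit:
                    best, best_fit = cand, fit
            return best, best_fit

    A greedy baseline, a genetic algorithm, and this hill climber can then all be compared on the same fitness function, which is essentially the experimental design the abstract describes.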

    Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation

    Full text link
    Combinatorial interaction testing is an important software testing technique that has attracted considerable recent interest. It can reduce the number of test cases needed by considering interactions between combinations of input parameters. Empirical evidence shows that it effectively detects faults, in particular for highly configurable software systems. In real-world software testing, the input variables may vary in how strongly they interact; variable-strength combinatorial interaction testing (VS-CIT) can exploit this for higher effectiveness. The generation of variable-strength test suites is an NP-hard computational problem [BestounKamalFuzzy2017]. Research has shown that stochastic population-based algorithms such as particle swarm optimization (PSO) can be efficient compared to alternatives for VS-CIT problems. Nevertheless, they require detailed control of the exploitation and exploration trade-off to avoid premature convergence (i.e. being trapped in local optima) and to enhance solution diversity. Here, we present a new variant of PSO based on a Mamdani fuzzy inference system [Camastra2015, TSAKIRIDIS2017257, KHOSRAVANIAN2016280], to permit adaptive selection of its global and local search operations. We detail the design of this combined algorithm and evaluate it through experiments on multiple synthetic and benchmark problems. We conclude that fuzzy adaptive selection of global and local search operations is, at least, feasible, as it performs only second-best to a discrete variant of PSO, called DPSO. Concerning the best mean test suite size, the fuzzy adaptation even outperforms DPSO occasionally. We discuss the reasons behind this performance and outline relevant areas of future work.
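
    As background to the adaptive-control idea, here is a bare-bones continuous PSO in Python in which a diversity-driven inertia weight stands in for the Mamdani fuzzy controller; the actual fuzzy inference and the discrete CIT encoding are not reproduced, and all names are illustrative.

        import random

        def pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0):
            """Bare-bones PSO minimizing f; the inertia weight w adapts to
            swarm spread as a crude stand-in for the fuzzy controller."""
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
            vel = [[0.0] * dim for _ in range(n)]
            pbest, pfit = [p[:] for p in pos], [f(p) for p in pos]
            g = min(range(n), key=lambda i: pfit[i])
            gbest, gfit = pbest[g][:], pfit[g]
            for _ in range(iters):
                spread = max(max(p[d] for p in pos) - min(p[d] for p in pos)
                             for d in range(dim))
                # wide swarm -> keep exploring (large w); collapsed -> exploit
                w = 0.4 + 0.5 * min(1.0, spread / (hi - lo))
                for i in range(n):
                    for d in range(dim):
                        vel[i][d] = (w * vel[i][d]
                                     + 2.0 * random.random() * (pbest[i][d] - pos[i][d])
                                     + 2.0 * random.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    fit = f(pos[i])
                    if fit < pfit[i]:
                        pbest[i], pfit[i] = pos[i][:], fit
                        if fit < gfit:
                            gbest, gfit = pos[i][:], fit
            return gbest, gfit

        best, val = pso(lambda x: sum(v * v for v in x), dim=3)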

    New metrics for prioritized interaction test suites

    Get PDF
    Combinatorial interaction testing has been well studied in recent years, and has been widely applied in practice. It generally aims at generating an effective test suite (an interaction test suite) in order to identify faults that are caused by parameter interactions. Due to some constraints in practical applications (e.g. limited testing resources), for example in combinatorial interaction regression testing, prioritized interaction test suites (called interaction test sequences) are often employed. Consequently, many strategies have been proposed to guide the interaction test suite prioritization. It is, therefore, important to be able to evaluate the different interaction test sequences that have been created by different strategies. A well-known metric is the Average Percentage of Combinatorial Coverage (APCCλ for short), which assesses the rate of interaction coverage of a strength λ (level of interaction among parameters) covered by a given interaction test sequence S. However, APCCλ has two drawbacks: firstly, it has two requirements (that all test cases in S be executed, and that all possible λ-wise parameter value combinations be covered by S); and secondly, it can only use a single strength λ (rather than multiple strengths) to evaluate the interaction test sequence, which means that it is not a comprehensive evaluation. To overcome the first drawback, we propose an enhanced metric, Normalized APCCλ (NAPCC), to replace APCCλ. Additionally, to overcome the second drawback, we propose three new metrics: the Average Percentage of Strengths Satisfied (APSS); the Average Percentage of Weighted Multiple Interaction Coverage (APWMIC); and the Normalized APWMIC (NAPWMIC). These metrics comprehensively assess a given interaction test sequence by considering interaction coverage at different strengths. Empirical studies show that the proposed metrics can be used to distinguish different interaction test sequences, and hence can be used to compare different test prioritization strategies.
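
    On one common reading, APCCλ is the area under the λ-wise coverage curve of the sequence. The sketch below computes that, plus a normalized variant that measures against what the sequence itself can cover, which is one way to drop the two requirements noted above; the paper's exact normalization may differ, and all names are illustrative.

        from itertools import combinations

        def lam_combos(test, lam):
            """λ-wise (parameter index, value) combinations one test covers."""
            return {tuple(zip(idx, (test[i] for i in idx)))
                    for idx in combinations(range(len(test)), lam)}

        def apcc(sequence, lam, total):
            """Average fraction of 'total' λ-wise combinations covered after
            each executed test, i.e. the area under the coverage curve."""
            covered, area = set(), 0
            for test in sequence:
                covered |= lam_combos(test, lam)
                area += len(covered)
            return area / (len(sequence) * total)

        def napcc(sequence, lam):
            """Normalize against what the sequence itself can reach, so full
            execution and full combination coverage are no longer required."""
            reachable = set().union(*(lam_combos(t, lam) for t in sequence))
            return apcc(sequence, lam, len(reachable))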

    Combinatorial-Based Prioritization for User-Session-Based Test Suites

    Get PDF
    Software defects caused by inadequate software testing can cost billions of dollars. Further, web application defects can be costly due to the fact that most web applications handle constant user interaction. However, software testing is often under time and budget constraints. By improving the time efficiency of software testing, many of the costs associated with defects can be saved. Current methods for web application testing can take too long to generate test suites. In addition, studies have shown that user-session-based test suites often find faults missed by other testing techniques. This project addresses this problem by utilizing existing user sessions for web application testing. The software testing method provided within this project utilizes previous knowledge about combinatorial coverage testing and improves time and computer memory efficiency by only considering test cases that exist in a user-session-based test suite. The method takes the existing test suite and prioritizes the test cases based on a specific combinatorial criterion. In addition, this project presents an empirical study examining the application of the newly proposed combinatorial prioritization algorithm on an existing web application.
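
    A minimal sketch of the idea, assuming each recorded user session is reduced to a map of request fields to values: sessions are ordered greedily by the 2-way combinations they add, and no test cases outside the recorded suite are ever created. Names and the session model are assumptions for illustration.

        from itertools import combinations

        def pairs(session):
            """Distinct 2-way (field, value) combinations in one user session,
            modeled as a dict mapping request fields to values."""
            return set(combinations(sorted(session.items()), 2))

        def prioritize_sessions(sessions):
            """Greedy ordering of recorded sessions by uncovered pairwise
            combinations; no new test cases are generated."""
            remaining, ordered, covered = list(sessions), [], set()
            while remaining:
                best = max(remaining, key=lambda s: len(pairs(s) - covered))
                ordered.append(best)
                remaining.remove(best)
                covered |= pairs(best)
            return ordered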

    Aggregate-strength interaction test suite prioritization

    Get PDF
    Combinatorial interaction testing is a widely used approach. In testing, it is often assumed that all combinatorial test cases have equal fault detection capability; however, it has been shown that the execution order of an interaction test suite's test cases may be critical, especially when the testing resources are limited. To improve testing cost-effectiveness, test cases in the interaction test suite can be prioritized, and one of the best-known categories of prioritization approaches is based on "fixed-strength prioritization", which prioritizes an interaction test suite by choosing new test cases with the highest uncovered interaction coverage at a fixed strength (level of interaction among parameters). A drawback of these approaches, however, is that, when selecting each test case, they only consider a fixed strength, not multiple strengths. To overcome this, we propose a new "aggregate-strength prioritization", to combine interaction coverage at different strengths. Experimental results show that in most cases our method performs better than the test-case-generation, reverse test-case-generation, and random prioritization techniques. The method also usually outperforms "fixed-strength prioritization", while maintaining a similar time cost.
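
    One plausible form of the aggregation is a weighted sum of coverage gains across strengths, as sketched below; the paper's exact combination rule may differ, and the names are illustrative.

        from itertools import combinations

        def t_combos(test, t):
            """t-wise (parameter index, value) combinations one test covers."""
            return {tuple(zip(idx, (test[i] for i in idx)))
                    for idx in combinations(range(len(test)), t)}

        def aggregate_score(test, covered, weights):
            """Weighted sum, over several strengths t, of the combinations this
            test would newly cover; covered[t] holds what is already covered."""
            return sum(w * len(t_combos(test, t) - covered[t])
                       for t, w in weights.items())

    A greedy prioritizer would then repeatedly pick the remaining test with the highest aggregate score and fold its combinations into each covered[t].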

    A Comparison of Test Case Prioritization Criteria for Software Product Lines

    Get PDF
    Software Product Line (SPL) testing is challenging due to the potentially huge number of derivable products. To alleviate this problem, numerous contributions have been proposed to reduce the number of products to be tested while still achieving good coverage. However, not much attention has been paid to the order in which the products are tested. Test case prioritization techniques reorder test cases to meet a certain performance goal. For instance, testers may wish to order their test cases so as to detect faults as soon as possible, which would translate into faster feedback and earlier fault correction. In this paper, we explore the applicability of test case prioritization techniques to SPL testing. We propose five different prioritization criteria based on common metrics of feature models, and we compare their effectiveness in increasing the rate of early fault detection, i.e. a measure of how quickly faults are detected. The results show that different orderings of the same SPL suite may lead to significant differences in the rate of early fault detection. They also show that our approach may help accelerate the detection of faults in SPL test suites based on combinatorial testing. Funding: Ministerio de Ciencia e Innovación TIN2009-07366 (SETI); Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía P10-TIC-590.
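
    The rate of early fault detection is conventionally measured with APFD (Average Percentage of Faults Detected); assuming this paper uses APFD or a close variant, the measure for an ordering of n products against m known faults is

        APFD = 1 - (TF_1 + TF_2 + ... + TF_m) / (n * m) + 1 / (2 * n)

    where TF_i is the position, in the ordering, of the first product that reveals fault i. Higher values mean faults are hit earlier in the run, so prioritization criteria can be compared directly by their APFD scores.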

    Input Modeling Prioritization Using Statistically User Profile for Pairwise Test Case Generation with Constraints Handling

    Get PDF
    Pairwise testing is a widely used software testing technique that reduces the size of the test suite while still detecting the interactions that trigger a system's faults. In addition, pairwise test suites must be able to take constraints between input parameters and parameter values into account. In current practice, identifying and selecting input parameters and parameter values usually depends on tester skills, which might not be sufficient. Modeling of input parameters and parameter values, and tools for easily guiding and prioritizing the selection of optimal input parameters and parameter values for the SUT, are also required. In this work, we present an approach for prioritizing the modeling of input parameters and parameter values using a statistical user profile. Our approach is implemented in a tool called UPPTCT, which provides the ability to handle constraints on input parameters and parameter values for pairwise testing in order to generate test cases. We conduct experiments to evaluate test case effectiveness and compare our tool with other renowned pairwise test generation and constraint handling tools. The experimental results show that our approach is significantly more efficient and effective than random testing, as a large portion of the defects reported against the statistical user profile were caught by our approach. Furthermore, our tool performs better in some cases, and produces comparable results when generating test cases from input parameters and parameter values, both with and without constraint handling.
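
    For scale, a brute-force version of constrained pairwise generation fits in a few lines: enumerate the valid full assignments, then greedily pick whichever covers the most uncovered parameter-value pairs. This is a sketch of the underlying problem, not of UPPTCT itself; real tools avoid enumerating the full product space, and the example constraint is invented.

        from itertools import combinations, product

        def pairwise_suite(domains, forbidden):
            """Greedy pairwise generation over tiny models: enumerate valid full
            assignments, then pick whichever covers the most uncovered pairs."""
            def valid(a):
                return not any(bad(a) for bad in forbidden)
            def pairs(a):
                return {((i, a[i]), (j, a[j]))
                        for i, j in combinations(range(len(a)), 2)}
            candidates = [c for c in product(*domains) if valid(c)]
            universe = set().union(*(pairs(c) for c in candidates))
            suite, covered = [], set()
            while covered < universe:
                best = max(candidates, key=lambda c: len(pairs(c) - covered))
                suite.append(best)
                covered |= pairs(best)
            return suite

        # hypothetical constraint: value 'a' is never combined with value 'x'
        print(pairwise_suite([['a', 'b'], [0, 1], ['x', 'y']],
                             [lambda t: t[0] == 'a' and t[2] == 'x']))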

    Practical Combinatorial Interaction Testing: Empirical Findings on Efficiency and Early Fault Detection

    Get PDF
    Combinatorial interaction testing (CIT) is important because it tests the interactions between the many features and parameters that make up the configuration space of software systems. Simulated Annealing (SA) and Greedy Algorithms have been widely used to find CIT test suites. From the literature, there is a widely-held belief that SA is slower, but produces more effective test suites than Greedy, and that SA cannot scale to higher strength coverage. We evaluated both algorithms on seven real-world subjects for the well-studied two-way up to the rarely-studied six-way interaction strengths. Our findings present evidence to challenge this current orthodoxy: real-world constraints allow SA to achieve higher strengths. Furthermore, there was no evidence that Greedy was less effective (in terms of time to fault revelation) compared to SA; the results for the greedy algorithm are actually slightly superior. However, the results are critically dependent on the approach adopted to constraint handling. Moreover, we have also evaluated a genetic algorithm for constrained CIT test suite generation. This is the first time that strengths higher than three, together with constraint handling, have been used to evaluate a GA. Our results show that the GA is competitive only for pairwise testing on subjects with a small number of constraints.
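
    The SA side of the comparison follows the standard annealing skeleton; in CIT generators the cost would typically count uncovered t-way combinations and, depending on the constraint-handling approach, penalize constraint-violating rows. The skeleton below is generic and illustrative, not a reconstruction of the tools evaluated.

        import math, random

        def anneal(initial, neighbor, cost, temp=1.0, cooling=0.995, steps=20000):
            """Generic simulated annealing: accept worse neighbors with a
            probability that shrinks as the temperature cools."""
            cur, cur_cost = initial, cost(initial)
            best, best_cost = cur, cur_cost
            for _ in range(steps):
                cand = neighbor(cur)
                cand_cost = cost(cand)
                if (cand_cost <= cur_cost or
                        random.random() < math.exp((cur_cost - cand_cost) / temp)):
                    cur, cur_cost = cand, cand_cost
                    if cur_cost < best_cost:
                        best, best_cost = cur, cur_cost
                temp *= cooling
            return best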

    Using food-web theory to conserve ecosystems

    Get PDF
    Food-web theory can be a powerful guide to the management of complex ecosystems. However, we show that indices of species importance common in food-web and network theory can be a poor guide to ecosystem management, resulting in significantly more extinctions than necessary. We use Bayesian Networks and Constrained Combinatorial Optimization to find optimal management strategies for a wide range of real and hypothetical food webs. This Artificial Intelligence approach provides the ability to test the performance of any index for prioritizing species management in a network. While no single network theory index provides an appropriate guide to management for all food webs, a modified version of the Google PageRank algorithm reliably minimizes the chance and severity of negative outcomes. Our analysis shows that by prioritizing ecosystem management based on the network-wide impact of species protection rather than species loss, we can substantially improve conservation outcomes.
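
    For reference, the unmodified PageRank idea that the paper builds on can be stated in a few lines of Python; the paper's food-web-specific modification (ranking species by the network-wide impact of protecting them) is not reproduced here.

        def pagerank(links, damping=0.85, iters=100):
            """Power-iteration PageRank over {node: [successor nodes]}."""
            nodes = set(links) | {v for vs in links.values() for v in vs}
            n = len(nodes)
            rank = {u: 1.0 / n for u in nodes}
            for _ in range(iters):
                new = {u: (1 - damping) / n for u in nodes}
                for u in nodes:
                    outs = links.get(u, [])
                    if outs:
                        for v in outs:
                            new[v] += damping * rank[u] / len(outs)
                    else:  # dangling node: spread its rank uniformly
                        for v in nodes:
                            new[v] += damping * rank[u] / n
                rank = new
            return rank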