
    Test case prioritization using test case diversification and fault-proneness estimations

    Context: Regression testing activities greatly reduce the risk of releasing faulty software. However, the size of the test suite grows throughout the development process, resulting in time-consuming test-suite execution and delayed feedback to the software development team. This has created the need for approaches such as test case prioritization (TCP) and test-suite reduction, which achieve better results when resources are limited. In this regard, approaches that use auxiliary sources of data, such as bug history, are of particular interest. Objective: Our aim is to propose an approach for TCP that takes into account test case coverage data, bug history, and test case diversification. To evaluate this approach, we study its performance on real-world open-source projects. Method: The bug history is used to estimate the fault-proneness of source code areas. The diversification of test cases is preserved by incorporating fault-proneness into a clustering-based prioritization scheme. Results: The proposed methods are evaluated on datasets collected from the development history of five real-world projects, comprising 357 versions in total. The experiments show that the proposed methods are superior to coverage-based TCP methods. Conclusion: The proposed approach shows that coverage-based and fault-proneness-based methods can be improved by combining diversification with fault-proneness estimation.
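
    The abstract does not spell out the algorithm, but a minimal sketch of the general idea (diversity via clustering of coverage profiles, ordering by estimated fault-proneness) could look as follows; the function names, the greedy single-pass clustering, and the similarity threshold are illustrative assumptions, not the authors' implementation.

        def jaccard(a, b):
            """Jaccard similarity between two coverage sets."""
            return len(a & b) / len(a | b) if a | b else 0.0

        def prioritize(coverage, fault_proneness, sim_threshold=0.6):
            """coverage: {test_id: set of covered code units};
            fault_proneness: {code unit: score estimated from bug history}."""
            clusters = []  # greedy single-pass clustering by coverage similarity
            for t, cov in coverage.items():
                for c in clusters:
                    if jaccard(cov, coverage[c[0]]) >= sim_threshold:
                        c.append(t)
                        break
                else:
                    clusters.append([t])

            def score(t):  # total fault-proneness of the code a test touches
                return sum(fault_proneness.get(u, 0.0) for u in coverage[t])

            for c in clusters:  # order tests inside each cluster by score
                c.sort(key=score, reverse=True)

            order = []  # round-robin across clusters preserves diversity
            while any(clusters):
                for c in clusters:
                    if c:
                        order.append(c.pop(0))
            return order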

    Prioritizing MCDC test cases by spectral analysis of Boolean functions

    Test case prioritization aims at scheduling test cases in an order that improves some performance goal. One performance goal is a measure of how quickly faults are detected. Such prioritization can be performed by exploiting the Fault Exposing Potential (FEP) parameters associated with the test cases. FEP is usually approximated by mutation analysis under certain fault assumptions. Although this technique is effective, it can be relatively expensive compared to other prioritization techniques. This study proposes a cost-effective FEP approximation for prioritizing Modified Condition Decision Coverage (MCDC) test cases. A strict negative correlation between the FEP of an MCDC test case and the influence value of the associated input condition makes it possible to order the test cases easily, without the need for an extensive mutation analysis. The method is entirely based on mathematics and provides useful insight into how spectral analysis of Boolean functions can benefit software testing.
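
    For intuition, the influence of an input condition of a Boolean decision can be computed by exhaustive enumeration and the conditions ordered by it; this is only a rough sketch of the underlying notion, and the example decision, the function names, and the ordering direction (low influence first) rely on the paper's reported negative correlation between influence and FEP.

        from itertools import product

        def influence(f, n, i):
            """Fraction of inputs for which flipping condition i changes f's output."""
            flips = 0
            for bits in product([0, 1], repeat=n):
                flipped = list(bits)
                flipped[i] ^= 1
                if f(*bits) != f(*flipped):
                    flips += 1
            return flips / 2 ** n

        def order_conditions(f, n):
            # low influence ~ high fault-exposing potential under the
            # negative-correlation assumption, so schedule those first
            return sorted(range(n), key=lambda i: influence(f, n, i))

        decision = lambda a, b, c: (a and b) or c   # example decision
        print(order_conditions(decision, 3))        # [0, 1, 2]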

    Hybrid and dynamic static criteria models for test case prioritization of web application regression testing

    In the software testing domain, different techniques and approaches are used to support the process of regression testing in an effective way. The main approaches include test case minimization, test case selection, and test case prioritization. Test case prioritization techniques improve the performance of regression testing by arranging test cases so that maximum fault detection is achieved in a shorter time. However, the main problems in web application testing are the time needed to execute test cases and the number of faults detected. The aim of this study is to increase the effectiveness of test case prioritization by proposing an approach that detects faults earlier at a shorter execution time. This research proposed an approach comprising two models: the Hybrid Static Criteria Model (HSCM) and the Dynamic Weighting Static Criteria Model (DWSCM). Each model applied three criteria: most common HTTP requests in pages, length of HTTP request chains, and dependency of HTTP requests. These criteria are used to prioritize test cases for web application regression testing. The proposed HSCM utilized a clustering technique to group test cases. A hybridized technique was proposed to prioritize test cases based on the priorities assigned by combining the aforementioned criteria. A dynamic weighting scheme over the criteria was used to increase the fault detection rate. The findings revealed that the models improved the Average Percentage of Fault Detection (APFD), yielding the highest APFD of 98% for DWSCM and 87% for HSCM, thereby improving the effectiveness of the prioritization models. The findings confirmed the ability of the proposed techniques to improve web application regression testing.
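
    For reference, the APFD measure reported in this and the following abstracts is conventionally defined as

        APFD = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n \cdot m} + \frac{1}{2n}

    where n is the number of test cases in the prioritized suite, m is the number of detected faults, and TF_i is the position of the first test case that reveals fault i.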

    Test case prioritization technique based on string distance metrics

    Numerous test case prioritization (TCP) approaches have been introduced to enhance test viability in software testing, with the goal of maximizing the early average percentage of fault detection (APFD). The approaches differ, and the process for each varies; furthermore, these processes are not well documented within any single TCP approach. Based on current studies, achieving an approach with both a high coverage effectiveness (CE) and a high APFD rate remains a challenge in TCP. The string-based approach is known to use a single string-distance-based metric to differentiate test cases, which can improve CE results. However, to differentiate the test cases precisely, the string distances require enhancement. Therefore, a string-distance-based TCP technique was developed to improve the CE and APFD rates. In this research, to differentiate the test cases precisely and counter the string-distance problem, an enhanced string-distance-based metric combined with a string-weight-based metric was introduced. The metric was then executed under the designed string-based prioritization process for a complete evaluation. Experimental results showed that the enhanced string metric had the highest APFD (98.56%) and the highest CE (69.82%) on the Siemens dataset cstcas. The technique also yielded the highest APFD (76.38%) in the Robotic Wheelchair System (RWS) case study. In conclusion, the enhanced TCP technique with the weight-based metric prioritizes test cases based on their occurrences, which helps to differentiate them precisely and improves the overall APFD and CE scores.
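
    A bare-bones version of a single-string-distance prioritization pass (without the paper's weighted enhancement) might look as follows; the farthest-first selection, the Levenshtein metric, and the string encoding of a test case are assumptions for illustration only.

        def levenshtein(a, b):
            """Edit distance between two test-case strings."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def farthest_first(tests):
            """tests: {test_id: string encoding of the test's inputs/steps}.
            Runs the most dissimilar test cases first."""
            remaining = dict(tests)
            order = [max(remaining, key=lambda t: len(remaining[t]))]  # arbitrary seed
            del remaining[order[0]]
            while remaining:
                nxt = max(remaining, key=lambda t: min(levenshtein(tests[t], tests[s]) for s in order))
                order.append(nxt)
                del remaining[nxt]
            return order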

    Applying test case prioritization to software microbenchmarks

    Regression testing comprises techniques which are applied during software evolution to uncover faults effectively and efficiently. While regression testing is widely studied for functional tests, performance regression testing, e.g., with software microbenchmarks, is hardly investigated. Applying test case prioritization (TCP), a regression testing technique, to software microbenchmarks may help capture large performance regressions sooner when new versions are released. This may be especially beneficial for microbenchmark suites, because they take considerably longer to execute than unit test suites. However, it is unclear whether traditional unit testing TCP techniques work equally well for software microbenchmarks. In this paper, we empirically study coverage-based TCP techniques, employing total and additional greedy strategies, applied to software microbenchmarks along multiple parameterization dimensions, leading to 54 unique technique instantiations. We find that TCP techniques have a mean APFD-P (average percentage of fault-detection on performance) effectiveness between 0.54 and 0.71 and are able to capture the three largest performance changes after executing 29% to 66% of the whole microbenchmark suite. Our efficiency analysis reveals that the runtime overhead of TCP varies considerably depending on the exact parameterization. The most effective technique has an overhead of 11% of the total microbenchmark suite execution time, making TCP a viable option for performance regression testing. The results demonstrate that the total strategy is superior to the additional strategy. Finally, dynamic-coverage techniques should be favored over static-coverage techniques due to their acceptable analysis overhead; however, in settings where the time for prioritization is limited, static-coverage techniques provide an attractive alternative.
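
    The two greedy strategies compared in the study are standard; a minimal sketch (the data layout and names are illustrative, not the authors' implementation) is:

        def total_greedy(coverage):
            # "total": rank benchmarks once by how much they cover overall
            # coverage: {benchmark_id: set of covered code units}
            return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

        def additional_greedy(coverage):
            # "additional": repeatedly pick the benchmark that adds the most
            # not-yet-covered units; reset once everything reachable is covered
            covered, remaining, order = set(), dict(coverage), []
            while remaining:
                best = max(remaining, key=lambda t: len(remaining[t] - covered))
                if covered and not (remaining[best] - covered):
                    covered = set()  # common reset variant of the additional strategy
                    continue
                order.append(best)
                covered |= remaining.pop(best)
            return order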

    Weighted string distance approach based on modified clustering technique for optimizing test case prioritization

    Numerous test case prioritization (TCP) approaches have been introduced to enhance test viability in software testing, with the goal of maximizing the early average percentage of fault detection (APFD). The string-based approach has shown that applying a single string-distance-based metric to differentiate the test cases can improve the APFD and coverage rate (CR) results. However, to precisely differentiate the test cases in regression testing, the string approach still requires enhancement, as it lacks priority criteria. Therefore, a study on how to effectively cluster and prioritize test cases through a string-based approach was conducted. To counter the string-distance problem, weighted string distances are introduced. A further enhancement was made by combining the weighted string metric with K-Means clustering and Firefly Algorithm (FA) prioritization, making the TCP approach more flexible in manipulating the available information. The combination of weighted string distances, clustering, and prioritization was then executed under the designed process as a new weighted-string-distance-based approach for a complete evaluation. The experimental results show that all the weighted string distances obtained better results than their single-string-metric counterparts, with an average APFD of 95.73% and an average CR of 61.80% on the cstcas Siemens dataset. As for the proposed weighted-string-distance approach with clustering techniques for regression testing, the combination obtained better results and more flexibility than the conventional string approach. In addition, the proposed approach passed statistical assessment, obtaining a p-value higher than 0.05 in the Shapiro-Wilk normality test and p-values lower than 0.05 in the Tukey-Kramer post hoc tests. In conclusion, the proposed weighted-string-distance approach improves the overall APFD and CR scores and provides flexibility in the TCP approach for a regression testing environment.
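
    The clustering half of such an approach can be sketched roughly as follows, embedding each test-case string as a character-frequency vector, grouping with K-Means, and drawing tests round-robin from the clusters; the weighting scheme and the Firefly Algorithm step inside each cluster are not reproduced, and the vectorization choice and function names are assumptions.

        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import CountVectorizer

        def cluster_and_order(tests, k=3):
            """tests: {test_id: string representation of the test case}."""
            ids = list(tests)
            X = CountVectorizer(analyzer="char").fit_transform([tests[t] for t in ids])
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            clusters = {}
            for t, lab in zip(ids, labels):
                clusters.setdefault(lab, []).append(t)
            order, buckets = [], list(clusters.values())
            while any(buckets):  # round-robin across clusters for diversity
                for b in buckets:
                    if b:
                        order.append(b.pop(0))
            return order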
