    Applying test case prioritization to software microbenchmarks

    Regression testing comprises techniques that are applied during software evolution to uncover faults effectively and efficiently. While regression testing is widely studied for functional tests, performance regression testing, e.g., with software microbenchmarks, is hardly investigated. Applying test case prioritization (TCP), a regression testing technique, to software microbenchmarks may help capture large performance regressions sooner when new versions are released. This may be especially beneficial for microbenchmark suites, because they take considerably longer to execute than unit test suites. However, it is unclear whether traditional unit testing TCP techniques work equally well for software microbenchmarks. In this paper, we empirically study coverage-based TCP techniques, employing total and additional greedy strategies, applied to software microbenchmarks along multiple parameterization dimensions, leading to 54 unique technique instantiations. We find that TCP techniques have a mean APFD-P (average percentage of fault-detection on performance) effectiveness between 0.54 and 0.71 and are able to capture the three largest performance changes after executing 29% to 66% of the whole microbenchmark suite. Our efficiency analysis reveals that the runtime overhead of TCP varies considerably depending on the exact parameterization. The most effective technique has an overhead of 11% of the total microbenchmark suite execution time, making TCP a viable option for performance regression testing. The results demonstrate that the total strategy is superior to the additional strategy. Finally, dynamic-coverage techniques should be favored over static-coverage techniques due to their acceptable analysis overhead; however, in settings where the time for prioritization is limited, static-coverage techniques provide an attractive alternative.
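    To make the two greedy strategies concrete, the following is a minimal sketch, not the paper's implementation; the benchmark names and coverage sets are invented for illustration. It assumes coverage is available as a mapping from each microbenchmark to the set of code units it exercises: the total strategy ranks benchmarks once by overall coverage, while the additional strategy repeatedly picks the benchmark that covers the most not-yet-covered units.

```python
# Hypothetical coverage data: benchmark name -> set of covered code units.
coverage = {
    "benchParse":  {"m1", "m2", "m3"},
    "benchEncode": {"m2", "m3"},
    "benchIO":     {"m4"},
}

def total_greedy(coverage):
    """Order benchmarks by total coverage, descending (ties by dict order)."""
    return sorted(coverage, key=lambda b: len(coverage[b]), reverse=True)

def additional_greedy(coverage):
    """Repeatedly pick the benchmark covering the most not-yet-covered units."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda b: len(remaining[b] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(total_greedy(coverage))       # ['benchParse', 'benchEncode', 'benchIO']
print(additional_greedy(coverage))  # ['benchParse', 'benchIO', 'benchEncode']
```

    The example shows why the two strategies can disagree: after benchParse is selected, benchEncode adds no new coverage, so the additional strategy prefers benchIO despite its smaller total coverage.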

    Test prioritization in continuous integration environments

    Two heuristics, diversity-based test prioritization (DBTP) and history-based test prioritization (HBTP), have been proposed separately in the literature, yet their combination has not been widely studied in continuous integration (CI) environments. The objective of this study is to catch regression faults earlier, allowing developers to integrate and verify their changes more frequently and continuously. To achieve this, we investigated six open-source projects, each of which includes several builds over a long time period. Findings indicate that previous failure knowledge has strong predictive power in CI environments and can be used to effectively prioritize tests. HBTP does not necessarily require a large amount of data, and its effectiveness improves to a certain degree with a larger history interval. DBTP can be used effectively during the early stages, when no historical data is available, and can also be combined with HBTP to improve its effectiveness. Among the investigated techniques, we found that history-based diversity using NCD Multiset is superior in terms of effectiveness but incurs relatively higher overhead in terms of method execution time. Test prioritization in CI environments can thus be performed effectively with negligible investment using previous failure knowledge, and its effectiveness can be further improved by considering dissimilarities among the tests.
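    The dissimilarity notion behind the diversity-based heuristic is normalized compression distance (NCD). The sketch below is not the study's implementation: it shows only the basic pairwise form (the NCD Multiset variant generalizes the measure to whole sets of tests), uses zlib as an assumed compressor, and the test strings are invented for illustration. Tests that exercise similar behavior compress well together and thus score a low NCD; a diversity-based ordering greedily selects the test farthest from those already chosen.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length, used as a practical stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data))

def ncd(x: str, y: str) -> float:
    """Pairwise normalized compression distance: ~0 for near-duplicates, ~1 for unrelated inputs."""
    bx, by = x.encode(), y.encode()
    cx, cy, cxy = c(bx), c(by), c(bx + by)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical test bodies: t1 and t2 are near-duplicates, t3 is unrelated.
t1 = "assertEquals(parse('a,b'), ['a', 'b'])"
t2 = "assertEquals(parse('x,y'), ['x', 'y'])"
t3 = "connection.open(); assertTrue(connection.isAlive())"
print(ncd(t1, t2))  # relatively low: similar tests
print(ncd(t1, t3))  # higher: dissimilar tests
```

    Because NCD needs no coverage instrumentation or execution history, this kind of measure is what lets DBTP prioritize tests in the early builds of a CI pipeline, before failure history has accumulated.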