
    Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration

    Testing in Continuous Integration (CI) involves test case prioritization, selection, and execution at each cycle. Selecting the most promising test cases to detect bugs is hard when the impact of committed code changes is uncertain or when traceability links between code and tests are not available. This paper introduces Retecs, a new method for automatically learning test case selection and prioritization in CI, with the goal of minimizing the round-trip time between code commits and developer feedback on failed test cases. The Retecs method uses reinforcement learning to select and prioritize test cases according to their duration, time of last execution, and failure history. In a constantly changing environment, where new test cases are created and obsolete test cases are deleted, Retecs learns to prioritize error-prone test cases higher, guided by a reward function and by observing previous CI cycles. By applying Retecs to data extracted from three industrial case studies, we show for the first time that reinforcement learning enables fruitful automatic adaptive test case selection and prioritization in CI and regression testing.
    Comment: Spieker, H., Gotlieb, A., Marijan, D., & Mossige, M. (2017). Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration. In Proceedings of the 26th International Symposium on Software Testing and Analysis (ISSTA'17) (pp. 12-22). ACM.
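    As an illustration of the core loop, here is a minimal sketch of an agent that scores each test case from the three features named above (duration, time since last execution, failure history) and nudges its weights with a failure-based reward after each CI cycle. This is an assumption-laden toy, not the Retecs implementation, which uses tableau- and neural-network-based agents with several reward functions; the feature names and the linear update rule here are illustrative.

```python
import random

class LinearPrioritizer:
    """Toy RL-style prioritizer: linear scoring over three features,
    with weights nudged by a per-cycle reward. Illustrative only."""

    def __init__(self, lr=0.1):
        self.w = [0.0, 0.0, 0.0]  # one weight per feature
        self.lr = lr

    def _features(self, tc):
        return [tc["duration"], tc["cycles_since_run"], tc["recent_fail_rate"]]

    def score(self, tc):
        return sum(w * f for w, f in zip(self.w, self._features(tc)))

    def prioritize(self, test_cases):
        # Highest score runs first; random tie-breaking adds exploration.
        return sorted(test_cases,
                      key=lambda tc: (self.score(tc), random.random()),
                      reverse=True)

    def update(self, test_cases, failed_ids):
        # Reward: tests that failed should have ranked high, so push their
        # feature weights up; passing tests are pushed slightly down.
        for tc in test_cases:
            reward = 1.0 if tc["id"] in failed_ids else -0.1
            self.w = [w + self.lr * reward * f
                      for w, f in zip(self.w, self._features(tc))]

# One simulated CI cycle over a hypothetical six-test suite.
agent = LinearPrioritizer()
suite = [{"id": i, "duration": random.uniform(1, 10),
          "cycles_since_run": random.randint(0, 5),
          "recent_fail_rate": random.random()} for i in range(6)]
ordered = agent.prioritize(suite)
agent.update(suite, failed_ids={ordered[0]["id"]})  # pretend the top test failed
```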

    History Based Multi Objective Test Suite Prioritization in Regression Testing Using Genetic Algorithm

    Regression testing is an essential and expensive testing activity that recurs throughout the software development life cycle. Because regression testing requires executing many test cases, test case prioritization is needed to cope with resource constraints. Test case prioritization techniques schedule test cases in an order that increases the chance of early fault detection. In this paper, we propose a genetic-algorithm-based prioritization technique that uses the historical information of system-level test cases to order test cases so that the most severe faults are detected early. In addition, the proposed approach calculates a weight factor for each requirement to achieve customer satisfaction and to improve the rate of severe fault detection. To validate the proposed approach, we performed controlled experiments on industrial projects, which demonstrated the effectiveness of the approach in terms of the average percentage of faults detected (APFD).
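    For illustration, the sketch below evolves test orderings with a simple permutation GA whose fitness is plain APFD computed from historical fault data. The paper's actual fitness additionally accounts for fault severity and per-requirement weight factors, so the operators, parameters, and data layout here are assumptions.

```python
import random

def apfd(order, faults_of):
    """Average Percentage of Faults Detected for one ordering.
    faults_of maps test id -> set of historically observed fault ids."""
    all_faults = set().union(*faults_of.values())
    n, m = len(order), len(all_faults)
    first_pos = {}
    for pos, tc in enumerate(order, start=1):
        for f in faults_of[tc]:
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

def order_crossover(p1, p2):
    # Classic OX: keep a slice from p1, fill the rest in p2's order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [t for t in p2 if t not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def ga_prioritize(test_ids, faults_of, pop=30, gens=50, mut=0.2):
    population = [random.sample(test_ids, len(test_ids)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: apfd(o, faults_of), reverse=True)
        survivors = population[:pop // 2]          # elitist truncation
        children = []
        while len(survivors) + len(children) < pop:
            child = order_crossover(*random.sample(survivors, 2))
            if random.random() < mut:              # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return max(population, key=lambda o: apfd(o, faults_of))

# Hypothetical five-test suite with a small historical fault record.
history = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(),
           "t4": {"f1"}, "t5": {"f4"}}
best = ga_prioritize(list(history), history)
print(best, apfd(best, history))
```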

    Assigning Test Priority to Modules Using Code-Content and Bug History

    Regression testing is a process that is repeated after every change to the program. Prioritizing test cases is an important step during regression test execution. Several techniques exist that decide which test cases will run first according to their priority levels, increasing the probability of finding bugs earlier in the test life cycle. However, the algorithms used to select important test cases can get stuck in local minima and miss tests that might be important for a given change. To address this limitation, we propose a domain-specific model that assigns testing priority to classes in applications based on developers' priority judgments. Our technique takes an application's code content and bug history into consideration and relates these features to the overall class priority for testing. Finally, we test the proposed approach on a new (unseen) dataset of 20 instances. Comparing the predicted results with developers' priority scores shows that this metric correctly prioritizes 70% of the classes under test.
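    The general shape of such a model can be sketched as a supervised regressor trained on per-class code-content and bug-history features labeled with developer priority scores, then used to rank unseen classes. The features, numbers, and scikit-learn model below are illustrative assumptions, not the paper's actual model.

```python
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-class features: lines of code, cyclomatic complexity,
# past bug count, recent churn. Labels are developer priority scores in [0, 1].
X_train = [
    [1200, 35, 9, 14],
    [300, 8, 1, 2],
    [2500, 60, 15, 30],
    [150, 4, 0, 1],
]
y_train = [0.9, 0.3, 1.0, 0.1]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Rank unseen classes: the highest predicted priority is tested first.
candidates = {"OrderService": [900, 22, 6, 10], "StringUtils": [200, 5, 0, 1]}
ranked = sorted(candidates,
                key=lambda c: model.predict([candidates[c]])[0],
                reverse=True)
print(ranked)  # expected: OrderService before StringUtils
```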

    Improving regression testing transparency and efficiency with history-based prioritization – an industrial case study

    Background: History-based regression testing was proposed as a basis for automating regression test selection at the function test level in a large-scale software development organization, with the purpose of improving transparency and test efficiency. Aim: The study investigates the current manual regression testing process and adopts, implements, and evaluates the effect of the proposed method. Method: A case study was launched, including identification of important factors for prioritizing and selecting test cases, implementation of the method, and a quantitative and qualitative evaluation. Results: Ten different factors, two of which are history-based, are identified as important for selection. Most of the information needed is available in the test management and error reporting systems, while some is embedded in the process. Transparency is increased through a semi-automated method. Our quantitative evaluation indicates a possibility to improve efficiency, while the qualitative evaluation supports the general principles of history-based testing but suggests changes to implementation details.
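    The abstract does not name the ten factors or their weights, so the sketch below is purely illustrative: it combines a few plausible factors (two of them history-based) into a weighted score and selects the top-scoring test cases that fit a cycle's time budget.

```python
# Illustrative factor names and weights; the study's actual factors are
# not listed in the abstract.
WEIGHTS = {
    "recent_failures": 0.30,        # history-based
    "cycles_since_executed": 0.20,  # history-based
    "linked_to_change": 0.25,
    "requirement_criticality": 0.25,
}

def selection_score(tc):
    return sum(WEIGHTS[k] * tc[k] for k in WEIGHTS)

def select(test_cases, budget):
    """Pick the top-scoring tests that fit the cycle's time budget."""
    chosen, used = [], 0.0
    for tc in sorted(test_cases, key=selection_score, reverse=True):
        if used + tc["duration"] <= budget:
            chosen.append(tc)
            used += tc["duration"]
    return chosen

suite = [
    {"name": "tc_login", "recent_failures": 3, "cycles_since_executed": 4,
     "linked_to_change": 1, "requirement_criticality": 2, "duration": 5.0},
    {"name": "tc_report", "recent_failures": 0, "cycles_since_executed": 1,
     "linked_to_change": 0, "requirement_criticality": 1, "duration": 8.0},
]
print([tc["name"] for tc in select(suite, budget=10.0)])  # ['tc_login']
```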

    Reinforcement Learning for Test Case Prioritization

    Continuous Integration (CI) significantly reduces integration problems, speeds up development, and shortens release times. However, it also introduces new challenges for quality assurance activities, including regression testing, which is the focus of this work. Though various approaches to test case prioritization have proven promising in the context of regression testing, specific techniques are needed to deal with the dynamic nature and timing constraints of CI. Recently, Reinforcement Learning (RL) has shown great potential in challenging scenarios that require continuous adaptation, such as game playing, real-time ad bidding, and recommender systems. Inspired by this line of work, and building on initial efforts to support test case prioritization with RL techniques, we perform a comprehensive investigation of RL-based test case prioritization in a CI context. To this end, casting test case prioritization as a ranking problem, we model the sequential interactions between the CI environment and a test case prioritization agent as an RL problem, using three alternative ranking models. We then rely on carefully selected and tailored state-of-the-art RL techniques to automatically and continuously learn a test case prioritization strategy whose objective is to be as close as possible to the optimal one. Our extensive experimental analysis shows that the best RL solutions provide a significant accuracy improvement over previous RL-based work, with prioritization strategies getting close to optimal, thus paving the way for using RL to prioritize test cases in a CI context.
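    The abstract does not name the three ranking models; a common formulation in this line of work is pairwise ranking, sketched below with illustrative names: the policy compares two test cases at a time to build an ordering, and the episode reward counts correctly ordered (failing, passing) pairs, which a learner could then maximize.

```python
import random

def pairwise_episode(test_cases, policy, will_fail):
    """Build an ordering by pairwise insertion and score it.
    policy(a, b) -> True if test a should run before test b."""
    order = []
    for tc in test_cases:
        pos = 0
        for placed in order:
            if policy(tc, placed):  # tc should precede this placed test
                break
            pos += 1
        order.insert(pos, tc)
    # Reward: +1 for each failing test ranked ahead of a passing one,
    # -1 for the opposite; an optimal order maximizes this signal.
    reward = 0.0
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            if will_fail[a] and not will_fail[b]:
                reward += 1.0
            elif will_fail[b] and not will_fail[a]:
                reward -= 1.0
    return order, reward

# Random-policy baseline on a hypothetical four-test suite.
fails = {"t1": False, "t2": True, "t3": False, "t4": True}
order, r = pairwise_episode(list(fails), lambda a, b: random.random() < 0.5, fails)
print(order, r)
```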