
    Prioritization of Re-executable Test Cases of Activity Diagram in Regression Testing Using Model Based Environment

    Software testing plays a vital role in the software development life cycle (SDLC), validating new versions of the software and detecting faults. Regression testing, in particular, concentrates on generating test cases for the changed parts of the software so that faults are detected earlier than with other testing practices. In model-based testing, testing is performed top-down (as a black-box method) using design models of the software, for example UML diagrams. UML diagrams give a requirement-level, graphical representation of the software and are now a standard in software engineering. Our proposed approach introduces a new technique for prioritizing test cases in a model-based environment. The technique takes an activity diagram as input, because an activity diagram captures the complete flow of every activity in the system and thus represents its full behaviour. As requirements change, the activity diagram changes with them; each change is recorded, and test cases are generated for both the original and the changed diagram. The two sets of test cases are compared and classified as re-usable or re-executable: re-usable test cases remain unchanged across requirement changes, while re-executable test cases belong to the changed part of the diagram. The re-executable test cases are then prioritized with a heuristic algorithm based on an ACT (Activity Connector) table. Why prioritize only the re-executable test cases? Re-usable test cases are identical in both versions of the diagram and were already exercised when the original diagram was tested, whereas re-executable test cases have never been executed and are the most likely to reveal faults in the modified design quickly. Prioritizing them also reduces test execution time, which makes testing more effective and helps evolve a better new version of the software. Existing prioritization techniques are either code-based or rely on various tool supports: code-based techniques are complex and tedious because even a small code change requires retesting the whole application, while tool-supported techniques impose multiple assumptions and constraints. The proposed technique should therefore give better results and, being new, should also prove very effective. DOI: 10.17762/ijritcc2321-8169.15077
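    To make the re-usable/re-executable split concrete, here is a minimal Python sketch. It assumes a test case is represented simply as the ordered tuple of activity names it covers, and it uses a stand-in weight table in place of the ACT (Activity Connector) heuristic, whose details the abstract does not give; all names and data are hypothetical.

```python
# Sketch only: test cases as tuples of activity names; "weight" stands in
# for the ACT-table heuristic, which is not spelled out in the abstract.

def classify_test_cases(original_cases, changed_cases):
    """Split the changed diagram's test cases into re-usable and re-executable."""
    original = set(original_cases)
    reusable = [tc for tc in changed_cases if tc in original]          # unchanged paths
    re_executable = [tc for tc in changed_cases if tc not in original] # touch the changed part
    return reusable, re_executable

def prioritize(re_executable, activity_weight):
    """Order re-executable cases by a stand-in heuristic: total weight of the
    activities a case exercises (higher first)."""
    return sorted(re_executable,
                  key=lambda tc: sum(activity_weight.get(a, 1) for a in tc),
                  reverse=True)

if __name__ == "__main__":
    v1 = [("login", "search", "logout"), ("login", "checkout", "logout")]
    v2 = [("login", "search", "logout"), ("login", "checkout", "pay", "logout")]
    reusable, rexec = classify_test_cases(v1, v2)
    print(prioritize(rexec, {"pay": 5, "checkout": 3}))
```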

    Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration

    Testing in Continuous Integration (CI) involves test case prioritization, selection, and execution at each cycle. Selecting the most promising test cases to detect bugs is hard if there are uncertainties about the impact of committed code changes or if traceability links between code and tests are not available. This paper introduces Retecs, a new method for automatically learning test case selection and prioritization in CI, with the goal of minimizing the round-trip time between code commits and developer feedback on failed test cases. The Retecs method uses reinforcement learning to select and prioritize test cases according to their duration, previous last execution, and failure history. In a constantly changing environment, where new test cases are created and obsolete test cases are deleted, the Retecs method learns to prioritize error-prone test cases higher, guided by a reward function and by observing previous CI cycles. By applying Retecs to data extracted from three industrial case studies, we show for the first time that reinforcement learning enables fruitful automatic adaptive test case selection and prioritization in CI and regression testing.
    Comment: Spieker, H., Gotlieb, A., Marijan, D., & Mossige, M. (2017). Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration. In Proceedings of the 26th International Symposium on Software Testing and Analysis (ISSTA'17) (pp. 12-22). ACM
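    The following is a hedged sketch of the general idea rather than the Retecs method itself: a toy value-based agent that scores each test case from the three features named in the abstract (duration, time since last execution, recent failure history) and adjusts its weights from a simple reward after each CI cycle. The feature encoding, the linear scoring model, and the reward shape are all assumptions.

```python
# Illustrative only: not the authors' implementation of Retecs.
import random

class PrioritizationAgent:
    def __init__(self, lr=0.1):
        self.w = [0.0, 0.0, 1.0]   # weights: duration, staleness, failure history
        self.lr = lr

    def score(self, tc):
        feats = [tc["duration"], tc["cycles_since_run"], tc["recent_failure_rate"]]
        return sum(w * f for w, f in zip(self.w, feats))

    def prioritize(self, test_cases):
        return sorted(test_cases, key=self.score, reverse=True)

    def learn(self, test_cases, failed_ids):
        """Reward: tests that failed should have scored high; nudge weights that way."""
        for tc in test_cases:
            feats = [tc["duration"], tc["cycles_since_run"], tc["recent_failure_rate"]]
            reward = 1.0 if tc["id"] in failed_ids else -0.1
            for i, f in enumerate(feats):
                self.w[i] += self.lr * reward * f

# One toy CI cycle with made-up data.
agent = PrioritizationAgent()
suite = [{"id": i, "duration": random.random(), "cycles_since_run": random.randint(0, 5),
          "recent_failure_rate": random.random()} for i in range(10)]
ordered = agent.prioritize(suite)
agent.learn(suite, failed_ids={ordered[0]["id"]})   # pretend the top-ranked test failed
```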

    Visualizing test diversity to support test optimisation

    Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (also referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically high-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs of using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers with test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions for improvement. We conclude that visualising diversity information can assist testers in their maintenance and optimisation activities.
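    As a minimal sketch of the kind of pair-wise diversity computation that underlies a test similarity map, the example below compares test scripts as token sets with Jaccard distance; the paper's actual distance measures and map layout algorithm are not specified in the abstract, so these choices are assumptions.

```python
# Sketch only: token-set representation and Jaccard distance are assumptions.
from itertools import combinations

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| over the tokens of two test scripts."""
    sa, sb = set(a.split()), set(b.split())
    union = sa | sb
    if not union:
        return 0.0
    return 1.0 - len(sa & sb) / len(union)

def diversity_matrix(test_scripts):
    n = len(test_scripts)
    d = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        d[i][j] = d[j][i] = jaccard_distance(test_scripts[i], test_scripts[j])
    return d

scripts = ["open login enter user enter pass submit",
           "open login enter user enter wrong pass submit expect error",
           "open search type query submit check results"]
for row in diversity_matrix(scripts):
    print([round(x, 2) for x in row])
# A 2-D embedding of this matrix (e.g. via multidimensional scaling) would give
# a similarity map that testers can inspect for redundant clusters.
```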

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. Ensuring high quality requires testing, which verifies that the software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues increase the effort, time, and cost of testing. Different techniques and methodologies have been proposed to address them. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this research paper, we present a survey of GA-based approaches for addressing the various issues encountered during software testing.
    Comment: 13 pages
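    As a concrete illustration of the GA family the survey covers, here is a toy genetic algorithm that evolves an integer input to cover one hypothetical branch; the encoding, fitness function, and target condition are illustrative assumptions, not taken from any surveyed technique.

```python
# Toy GA for test data generation; everything here is an illustrative assumption.
import random

def fitness(x):
    """Branch-distance style fitness: how close is x to taking the branch
    `if x == 4242:` in a hypothetical unit under test? Higher is better."""
    return -abs(x - 4242)

def evolve(pop_size=30, generations=100, mutation_rate=0.2):
    population = [random.randint(0, 10_000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 0:              # target branch covered
            break
        parents = population[: pop_size // 2]        # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                     # arithmetic crossover
            if random.random() < mutation_rate:
                child += random.randint(-100, 100)   # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print("best input found:", evolve())
```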