
    Prioritization of Re-executable Test Cases of Activity Diagram in Regression Testing Using Model Based Environment

    Software testing is vital in the software development life cycle (SDLC) for validating new versions of software and detecting faults. Regression testing concentrates on generating test cases for the changed parts of the software, so that faults are detected earlier than with other testing practices. In model-based testing, testing is performed top-down (black-box) using the software's design models, for example UML diagrams, which give a requirement-level, graphical representation of the software and are now a standard in software engineering. Our proposed approach introduces a new technique for prioritizing test cases in a model-based environment. The technique takes an activity diagram as input, because an activity diagram captures the complete flow of every activity in the system and represents its full behavior. As requirements change, the activity diagram changes; each change is recorded, and test cases are generated both for the original diagram and for the changed one. The two sets of test cases are compared and classified as re-usable or re-executable: re-usable test cases remain unchanged across the requirement change, while re-executable test cases belong to the changed part of the diagram. The re-executable test cases are then prioritized using a heuristic algorithm based on an ACT (Activity Connector) table. Why prioritize only the re-executable test cases? Because the re-usable test cases are the same in both versions of the diagram and were already tested against the original diagram, whereas the re-executable test cases have never been tested and may quickly reveal faults in the modified design. Prioritizing them also reduces test execution time, giving more effective testing and a better new version of the software. Existing prioritization techniques are either code-based or rely on tool support: code-based techniques are complex and tedious, because a small change in the code requires the whole application to be retested, while tool-based techniques impose multiple assumptions and constraints. We expect the proposed technique to give better results, and, since a technique of this type has not been used before, to prove effective. DOI: 10.17762/ijritcc2321-8169.15077
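The classification step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the example paths, and the representation of a test case as a tuple of activity names are all assumptions.

```python
# Hedged sketch: classify test paths derived from an activity diagram before
# and after a requirement change. All names and paths here are illustrative
# assumptions, not the technique's actual implementation.

def classify_test_cases(original_paths, modified_paths):
    """Split the modified diagram's test paths into re-usable paths
    (unchanged across versions) and re-executable paths (new or altered)."""
    original = set(original_paths)
    reusable = [p for p in modified_paths if p in original]
    reexecutable = [p for p in modified_paths if p not in original]
    return reusable, reexecutable

# Each test case is the sequence of activities it exercises.
original_paths = [("start", "login", "browse", "end"),
                  ("start", "login", "checkout", "end")]
modified_paths = [("start", "login", "browse", "end"),
                  ("start", "login", "checkout", "pay", "end")]

reusable, reexecutable = classify_test_cases(original_paths, modified_paths)
print(reusable)       # unchanged paths, already validated on the old version
print(reexecutable)   # changed paths, the candidates for prioritization
```

Only the second list would then be ordered by the ACT-table heuristic; the first list is executed as-is.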

    Estudio de la Efectividad de Tres Técnicas de Evaluación de Código: Resultados de una Serie de Experimentos.

    To date, various verification and validation techniques have been evaluated both theoretically and empirically. Most empirical evaluations have been conducted without subjects, abstracting away the effect the subject has on the technique when applying it. We ran an experiment with subjects to evaluate the effectiveness of three code verification and validation techniques: equivalence class partitioning, decision coverage, and code reading by stepwise abstraction, studying the techniques' ability to detect faults in three different programs. We replicated the experiment eight times in four different settings. The results show differences between the techniques and point to contextual variables of the software project that should be considered when choosing or applying a verification and validation technique

    Comparing the effectiveness of equivalence partitioning, branch testing and code reading by stepwise abstraction applied by subjects

    Some verification and validation techniques have been evaluated both theoretically and empirically. Most empirical studies have been conducted without subjects, passing over any effect testers have when they apply the techniques. We have run an experiment with students to evaluate the effectiveness of three verification and validation techniques (equivalence partitioning, branch testing, and code reading by stepwise abstraction), studying how well the techniques reveal defects in three programs. We have replicated the experiment eight times at different sites. Our results show that equivalence partitioning and branch testing are equally effective and better than code reading by stepwise abstraction, whose effectiveness varies significantly from program to program. Finally, we have identified project contextual variables that should be considered when applying any verification and validation technique or choosing one particular technique.
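Of the three techniques compared, equivalence partitioning is the easiest to illustrate concretely. The sketch below is a hypothetical example, not one of the experiment's actual programs: the function `classify_age` and its three equivalence classes are assumptions made for illustration.

```python
# Hedged sketch of equivalence partitioning: pick one representative input
# per equivalence class instead of testing many inputs from the same class.
# The function under test and its partitions are illustrative assumptions.

def classify_age(age: int) -> str:
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

# One representative test per equivalence class.
assert classify_age(10) == "minor"    # class: 0 <= age < 18
assert classify_age(30) == "adult"    # class: age >= 18
try:
    classify_age(-1)                  # class: age < 0 (invalid input)
except ValueError:
    print("invalid class rejected")
```

Branch testing, by contrast, would be driven by the code's decision points (here, the two `if` conditions) rather than by the input specification.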

    Regression Testing and Test selection in Research

    ABSTRACT Regression testing is a costly but crucial problem in software development, and both the research community and industry have paid much attention to it. However, are they concerned with the same issues? This paper surveys current research on regression testing and current practice in industry, and tries to find out whether there are gaps between them. The observations show that although some issues concern both the research community and industry, gaps do exist. Keywords Regression Testing, Software Engineering, Software Maintenance. The goal of this project is to identify the gaps between current research and the current application of regression testing in industry by studying the research on regression testing and the literature on testing tools that are widely used in industry. Once the gaps are identified, we can ask why they arise, and new research directions and new guidelines for practice can be proposed. The rest of the paper is organized as follows: Section 2 briefly reviews the current regression testing research literature. Section 3 introduces the currently popular commercial tools and then presents some case studies as examples of applying regression testing technology in practice. Section 4 presents some interesting observations drawn from Sections 2 and 3. Conclusions are given in Section 5

    A systematic review on regression test selection techniques

    Regression testing is verifying that previously functioning software still works correctly after a change. With the goal of finding a basis for further research in a joint industry-academia research project, we conducted a systematic review of empirical evaluations of regression test selection techniques. We identified 27 papers reporting 36 empirical studies: 21 experiments and 15 case studies. In total, 28 techniques for regression test selection are evaluated. We present a qualitative analysis of the findings, an overview of techniques for regression test selection, and the related empirical evidence. No technique was found to be clearly superior, since the results depend on many varying factors. We identified a need for empirical studies in which concepts are evaluated rather than small variations in technical implementations

    Estudio de la Efectividad de Tres Técnicas de Evaluación de Código: Resultados de una Serie de Experimentos

    To date, various verification and validation techniques have been evaluated both theoretically and empirically. Most empirical evaluations have been conducted without subjects, abstracting away the effect the subject has on the technique when applying it. We ran an experiment with subjects to evaluate the effectiveness of three code verification and validation techniques: equivalence class partitioning, decision coverage, and code reading by stepwise abstraction, studying the techniques' ability to detect faults in three different programs. We replicated the experiment eight times in four different settings. The results show differences between the techniques and point to contextual variables of the software project that should be considered when choosing or applying a verification and validation technique. Ministerio de Ciencia e Innovación TIN2008-0055

    Carving differential unit test cases from system test cases

    Unit test cases are focused and efficient. System tests are effective at exercising complex usage patterns. Differential unit tests (DUTs) are a hybrid of unit and system tests. They are generated by carving, while a system test case executes, the system components that influence the behavior of the target unit, and then re-assembling those components so that the unit can be exercised as it was by the system test. We conjecture that DUTs retain some of the advantages of unit tests, can be automatically and inexpensively generated, and have the potential to reveal faults related to intricate system executions. In this paper we present a framework for automatically carving and replaying DUTs that accounts for a wide variety of strategies, we implement an instance of the framework with several techniques to mitigate test cost and enhance flexibility, and we empirically assess the efficacy of carving and replaying DUTs.
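The carve-and-replay idea can be sketched at its simplest: record the arguments and result that a target unit sees during a system-level run, then re-execute only those captured interactions against the unit in isolation. The decorator, the unit `apply_discount`, and the "system test" below are illustrative assumptions, not the paper's framework.

```python
# Hedged sketch of carving and replaying a differential unit test: snapshot
# each call to the target unit during a system run, then replay the captured
# (input, expected output) pairs as focused unit tests. Names are assumptions.
import functools

carved = []  # (args, expected_result) pairs captured during the system test

def carve(fn):
    @functools.wraps(fn)
    def wrapper(*args):
        result = fn(*args)
        carved.append((args, result))   # snapshot the unit's interaction
        return result
    return wrapper

@carve
def apply_discount(price, rate):        # the target unit
    return round(price * (1 - rate), 2)

# "System test": exercise the unit indirectly through a larger workflow.
order_total = sum(apply_discount(p, 0.1) for p in (100.0, 250.0))

# Replay: re-execute only the carved interactions, in isolation.
snapshot = list(carved)                 # freeze before replay re-records calls
for args, expected in snapshot:
    assert apply_discount(*args) == expected
print(f"replayed {len(snapshot)} carved cases")
```

A real carving framework must also capture the relevant heap state surrounding the unit, not just scalar arguments; that is the hard part the paper's strategies address.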