    Using symbolic execution for equivalent mutant detection

    Mutation Testing is a fault injection technique used to measure test adequacy by seeding defects (mutations) into a program and checking whether its test suite detects each change. However, the technique suffers from the Equivalent Mutant Problem: equivalent mutants are mutants that, although syntactically different from the original program, remain semantically equivalent to it. A fully automated solution that decides equivalence is impossible, because equivalence of non-trivial programs is undecidable; in practice, this means human effort is required to decide equivalence. Equivalent mutants are the barrier keeping Mutation Testing from being widely adopted. Moreover, in one study by Irvine et al., the average time taken for each manual mutant classification was fifteen minutes.
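
    To make the problem concrete, here is a classic example of an equivalent mutant (illustrative only, not taken from the paper): a relational-operator mutation that changes the syntax but not the semantics, so no test input can kill it.

```java
// Original: returns the maximum of two ints.
public class MaxExample {
    static int max(int a, int b) {
        return (a > b) ? a : b;    // original condition
    }

    // Mutant: the relational operator '>' is replaced by '>='.
    // For a > b and a < b the behaviour is unchanged, and for a == b both
    // branches return the same value, so the mutant is equivalent: no test
    // input can distinguish it from max().
    static int maxMutant(int a, int b) {
        return (a >= b) ? a : b;   // mutated condition, same semantics
    }
}
```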

    Adequate Test Data Generation using Evolutionary Algorithms

    Software testing is an approach for identifying errors and bugs in software, and testing requires test data. In this thesis, we develop an approach that automatically generates test data from some initial random test data using Evolutionary Algorithms (EAs) and tests the software to detect the presence of errors, if any. We use two measures to validate our approach: path coverage and an adequacy criterion. In our first approach, we use a simple Genetic Algorithm (GA) to find test data. We then use a memetic algorithm to curb the difficulties faced when using the GA alone. We instrument the program to identify executed paths and represent the program as a Control Flow Graph (CFG). The GA evolves the initial random test data toward more optimal test data that covers all feasible paths. Path-coverage-based testing generates reliable test cases: a test set is reliable if its execution ensures that the program is correct on all its inputs. Adequacy, in contrast, requires that the test set detect faults rather than merely demonstrate correctness, so for adequacy-based testing we use mutation analysis, taking the mutation score as the fitness function. We obtain the mutation score from the mutation testing tool "MuJava" and generate test data accordingly. Finally, we apply a more complex hybrid version of the genetic algorithm, which produces better results than the plain GA and curbs several of the problems it faces.
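
    The GA loop described above can be illustrated with a minimal, self-contained sketch (a sketch only, not the thesis implementation; the fitness function below is a toy stand-in for the path-coverage or mutation-score fitness, and all parameter values are placeholders):

```java
import java.util.*;

// Minimal GA for test data generation: evolve an initial random
// population of test inputs toward inputs with higher fitness.
public class GaTestGen {
    static final Random RNG = new Random();

    // Hypothetical fitness: in the thesis this would be paths covered
    // (or the MuJava mutation score); here it is a toy objective.
    static double fitness(Integer input) {
        return -Math.abs(input - 42);
    }

    public static void main(String[] args) {
        // Initial random test data.
        List<Integer> pop = new ArrayList<>();
        for (int i = 0; i < 20; i++) pop.add(RNG.nextInt(1000));

        for (int gen = 0; gen < 100; gen++) {
            // Selection: keep the fitter half of the population.
            pop.sort(Comparator.comparingDouble(GaTestGen::fitness).reversed());
            List<Integer> next = new ArrayList<>(pop.subList(0, 10));
            // Crossover (average two parents) plus a small random mutation.
            while (next.size() < 20) {
                int a = next.get(RNG.nextInt(10)), b = next.get(RNG.nextInt(10));
                next.add((a + b) / 2 + RNG.nextInt(11) - 5);
            }
            pop = next;
        }
        pop.sort(Comparator.comparingDouble(GaTestGen::fitness).reversed());
        System.out.println("Best test input found: " + pop.get(0));
    }
}
```

    A memetic algorithm, as used in the thesis, would add a local search step after crossover and mutation; the loop structure stays the same.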

    Empirical Evaluation of Mutation-based Test Prioritization Techniques

    We propose a new test case prioritization technique that combines both mutation-based and diversity-based approaches. Our diversity-aware mutation-based technique relies on the notion of mutant distinguishment, which aims to distinguish one mutant's behavior from another, rather than from the original program. We empirically investigate the relative cost and effectiveness of the mutation-based prioritization techniques (i.e., using both the traditional mutant kill and the proposed mutant distinguishment) with 352 real faults and 553,477 developer-written test cases. The empirical evaluation considers both the traditional and the diversity-aware mutation criteria in various settings: single-objective greedy, hybrid, and multi-objective optimization. The results show that there is no single dominant technique across all the studied faults. To this end, we show when, and why, each of the mutation-based prioritization criteria performs poorly, using a graphical model called the Mutant Distinguishment Graph (MDG) that shows the distribution of the fault-detecting test cases with respect to mutant kills and distinguishment.
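
    As a rough illustration of the distinguishment idea (not the paper's tooling, and using kill outcomes as a simplified proxy for full mutant behaviour): a test distinguishes two mutants when it observes different behaviour from them, even if both differ from the original program.

```java
// Toy kill matrix: kill[t][m] is true iff test t detects mutant m.
// Traditional criterion: t kills m (m behaves unlike the original).
// Distinguishment (simplified here to kill outcomes): t distinguishes
// mutants m1 and m2 if exactly one of them is killed by t.
public class Distinguish {
    static boolean distinguishes(boolean[][] kill, int t, int m1, int m2) {
        return kill[t][m1] != kill[t][m2];
    }

    public static void main(String[] args) {
        boolean[][] kill = {
            {true,  true,  false},   // test 0 kills m0 and m1
            {false, true,  false}    // test 1 kills only m1
        };
        // Test 0 kills both m0 and m1 but cannot tell them apart;
        // test 1 distinguishes m0 from m1.
        System.out.println(distinguishes(kill, 0, 0, 1)); // false
        System.out.println(distinguishes(kill, 1, 0, 1)); // true
    }
}
```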

    Evaluating Random Mutant Selection at Class-Level in Projects with Non-Adequate Test Suites

    Mutation testing is a standard technique for evaluating the quality of a test suite. Due to its computationally intensive nature, many approaches have been proposed to make the technique feasible in real-world scenarios. Among these approaches, uniform random mutant selection has been demonstrated to be simple and promising. However, work in this area analyzes mutant samples at project level, mainly on projects with adequate test suites. In this paper, we fill this lack of empirical validation by analyzing random mutant selection at class level on projects with non-adequate test suites. First, we show that uniform random mutant selection underachieves the expected results. Then, we propose a new approach, named weighted random mutant selection, which generates more representative mutant samples. Finally, we show that representative mutant samples are larger for projects with high test adequacy. (Comment: EASE 2016, Article 11, 10 pages)
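
    A minimal sketch of the weighted-sampling idea (the weight scheme and names here are placeholders, not the paper's exact formula): classes are chosen in proportion to a weight, and a mutant is then drawn uniformly from the chosen class, instead of sampling uniformly over all mutants in the project.

```java
import java.util.*;

// Weighted random mutant selection: sample classes by weight, then
// pick a mutant uniformly within the chosen class.
public class WeightedSelection {
    record Mutant(String className, int id) {}

    static List<Mutant> sample(Map<String, List<Mutant>> byClass,
                               Map<String, Double> weight, int n, Random rng) {
        List<String> classes = new ArrayList<>(byClass.keySet());
        double total = classes.stream().mapToDouble(weight::get).sum();
        List<Mutant> sample = new ArrayList<>();
        while (sample.size() < n) {
            // Walk the cumulative weight distribution to pick a class.
            double r = rng.nextDouble() * total, acc = 0;
            for (String c : classes) {
                acc += weight.get(c);
                if (r <= acc) {
                    List<Mutant> ms = byClass.get(c);
                    sample.add(ms.get(rng.nextInt(ms.size())));
                    break;
                }
            }
        }
        return sample;
    }

    public static void main(String[] args) {
        Map<String, List<Mutant>> byClass = Map.of(
            "Parser", List.of(new Mutant("Parser", 0), new Mutant("Parser", 1)),
            "Lexer",  List.of(new Mutant("Lexer", 0)));
        Map<String, Double> weight = Map.of("Parser", 2.0, "Lexer", 1.0);
        System.out.println(sample(byClass, weight, 3, new Random(1)));
    }
}
```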

    MESSI: Mutant Evaluation by Static Semantic Interpretation

    Mutation testing is effective at measuring the adequacy of a test suite, but it can be computationally expensive to apply all the test cases to each mutant. Previous research has investigated reducing the number of mutants by selecting certain operators, sampling mutants at random, or combining them to form new higher-order mutants. In this paper, we propose a new approach to the mutant reduction problem using static analysis. Symbolic representations are generated for the output along the paths through each mutant, and these are compared with the original program. By calculating the range of their output expressions, it is possible to determine the effect of each mutation on the program output. Mutants with little effect on the output are harder to kill; we confirm this using random testing and an established test suite. Competent programmers are likely to make only small mistakes in their code, so we argue that test suites should be evaluated against those mutants that are harder to kill without being equivalent to the original program. Keywords: mutation testing; sampling; static analysis.
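
    The range-comparison intuition can be sketched as follows (a deliberate simplification that scans a finite input domain in place of the paper's symbolic output expressions): the smaller the maximum divergence between original and mutant outputs, the harder the mutant is to kill, and zero divergence over the whole domain suggests equivalence.

```java
// Compare original and mutant outputs over an input domain and measure
// the largest divergence, as a stand-in for symbolic range analysis.
public class RangeEffect {
    static int original(int x) { return 2 * x + 1; }
    static int mutant(int x)   { return 2 * x - 1; }  // '+' mutated to '-'

    public static void main(String[] args) {
        int maxDiff = 0;
        for (int x = -100; x <= 100; x++) {
            maxDiff = Math.max(maxDiff, Math.abs(original(x) - mutant(x)));
        }
        // Small divergence => hard-to-kill mutant; zero => likely equivalent.
        System.out.println("max output divergence = " + maxDiff);
    }
}
```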