
    Test Case Purification for Improving Fault Localization

    Finding and fixing bugs are time-consuming activities in software development. Spectrum-based fault localization aims to identify the faulty position in source code based on the execution traces of test cases. Failing test cases and their assertions form test oracles for the failing behavior of the system under analysis. In this paper, we propose a novel concept of spectrum-driven test case purification for improving fault localization. The goal of test case purification is to separate existing test cases into small fractions (called purified test cases) and to enhance the test oracles to further localize faults. Combined with an original fault localization technique (e.g., Tarantula), test case purification yields a better ranking of program statements. Our experiments on 1800 faults in six open-source Java programs show that test case purification can effectively improve existing fault localization techniques.
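    As a point of reference for the ranking step that purification feeds into, the sketch below computes the standard Tarantula suspiciousness score from per-statement coverage and test outcomes. The coverage-matrix representation and helper names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the Tarantula ranking step that test case purification
# builds on. Data structures and names are assumptions for illustration.

def tarantula_suspiciousness(coverage, outcomes):
    """coverage: dict statement -> set of test ids that execute it.
    outcomes: dict test id -> True if the test passed, False if it failed."""
    total_passed = sum(1 for ok in outcomes.values() if ok)
    total_failed = len(outcomes) - total_passed
    scores = {}
    for stmt, tests in coverage.items():
        failed = sum(1 for t in tests if not outcomes[t])
        passed = len(tests) - failed
        fail_ratio = failed / total_failed if total_failed else 0.0
        pass_ratio = passed / total_passed if total_passed else 0.0
        denom = fail_ratio + pass_ratio
        scores[stmt] = fail_ratio / denom if denom else 0.0
    # Rank statements by descending suspiciousness.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Purification (conceptually) splits each failing test into one-assertion
# fragments, re-runs them, and feeds the enlarged spectrum to this same formula.
```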

    Learning to Combine Multiple Ranking Metrics for Fault Localization

    Fault localization is an inevitable step in software debugging. Spectrum-based fault localization consists of computing a ranking metric on execution traces to identify faulty source code. Existing empirical studies on fault localization show that there is no single ranking metric that is optimal for all faults in practice. In this paper, we propose Multric, a learning-based approach to combining multiple ranking metrics for effective fault localization. In Multric, the suspiciousness score of a program entity is a combination of existing ranking metrics. Multric consists of two major phases: learning and ranking. Based on training faults, Multric builds a ranking model by learning from pairs of faulty and non-faulty source code elements. When a new fault appears, Multric computes the final ranking with the learned model. Experiments are conducted on 5386 seeded faults in ten open-source Java programs. We empirically compare Multric against four widely studied metrics and three recently proposed ones. Our experimental results show that Multric localizes faults more effectively than state-of-the-art metrics such as Tarantula, Ochiai, and Ample.
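    The ranking phase can be pictured as a weighted combination of classic metric scores computed from the same coverage spectrum. Multric learns the combination from pairs of faulty and non-faulty elements; the fixed weights in this sketch are placeholders chosen only for illustration.

```python
import math

# Sketch of combining several classic spectrum metrics into one score.
# ef/ep: failing/passing tests that execute the element;
# nf/np_: failing/passing tests that do not execute it.

def metric_scores(ef, ep, nf, np_):
    tf, tp = ef + nf, ep + np_  # total failing / passing tests
    tarantula = (ef / tf) / (ef / tf + ep / tp) if tf and tp and (ef or ep) else 0.0
    ochiai = ef / math.sqrt(tf * (ef + ep)) if tf and (ef + ep) else 0.0
    ample = abs(ef / tf - ep / tp) if tf and tp else 0.0
    return {"tarantula": tarantula, "ochiai": ochiai, "ample": ample}

def combined_score(ef, ep, nf, np_, weights=None):
    # Placeholder weights; Multric learns them from training faults
    # with a learning-to-rank model.
    weights = weights or {"tarantula": 0.4, "ochiai": 0.4, "ample": 0.2}
    scores = metric_scores(ef, ep, nf, np_)
    return sum(w * scores[m] for m, w in weights.items())
```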

    Mutation-Based Graph Inference for Fault Localization

    We present a new fault localization algorithm, called Vautrin, built on an approximation of causality based on call graphs. The approximation of causality is done using software mutants. The key idea is that if a mutant is killed by a test, certain call graph edges within a path between the mutation point and the failing test are likely causal. We evaluate our approach on the fault localization benchmark by Steimann et al., totaling 5,836 faults. The causal graphs are extracted from 88,732 nodes connected by 119,531 edges. Vautrin improves fault localization effectiveness for all subjects of the benchmark. Considering the wasted effort at the method level, a classical fault localization evaluation metric, the improvement ranges from 3% to 55%, with an average improvement of 14%.
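    A simplified way to picture the causal-edge idea: each mutant killed by a failing test votes for the call-graph edges on a path from that test's entry method to the mutated method. The graph encoding, breadth-first path search, and vote counting below are assumptions made for illustration rather than Vautrin's exact algorithm.

```python
from collections import deque, defaultdict

def bfs_path(call_graph, source, target):
    """call_graph: dict method -> list of callee methods. Returns one path or None."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for callee in call_graph.get(node, []):
            if callee not in parent:
                parent[callee] = node
                queue.append(callee)
    return None

def score_edges(call_graph, killed_mutants):
    """killed_mutants: iterable of (failing_test_entry_method, mutated_method) pairs."""
    edge_score = defaultdict(int)
    for test_entry, mutated_method in killed_mutants:
        path = bfs_path(call_graph, test_entry, mutated_method)
        if path:
            for caller, callee in zip(path, path[1:]):
                edge_score[(caller, callee)] += 1  # edge is evidence of causality
    return dict(edge_score)
```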

    Ask the Mutants: Mutating Faulty Programs for Fault Localization


    Directed Test Program Generation for JIT Compiler Bug Localization

    Bug localization techniques for Just-in-Time (JIT) compilers are based on analyzing the execution behavior of the target JIT compiler on a set of test programs generated for this purpose; the characteristics of these test inputs can significantly impact the accuracy of bug localization. However, current approaches to automatic test program generation do not work well for bug localization in JIT compilers. This paper proposes a novel technique for automatic test program generation for JIT compiler bug localization that is based on two key insights: (1) the generated test programs should contain both passing inputs (which do not trigger the bug) and failing inputs (which trigger the bug); and (2) the passing inputs should be as similar as possible to the initial seed input, while the failing programs should be as different as possible from it. We use a structural analysis of the seed program to determine which parts of the code should be mutated for each of the passing and failing cases. Experiments using a prototype implementation indicate that test inputs generated using our approach yield significantly better bug localization results than existing approaches.
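    The two insights can be sketched as a generation loop that keeps passing variants close to the seed and pushes failing variants away from it. The runner and mutation operators (run_on_jit, mutate_outside_buggy_region, mutate_inside_buggy_region) are hypothetical placeholders standing in for the paper's structural analysis and JIT execution harness.

```python
import random

def generate_tests(seed_program, run_on_jit,
                   mutate_outside_buggy_region, mutate_inside_buggy_region,
                   budget=100):
    """Sketch under stated assumptions: run_on_jit(program) returns True when
    the candidate triggers the bug; the two mutation callables restrict their
    edits based on a prior structural analysis of the seed."""
    passing, failing = [], []
    for _ in range(budget):
        if random.random() < 0.5:
            # Passing candidates: small edits away from the bug-relevant code,
            # so the variant stays as similar to the seed as possible.
            candidate = mutate_outside_buggy_region(seed_program, edits=1)
        else:
            # Failing candidates: heavier edits in the bug-relevant code,
            # pushing the variant away from the seed.
            candidate = mutate_inside_buggy_region(seed_program, edits=3)
        triggered_bug = run_on_jit(candidate)
        (failing if triggered_bug else passing).append(candidate)
    return passing, failing
```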