19 research outputs found

    Identifying Patch Correctness in Test-Based Program Repair

    Test-based automatic program repair has attracted much attention in recent years. However, test suites in practice are often too weak to guarantee correctness, and existing approaches often generate a large number of incorrect patches. To reduce the number of incorrect patches generated, we propose a novel approach that heuristically determines the correctness of generated patches. The core idea is to exploit the behavioral similarity of test case executions: passing tests are likely to behave similarly on the original and patched programs, while failing tests are likely to behave differently. Moreover, if two tests exhibit similar runtime behavior, they are likely to have the same test results. Based on these observations, we generate new test inputs to enhance the test suites and use their behavioral similarity to determine patch correctness. Our approach is evaluated on a dataset of 139 patches generated by existing program repair systems, including jGenProg, Nopol, jKali, ACS and HDRepair. It successfully filtered out 56.3% of the incorrect patches without blocking any correct patch. Comment: ICSE 201
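    The classification heuristic lends itself to a short sketch. Below is a minimal Python illustration, assuming execution traces are sequences of statement identifiers; the similarity measure (difflib's sequence ratio) and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
from difflib import SequenceMatcher

def trace_similarity(trace_a, trace_b):
    """Similarity of two execution traces (sequences of statement IDs)."""
    return SequenceMatcher(None, trace_a, trace_b).ratio()

def patch_looks_correct(passing_pairs, failing_pairs, threshold=0.5):
    """Heuristic classification of a candidate patch.

    passing_pairs: (original_trace, patched_trace) per originally passing test.
    failing_pairs: (original_trace, patched_trace) per originally failing test.
    A plausible patch keeps passing tests behaving similarly while making
    the originally failing tests behave differently.
    """
    passing_ok = all(trace_similarity(a, b) >= threshold for a, b in passing_pairs)
    failing_ok = all(trace_similarity(a, b) < threshold for a, b in failing_pairs)
    return passing_ok and failing_ok

# Toy example: the passing test keeps its trace, the failing test diverges.
passing = [(["s1", "s2", "s3"], ["s1", "s2", "s3"])]
failing = [(["s1", "s4", "s5"], ["s1", "s2", "s3"])]
print(patch_looks_correct(passing, failing))  # True
```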

    A kernel density estimate-based approach to component goodness modeling

    Intermittent fault localization approaches account for the fact that faulty components may fail intermittently by considering a parameter (known as goodness) that quantifies the probability that a faulty component still exhibits correct behavior. Current state-of-the-art approaches (1) assume that this goodness probability is context-independent and (2) provide no means of integrating past diagnosis experience into the diagnostic mechanism. In this paper, we present a novel approach, coined Non-linear Feedback-based Goodness Estimate (NFGE), that uses kernel density estimation (KDE) to address these limitations. We evaluated the approach with both synthetic and real data, yielding lower estimation errors and thus increased diagnosis performance.
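    As a rough illustration of the KDE ingredient, the sketch below fits a kernel density estimate over hypothetical past goodness observations and reads off an estimate; the data, the one-dimensional setting, and the mode-of-density readout are assumptions for illustration, not NFGE's actual estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical past diagnosis experience: observed goodness values, i.e.
# probabilities that the faulty component still behaved correctly.
past_goodness = np.array([0.10, 0.15, 0.20, 0.55, 0.60, 0.62, 0.90])

# Fit a kernel density estimate over the historical observations.
kde = gaussian_kde(past_goodness)

# Evaluate the density on a grid and take its mode as the goodness
# estimate fed back into the diagnostic mechanism.
grid = np.linspace(0.0, 1.0, 201)
density = kde(grid)
print(f"estimated goodness: {grid[np.argmax(density)]:.2f}")
```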

    Application of social networking algorithms in program analysis: understanding execution frequencies

    Some parts of a program may be used often at runtime, whereas other parts may be used rarely or not at all. In this exploratory study, we propose an approach to predict how frequently different parts of a program will be used at runtime, without actually running the program. Knowledge of the most frequently executed parts can help identify the most critical and most testable parts of a program, while the portions predicted to be rarely executed tend to be the hard-to-test parts; knowing these can aid the early development of test cases. In our approach, we statically analyse code or static models of code (such as UML class diagrams) using quantified social-networking measures and web-structure-mining measures. These measures assign ranks to different portions of the code, which we use to predict the relative frequency with which each section will be used. We validated these predicted rank orderings by running the program with a common set of use cases and identifying the actual rank ordering, and we compared our predictions with measures based on direct coupling or lines of code. Our predictions fared better: they were statistically more strongly correlated with the actual rank ordering than the other measures. We present a prototype tool, written as an Eclipse plug-in, that implements and validates our approach. Given the source code of a Java program, the tool computes the metrics required by our approach and ranks all classes in order of how frequently they are expected to be used. It can also instrument the source code to log, at runtime, the information needed to validate our predictions.
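    To give a flavor of such measures, the sketch below applies one web-structure-mining measure (PageRank, via networkx) to a toy class-dependency graph; the class names and graph are hypothetical, and the paper combines several social-networking and web-mining measures rather than PageRank alone.

```python
import networkx as nx

# Hypothetical class-dependency graph from static analysis:
# an edge A -> B means class A references class B.
deps = nx.DiGraph([
    ("OrderController", "OrderService"),
    ("OrderService", "OrderRepository"),
    ("OrderService", "Logger"),
    ("ReportJob", "OrderRepository"),
    ("ReportJob", "Logger"),
])

# PageRank as a proxy for how often each class is expected to be
# exercised at runtime: classes that many others depend on rank highest.
ranks = nx.pagerank(deps)
for cls, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{cls:16} {score:.3f}")
```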

    Design and Implementation of a Tracer Driver: Easy and Efficient Dynamic Analyses of Constraint Logic Programs

    Tracers provide users with useful information about program executions. In this article, we propose a "tracer driver". From a single tracer, it provides a powerful front-end enabling multiple dynamic analysis tools to be implemented easily, while limiting the overhead of trace generation. The relevant execution events are specified by flexible event patterns, and a large variety of trace data can be given either systematically or "on demand". The proposed tracer driver has been designed in the context of constraint logic programming; experiments have been made within GNU-Prolog. Execution views provided by existing tools have been easily emulated with a negligible overhead, and experimental measurements show that the flexibility and power of the described architecture lead to good performance: the tracer driver overhead is inversely proportional to the average time between two traced events. Whereas the principles of the tracer driver are independent of the traced programming language, it is best suited to high-level languages, such as constraint logic programming, where each traced execution event encompasses numerous low-level execution steps. Furthermore, constraint logic programming is especially hard to debug, and the current environments do not provide all the useful dynamic analysis tools; they can benefit significantly from our tracer driver, which enables dynamic analyses to be integrated at a very low cost. Comment: To appear in Theory and Practice of Logic Programming (TPLP), Cambridge University Press. 30 pages
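    The architecture can be pictured as a filter-and-dispatch loop: one tracer emits events, and the driver forwards each event only to the analyses whose patterns match it. The Python sketch below illustrates that idea only; the actual driver is built into a Prolog tracer, and the event fields and method names here are assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Event:
    port: str                      # e.g. "call", "exit", "fail" in CLP tracing
    predicate: str
    depth: int
    attributes: Dict[str, object] = field(default_factory=dict)

class TracerDriver:
    """One tracer, many analyses: an event is forwarded only if some
    subscribed pattern matches it, keeping trace-generation overhead
    proportional to what the tools actually need."""

    def __init__(self):
        self._subs: List[Tuple[Callable[[Event], bool],
                               Callable[[Event], None]]] = []

    def subscribe(self, pattern, handler):
        self._subs.append((pattern, handler))

    def emit(self, event: Event):
        for pattern, handler in self._subs:
            if pattern(event):
                handler(event)

# Usage: a tiny "analysis tool" counting failures per predicate.
driver = TracerDriver()
fail_counts: Dict[str, int] = {}
driver.subscribe(
    pattern=lambda e: e.port == "fail",
    handler=lambda e: fail_counts.update(
        {e.predicate: fail_counts.get(e.predicate, 0) + 1}),
)
driver.emit(Event(port="fail", predicate="queens/2", depth=3))
print(fail_counts)  # {'queens/2': 1}
```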

    Spectrum-Based Fault Localization for Diagnosing Concurrency Faults

    Concurrency faults are activated by specific thread interleavings at runtime. Traditional fault localization techniques and static analysis fall short of diagnosing these faults efficiently, and existing dynamic fault-localization techniques focus on pinpointing data-access patterns that are subject to concurrency faults. In this paper, we instead propose a spectrum-based fault localization technique for localizing faulty code blocks. We systematically instrument the program to create versions that run under particular combinations of thread interleavings, run tests on all these versions, and use spectrum-based fault localization to correlate detected errors with concurrently executing code blocks. We have implemented a tool and applied our approach to several industrial case studies, which show that it can localize concurrency faults effectively and efficiently.
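    Spectrum-based fault localization scores each code block by how strongly its coverage correlates with failing runs. The sketch below uses the Ochiai coefficient, a common choice in the SBFL literature (the abstract does not name the paper's exact metric), over toy spectra in which each run corresponds to one instrumented interleaving.

```python
from math import sqrt

def ochiai(e_f, n_f, e_p):
    """Ochiai suspiciousness: e_f = failing runs covering the block,
    n_f = failing runs not covering it, e_p = passing runs covering it."""
    denom = sqrt((e_f + n_f) * (e_f + e_p))
    return e_f / denom if denom else 0.0

# Toy spectra: (covered code blocks, run failed?) per instrumented run,
# where each run pins down one combination of thread interleavings.
spectra = [
    ({"b1", "b2"}, True),
    ({"b1", "b3"}, True),
    ({"b2", "b3"}, False),
]
total_failing = sum(1 for _, failed in spectra if failed)
for block in sorted({b for cov, _ in spectra for b in cov}):
    e_f = sum(1 for cov, failed in spectra if failed and block in cov)
    e_p = sum(1 for cov, failed in spectra if not failed and block in cov)
    print(block, round(ochiai(e_f, total_failing - e_f, e_p), 3))
# b1 scores highest: it is covered by every failing run and no passing run.
```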