
    Automatic Repair of Buggy If Conditions and Missing Preconditions with SMT

    We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite that contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. The approach consists of collecting data from multiple instrumented test-suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result, if one exists, into a source code patch. Nopol repairs object-oriented code and allows the patches to contain nullness checks as well as specific method calls. Comment: CSTVA'2014, India (2014).
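    To make the two defect classes concrete, here is a hypothetical Java sketch (my own illustration, not an example from the paper; the class and method names are invented). The trailing comments show the shape of patch that a Nopol-style repair would synthesize:

    // Hypothetical examples (not from the paper) of the two defect
    // classes Nopol targets; patch shapes are shown in comments.
    public class NopolDefectClasses {

        // Defect class 1: a buggy if condition.
        static boolean contains(int[] values, int target) {
            for (int value : values) {
                if (value > target) {     // buggy; patch: value == target
                    return true;
                }
            }
            return false;
        }

        // Defect class 2: a missing precondition.
        static int length(String s) {
            // Patch inserts a guard here, e.g.: if (s == null) return 0;
            return s.length();            // NullPointerException when s == null
        }

        public static void main(String[] args) {
            // Failing test embodying the first bug: 0 is not in the
            // array, yet the buggy condition reports that it is.
            System.out.println(contains(new int[] { 1, 2, 3 }, 0)); // true (wrong)
        }
    }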

    Nopol: Automatic Repair of Conditional Statement Bugs in Java Programs

    We propose NOPOL, an approach to the automatic repair of buggy conditional statements (i.e., if-then-else statements). This approach takes a buggy program as well as a test suite as input and generates a patch with a conditional expression as output. The test suite is required to contain passing test cases that model the expected behavior of the program and at least one failing test case that reveals the bug to be repaired. The process of NOPOL consists of three major phases. First, NOPOL employs angelic fix localization to identify expected values of a condition during test execution. Second, runtime trace collection gathers variables and their actual values, including primitive data types and object-oriented features (e.g., nullness checks), to serve as building blocks for patch generation. Third, NOPOL encodes the collected data as an instance of a Satisfiability Modulo Theory (SMT) problem; a feasible solution to the SMT instance is then translated back into a code patch. We evaluate NOPOL on 22 real-world bugs (16 bugs with buggy if conditions and 6 bugs with missing preconditions) in two large open-source projects, namely Apache Commons Math and Apache Commons Lang. Empirical analysis of these bugs shows that our approach can effectively fix bugs with buggy if conditions and missing preconditions. We illustrate the capabilities and limitations of NOPOL using case studies of real bug fixes.
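    The first phase can be pictured with a small, self-contained Java sketch (a deliberate simplification of mine; NOPOL's real instrumentation differs in detail). The suspicious condition is wrapped so its outcome can be forced during the failing test; any forced value that makes the test pass becomes an "angelic value" constraining the condition to be synthesized:

    // Minimal sketch of angelic fix localization (my simplification,
    // not NOPOL's actual code).
    import java.util.function.BooleanSupplier;

    public class AngelicFixLocalization {

        // Global switch consulted by the instrumented condition below.
        static Boolean forcedOutcome = null;

        // Instrumented form of "if (cond)": the original expression is
        // wrapped so its outcome can be overridden at runtime.
        static boolean evaluate(BooleanSupplier originalCondition) {
            if (forcedOutcome != null) {
                return forcedOutcome;   // angelic value under trial
            }
            return originalCondition.getAsBoolean();
        }

        // Program under repair, with the suspicious condition instrumented.
        static int buggyMin(int a, int b) {
            if (evaluate(() -> a > b)) {   // buggy condition: should be a < b
                return a;
            }
            return b;
        }

        // Failing test case: min(1, 2) should be 1.
        static boolean failingTestPasses() {
            return buggyMin(1, 2) == 1;
        }

        public static void main(String[] args) {
            // Try each angelic value; here forcing "true" makes the
            // failing test pass, so "true" is the expected value of the
            // fixed condition in the state a=1, b=2.
            for (boolean angelic : new boolean[] { true, false }) {
                forcedOutcome = angelic;
                System.out.println("forced=" + angelic
                    + " -> test passes: " + failingTestPasses());
            }
            forcedOutcome = null;
        }
    }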

    On Oracles for Automated Diagnosis and Repair of Software Bugs

    This HDR focuses on my work on automatic diagnosis and repair over the past years. Among my past publications, it highlights three contributions on this topic, published respectively in ACM Transactions on Software Engineering and Methodology (TOSEM), IEEE Transactions on Software Engineering (TSE), and Elsevier Information & Software Technology (IST). My goal is to show that these three contributions share something deep: they are founded on a unifying concept, the oracle. The first contribution is about statistical oracles. In the context of object-oriented software, we have defined a notion of context and normality that is specific to one fault class: missing method calls. The inferred regularities act as an oracle, and their violations are considered bugs. The second contribution is about test-case-based oracles for automatic repair. We describe an automatic repair system that fixes failing test cases by generating a patch. It is founded on the idea of refining the knowledge given by the violation of the failing test case's oracle into finer-grained information, which we call a "micro-oracle". By considering micro-oracles, we obtain at the same time a precise fault-localization diagnosis and a well-formed input-output specification to be used for program synthesis in order to repair the bug. The third contribution discusses a novel generic oracle in the context of exception handling. A generic oracle states properties that hold for many domains. Our technique verifies compliance with this new oracle using test-suite execution and exception injection. The document concludes with a research agenda on the future of engineering ultra-dependable and antifragile software systems.
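    As a rough illustration of the first contribution (a toy example of mine, not from the HDR; the call sets are invented), the statistical-oracle idea can be sketched as majority voting over the method calls observed at similar code locations:

    // Toy sketch of a statistical oracle for missing method calls
    // (my simplification, not the system described in the HDR).
    import java.util.*;

    public class MissingCallOracle {
        public static void main(String[] args) {
            // Each entry: the set of methods called on an object in one
            // code location (a heavily simplified "context").
            List<Set<String>> locations = List.of(
                Set.of("setText", "pack", "show"),
                Set.of("setText", "pack", "show"),
                Set.of("setText", "pack", "show"),
                Set.of("setText", "show"));         // "pack" is missing

            // The majority pattern acts as the oracle.
            Map<Set<String>, Integer> counts = new HashMap<>();
            for (Set<String> calls : locations) {
                counts.merge(calls, 1, Integer::sum);
            }
            Set<String> majority = Collections.max(
                counts.entrySet(), Map.Entry.comparingByValue()).getKey();

            // Deviations from the majority pattern are flagged as bugs.
            for (Set<String> calls : locations) {
                if (!calls.equals(majority)) {
                    Set<String> missing = new HashSet<>(majority);
                    missing.removeAll(calls);
                    System.out.println("Possible missing call(s): " + missing);
                }
            }
        }
    }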

    Automatic Repair of Real Bugs: An Experience Report on the Defects4J Dataset

    Defects4J is a large, peer-reviewed, structured dataset of real-world Java bugs. Each bug in Defects4J comes with a test suite and at least one failing test case that triggers the bug. In this paper, we report on an experiment exploring the effectiveness of automatic repair on Defects4J. The results show that 47 bugs of the Defects4J dataset can be automatically repaired by state-of-the-art repair systems. This sets a baseline for future research on automatic repair for Java. We manually analyzed 84 different patches to assess their real correctness. In total, 9 real Java bugs can be correctly fixed with test-suite-based repair. This analysis shows that test-suite-based repair suffers from under-specified bugs, for which trivial and incorrect patches still pass the test suite. With respect to practical applicability, it takes on average 14.8 minutes to find a patch. The experiment was run on a scientific grid, totaling 17.6 days of computation time. All the repair systems and experimental results are publicly available on GitHub to facilitate future research on automatic repair.
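    The under-specification problem can be illustrated with a hypothetical toy example (mine, not drawn from the dataset): when the test suite is too weak, a trivial constant-returning patch passes every test while being plainly incorrect:

    // Hypothetical illustration (not from the study) of an
    // under-specified bug: the suite exercises too few inputs, so a
    // trivial, incorrect patch satisfies it.
    public class UnderSpecifiedBug {

        // Intended behavior: return whether x is strictly positive.
        static boolean isPositive(int x) {
            return true;  // trivial "patch": a constant passes the weak suite
        }

        public static void main(String[] args) {
            // The entire test suite: one positive case, no negative case.
            if (isPositive(5)) {
                System.out.println("Test suite passes, yet the patch is"
                    + " wrong, e.g. isPositive(-1) = " + isPositive(-1));
            }
        }
    }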

    Automatic Repair of Infinite Loops

    Research on automatic software repair is concerned with the development of systems that automatically detect and repair bugs. One well-known class of bugs is the infinite loop. Every computer programmer or user has, at least once, experienced this type of bug. We state the problem of repairing infinite loops in the context of test-suite-based software repair: given a test suite with at least one failing test, generate a patch that makes all test cases pass. Consequently, repairing an infinite loop means having at least one test case that hangs by triggering the infinite loop. Our system for automatically repairing infinite loops is called Infinitel. We develop a technique to manipulate loops so that one can dynamically analyze the number of iterations of a loop, decide to interrupt the loop execution, and dynamically examine the state of the loop on a per-iteration basis. Then, in order to synthesize a new loop condition, we encode this set of program states as a code synthesis problem using a technique based on Satisfiability Modulo Theory (SMT). We evaluate our technique on seven seeded bugs and on seven real bugs. Infinitel is able to repair all of them, within seconds up to one hour on a standard laptop configuration.
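    A minimal Java sketch of the loop manipulation described above (my own reconstruction; Infinitel's actual implementation instruments loops automatically and differs in detail): count iterations, interrupt past a threshold, and record the per-iteration state from which a new condition can later be synthesized:

    // Sketch of loop instrumentation for infinite-loop repair
    // (my reconstruction, not Infinitel's code).
    import java.util.ArrayList;
    import java.util.List;

    public class LoopInstrumentation {

        static final int THRESHOLD = 1_000;  // bound that flags "infinite"

        public static void main(String[] args) {
            // Buggy loop: "i" is never incremented, so it never terminates.
            int i = 0, n = 5;
            int iterations = 0;
            List<int[]> states = new ArrayList<>();
            while (i < n) {                      // original (buggy) condition
                states.add(new int[] { i, n });  // per-iteration state snapshot
                if (++iterations > THRESHOLD) {
                    System.out.println("Loop interrupted after " + THRESHOLD
                        + " iterations: likely infinite.");
                    break;                       // forced interruption
                }
                // loop body that forgot to update i
            }
            // The recorded states (plus the iteration count that would make
            // the failing test pass) feed the SMT-based condition synthesis.
            System.out.println("Recorded " + states.size() + " states.");
        }
    }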

    Tortoise: Interactive System Configuration Repair

    System configuration languages provide powerful abstractions that simplify managing large-scale, networked systems. Thousands of organizations now use configuration languages, such as Puppet. However, specifications written in configuration languages can have bugs, and the shell remains the simplest way to debug a misconfigured system. Unfortunately, it is unsafe to use the shell to fix problems when a system configuration language is in use: a fix applied from the shell may cause the system to drift from the state specified in the configuration language. Thus, despite their advantages, configuration languages force system administrators to give up the simplicity and familiarity of the shell. This paper presents a synthesis-based technique that allows administrators to use configuration languages and the shell in harmony. Administrators can fix errors using the shell, and the technique automatically repairs the higher-level specification written in the configuration language. The approach (1) produces repairs that are consistent with the fix made using the shell; (2) produces repairs that are maintainable by minimizing edits made to the original specification; (3) ranks and presents multiple repairs when relevant; and (4) supports all shells the administrator may wish to use. We implement our technique for Puppet, a widely used system configuration language, and evaluate it on a suite of benchmarks under 42 repair scenarios. The top-ranked repair is selected by humans 76% of the time, and the human-equivalent repair is ranked 1.31 on average. Comment: Published version in the proceedings of the IEEE/ACM International Conference on Automated Software Engineering (ASE) 2017.
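    Design goals (2) and (3) can be sketched in a few lines of Java (a hypothetical illustration of mine, not Tortoise's implementation, which operates on Puppet manifests): candidate repairs are ranked so that those minimizing edits to the original specification come first:

    // Toy sketch of ranking candidate repairs by edit count
    // (my illustration, not Tortoise's code).
    import java.util.Comparator;
    import java.util.List;

    public class RepairRanking {

        // A candidate repair: a description of the patched specification
        // plus how many edits it makes to the original specification.
        record Candidate(String patchedSpec, int editCount) {}

        static List<Candidate> rank(List<Candidate> candidates) {
            return candidates.stream()
                .sorted(Comparator.comparingInt(Candidate::editCount))
                .toList();
        }

        public static void main(String[] args) {
            List<Candidate> ranked = rank(List.of(
                new Candidate("rewrite the whole manifest", 40),
                new Candidate("change one resource attribute", 1),
                new Candidate("add one resource block", 4)));
            // Smallest, most maintainable repair is presented first.
            ranked.forEach(c -> System.out.println(
                c.editCount() + " edit(s): " + c.patchedSpec()));
        }
    }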