168 research outputs found

    On Oracles for Automated Diagnosis and Repair of Software Bugs

    This HDR focuses on my work on automatic diagnosis and repair over the past years. Among my past publications, it highlights three contributions on this topic, published respectively in ACM Transactions on Software Engineering and Methodology (TOSEM), IEEE Transactions on Software Engineering (TSE), and Elsevier Information & Software Technology (IST). My goal is to show that these three contributions share something deep: they are founded on a unifying concept, that of the oracle. The first contribution is about statistical oracles. In the context of object-oriented software, we have defined a notion of context and normality that is specific to one fault class: missing method calls. The inferred regularities act as oracles, and their violations are considered bugs. The second contribution is about test-case-based oracles for automatic repair. We describe an automatic repair system that fixes failing test cases by generating a patch. It is founded on the idea of refining the knowledge given by the violation of the failing test case's oracle into finer-grained information, which we call a “micro-oracle”. By considering micro-oracles, we obtain at the same time a precise fault-localization diagnostic and a well-formed input-output specification to be used for program synthesis in order to repair a bug. The third contribution discusses a novel generic oracle in the context of exception handling. A generic oracle states properties that hold across many domains. Our technique verifies compliance with this new oracle using test suite execution and exception injection. This document concludes with a research agenda on the future of engineering ultra-dependable and antifragile software systems.
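    The missing-method-call oracle of the first contribution can be illustrated with a simple majority rule: a method call that almost every similar usage makes, but that one usage omits, is flagged as a likely bug. The sketch below is a hypothetical toy version of that idea, not the published algorithm; the function name, threshold, and example data are all assumptions for illustration.

```python
from collections import Counter

def missing_call_oracle(usages, threshold=0.9):
    """Statistical oracle for missing method calls (toy sketch).

    `usages` is a list of sets of method names called in similar
    code contexts.  A call made by at least `threshold` of the
    usages is considered "expected"; any usage omitting an
    expected call is reported as a violation (a likely bug).
    """
    counts = Counter()
    for calls in usages:
        counts.update(calls)
    n = len(usages)
    expected = {m for m, c in counts.items() if c / n >= threshold}
    violations = []
    for i, calls in enumerate(usages):
        missing = expected - calls
        if missing:
            violations.append((i, missing))
    return violations

# Nine similar usages call all three methods; the tenth omits one.
usages = [{"setText", "pack", "setVisible"}] * 9 + [{"setText", "pack"}]
violations = missing_call_oracle(usages)  # usage 9 likely misses setVisible
```

Here the inferred regularity ("90% of similar contexts call setVisible") plays the role of the oracle, and its violation is the bug report.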

    Automatic Software Repair: a Bibliography

    This article presents a survey on automatic software repair. Automatic software repair consists of automatically finding a solution to software bugs without human intervention. This article considers all kinds of repairs. First, it discusses behavioral repair, where test suites, contracts, models, and crashing inputs are taken as oracles. Second, it discusses state repair, also known as runtime repair or runtime recovery, with techniques such as checkpoint and restart, reconfiguration, and invariant restoration. The uniqueness of this article is that it spans the research communities that contribute to this body of knowledge: software engineering, dependability, operating systems, programming languages, and security. It provides a novel and structured overview of the diversity of bug oracles and repair operators used in the literature.
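    Behavioral repair with a test suite as the oracle is commonly realized as a generate-and-validate loop: propose a candidate patch, accept it only if every test passes. The sketch below is a minimal hypothetical instance; the tiny "program" (a clamp function parameterized at a single patch point), its bug, and the mutation operators are all invented for illustration.

```python
import operator
import random

def make_clamp(cmp):
    """A tiny 'program' parameterized at its single patch point:
    the comparison used to check the lower bound."""
    def clamp(x, lo, hi):
        if cmp(x, lo):
            return lo
        if x > hi:
            return hi
        return x
    return clamp

def generate_and_validate(test_suite, mutations, max_tries=100, seed=0):
    """Generate-and-validate loop: the test suite is the oracle.
    A candidate patch is accepted only if every test passes."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = make_clamp(rng.choice(mutations))
        if all(test(candidate) for test in test_suite):
            return candidate
    return None

buggy = make_clamp(operator.gt)  # bug: lower bound uses > instead of <

tests = [
    lambda f: f(5, 0, 10) == 5,
    lambda f: f(-3, 0, 10) == 0,   # fails on the buggy version
    lambda f: f(42, 0, 10) == 10,
]

repaired = generate_and_validate(
    tests, [operator.lt, operator.le, operator.ge, operator.eq])
```

Note how weak the oracle is: any operator that satisfies the three tests is accepted, which is exactly why the literature surveyed here pays so much attention to the quality of bug oracles.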

    A Critical Review of "Automatic Patch Generation Learned from Human-Written Patches": Essay on the Problem Statement and the Evaluation of Automatic Software Repair

    At ICSE 2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues, and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular, we discuss the main evaluation criteria of automatic software repair: understandability, correctness, and completeness. We show that depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Finally, we discuss the nature of fix acceptability and its relation to the notion of software correctness. (Comment: ICSE 2014, India, 2014.)

    Test Case Purification for Improving Fault Localization

    Finding and fixing bugs are time-consuming activities in software development. Spectrum-based fault localization aims to identify the faulty position in source code based on the execution traces of test cases. Failing test cases and their assertions form test oracles for the failing behavior of the system under analysis. In this paper, we propose the novel concept of spectrum-driven test case purification for improving fault localization. The goal of test case purification is to separate existing test cases into small fractions (called purified test cases) and to enhance the test oracles to further localize faults. Combined with an original fault localization technique (e.g., Tarantula), test case purification yields a better ranking of the program statements. Our experiments on 1800 faults in six open-source Java programs show that test case purification can effectively improve existing fault localization techniques.
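    The spectrum-based ranking that purification feeds into can be illustrated with the classic Tarantula suspiciousness score, which ranks a statement higher the more its executions coincide with failing tests. The sketch below is a generic implementation of the well-known formula, not the paper's tool; the coverage matrix is a made-up example.

```python
def tarantula(coverage, passed):
    """Tarantula suspiciousness per statement.

    coverage[t][s] is True if test t executed statement s;
    passed[t] is True if test t passed.  The score for a statement
    is %failed / (%failed + %passed), where the percentages are
    taken over the failing and passing tests respectively.
    """
    total_pass = sum(1 for ok in passed if ok) or 1
    total_fail = sum(1 for ok in passed if not ok) or 1
    scores = []
    for s in range(len(coverage[0])):
        p = sum(1 for cov, ok in zip(coverage, passed) if cov[s] and ok)
        f = sum(1 for cov, ok in zip(coverage, passed) if cov[s] and not ok)
        fail_ratio, pass_ratio = f / total_fail, p / total_pass
        denom = fail_ratio + pass_ratio
        scores.append(fail_ratio / denom if denom else 0.0)
    return scores

# Three statements, three tests; only the failing test executes statement 2.
coverage = [
    [True, True, False],
    [True, True, False],
    [True, True, True],
]
passed = [True, True, False]
scores = tarantula(coverage, passed)  # statement 2 is ranked most suspicious
```

Purification helps precisely here: splitting a failing test into smaller purified tests sharpens the coverage spectra, so the suspicious statements separate more cleanly in such a ranking.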

    Tests4Py: A Benchmark for System Testing

    Benchmarks are among the main drivers of progress in software engineering research, especially in software testing and debugging. However, current benchmarks in this field are poorly suited to specific research tasks: they rely on weak system oracles such as crash detection, come with only a few unit tests, require elaborate setup, or cannot verify the outcome of system tests. Our Tests4Py benchmark addresses these issues. It is derived from the popular BugsInPy benchmark and includes 30 bugs from 5 real-world Python applications. Each subject in Tests4Py comes with an oracle to verify the functional correctness of system inputs. Besides, it enables the generation of system tests and unit tests, allowing for qualitative studies that investigate essential aspects of test sets, as well as extensive evaluations. These opportunities make Tests4Py a next-generation benchmark for research in test generation, debugging, and automatic program repair. (Comment: 5 pages, 4 figures.)

    Improving the Correctness of Automated Program Repair

    Developers spend much of their time fixing bugs in software programs. Automated program repair (APR) techniques aim to alleviate the burden of bug fixing from developers by generating patches at the source-code level. Recently, Generate-and-Validate (G&V) APR techniques have shown great potential to repair general bugs in real-world applications. Recent evaluations show that G&V techniques repair 8–17.7% of the collected bugs from mature Java or C open-source projects. Despite the promising results, G&V techniques may generate many incorrect patches and are not able to repair every single bug. This thesis contributes to improving the correctness of APR by strengthening the quality assurance of automatically-generated patches and by generating more correct patches through leveraging human knowledge. First, this thesis investigates whether improved test-suite-based validation can precisely identify incorrect patches generated by G&V, and whether it can help G&V generate more correct patches. The result of this investigation, Opad, which combines new fuzz-generated test cases and additional oracles (i.e., memory oracles), is proposed to identify incorrect patches and to help G&V repair more bugs correctly. The evaluation of Opad shows that the improved test-suite-based validation identifies 75.2% of the incorrect patches from G&V techniques. With the integration of Opad, SPR, one of the most promising G&V techniques, repairs one additional bug. Second, this thesis proposes novel APR techniques to repair more bugs correctly by leveraging human knowledge. Thus, APR techniques can repair new types of bugs that are not currently targeted by G&V APR techniques. Human knowledge about bug-fixing activities is recorded in forms such as commits of bug fixes, developers’ expertise, and documentation pages. Two techniques (APARE and Priv) are proposed to target two types of defects respectively: project-specific recurring bugs and vulnerability warnings from static analysis. 
    APARE automatically learns fix patterns from historical bug fixes (i.e., originally crafted by developers), utilizes a spectrum-based fault-localization technique to identify highly likely faulty methods, and applies the learned fix patterns to generate patches for developers to review. The key innovation of APARE is a percentage-based, semantics-aware matching algorithm between fix patterns and faulty locations. For the 20 recurring bugs, APARE generates 34 method fixes, 24 of which (70.6%) are correct; 83.3% (20 out of 24) are identical to the fixes written by developers. In addition, APARE complements current repair systems by generating 20 high-quality method fixes that RSRepair and PAR cannot generate. Priv is a multi-stage remediation system specifically designed for static-analysis security-testing (SAST) techniques. The prototype is built and evaluated on a commercial SAST product. The first stage of Priv prioritizes the workload of fixing vulnerability warnings based on shared fix locations. The likely fix locations are suggested based on a set of rules, developed in collaboration with two security experts. The second stage of Priv provides additional information essential for improving the efficiency of diagnosis and fixing. Priv offers two types of additional information: identifying true database/attribute-related warnings, and providing customized fix suggestions per warning. The evaluation shows that Priv suggests fix locations identical to those chosen by developers for 50–100% of the evaluated vulnerability findings. Priv identifies up to 2170 actionable vulnerability findings for the six evaluated projects. A manual examination confirms that Priv can generate high-quality patches for many of the evaluated vulnerability warnings.
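    The Opad-style idea of strengthening patch validation can be sketched as a two-stage check: a candidate patch must pass the original test suite and additionally satisfy an extra oracle on fuzz-generated inputs, which filters out patches that merely overfit the tests. The sketch below is a hypothetical stand-in: it uses a metamorphic property of an abs() routine as the extra oracle rather than Opad's actual memory oracles, and all names and data are invented.

```python
import random

def validate_patch(candidate, test_suite, fuzz_inputs, extra_oracle):
    """Stage 1: the original test suite must pass.
    Stage 2: an additional oracle is checked on fuzz-generated inputs."""
    if not all(test(candidate) for test in test_suite):
        return False
    return all(extra_oracle(candidate, x) for x in fuzz_inputs)

# Two candidate "patches" for an abs() routine.
overfit = lambda x: 3                     # passes the tests below, but wrong
correct = lambda x: x if x >= 0 else -x

tests = [lambda f: f(3) == 3, lambda f: f(-3) == 3]

rng = random.Random(0)
fuzz_inputs = [rng.randint(-1000, 1000) for _ in range(50)]

def abs_oracle(f, x):
    # Properties any correct abs() must satisfy on every input.
    return f(x) >= 0 and f(x) == f(-x) and f(x) in (x, -x)
```

The overfit patch survives the weak test-suite oracle but is rejected in stage 2, which is the kind of incorrect patch the thesis reports Opad identifying.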

    Leveraging input features for testing and debugging

    When debugging software, it is paramount to understand why the program failed in the first place. How can we aid the developer in this process, and how can we automate the understanding of a program failure? The behavior of a program is determined by its inputs. A common step in debugging is to determine how the inputs influence the program behavior. In this thesis, I present Alhazen, a tool that combines a decision tree and a test generator in a feedback loop to create an explanation of which inputs lead to a specific program behavior. This can be used to generate a diagnosis for a bug automatically. Alhazen is evaluated on generating additional inputs and on recognizing bug-triggering inputs. The thesis also makes first advances towards a user study on whether Alhazen helps software developers in debugging. Results show that Alhazen is effective for both use cases, and indicate that machine-learning approaches can help to understand program behavior. Alhazen naturally raises the question of what else can be done with input features. In this thesis I also present Basilisk, a tool that focuses testing on a specific part of the input. Basilisk is evaluated on its own, and an integration with Alhazen is explored. The results show promise, but also some challenges. All in all, the thesis shows that there is some potential in considering input features.
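    The learner/generator feedback loop behind Alhazen can be sketched in miniature: a one-feature decision stump stands in for the decision tree, and a trivial string generator stands in for the grammar-based test generator. The buggy program (failing on inputs longer than 42 characters), the length feature, and the generator below are hypothetical, chosen only to show the loop converging on an explanation of the failure boundary.

```python
import random

def learn_threshold(samples):
    """Fit a one-feature decision stump: the threshold on the feature
    value that best separates failing runs from passing ones."""
    best_t, best_err = None, float("inf")
    values = sorted({v for v, _ in samples})
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2
        err = sum((v > t) != fails for v, fails in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def alhazen_loop(program, feature, gen, rounds=5, per_round=20, seed=1):
    """Feedback loop: learn a candidate explanation (a threshold),
    then generate new inputs near the learned boundary to refine it."""
    rng = random.Random(seed)
    samples = [(feature(x), program(x))
               for x in (gen(rng) for _ in range(per_round))]
    t = learn_threshold(samples)
    for _ in range(rounds):
        focus = (gen(rng, near=t) for _ in range(per_round))
        samples += [(feature(x), program(x)) for x in focus]
        t = learn_threshold(samples)
    return t

def program(s):          # hypothetical bug: fails on long inputs
    return len(s) > 42   # True means "failure observed"

def feature(s):
    return len(s)

def gen(rng, near=None):
    n = (rng.randint(0, 100) if near is None
         else max(0, int(near) + rng.randint(-3, 3)))
    return "x" * n

boundary = alhazen_loop(program, feature, gen)
```

The learned threshold is the machine-readable explanation ("inputs longer than ~42 characters fail"); targeted generation near the boundary is what distinguishes the feedback loop from one-shot learning on random samples.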
