
    Dynamic Slicing by On-demand Re-execution

    In this paper, we propose a novel approach that offers an alternative to the prevalent paradigm for dynamic slice construction. Dynamic slicing requires the dynamic data and control dependencies that arise in an execution. In the prevalent paradigm, memory reference information is recorded during a single execution and then traversed to extract dependencies. Such execute-once approaches and tools are challenged even by moderately sized executions of simple, short programs. We propose to shift the practical time complexity from execution size to slice size. In particular, our approach executes the program multiple times, tracking targeted information in each execution. We present a concrete algorithm that follows this on-demand re-execution paradigm and uses a novel concept, the frontier dependency, to incrementally build a dynamic slice. To focus dependency tracking, the algorithm relies on static analysis. We report an evaluation on the SV-COMP benchmark and the Antlr4 unit tests that provides evidence that on-demand re-execution can yield performance gains, particularly when the slice is small and the execution is large.
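
    As a rough illustration of the approach sketched in this abstract, the following Python fragment shows how an on-demand re-execution slicer built around frontier dependencies could be organized. It is a minimal sketch under stated assumptions: the callables static_possible_defs and run_with_tracking stand in for the paper's static analysis and instrumented re-execution and are not its actual interface.

        # A hedged sketch only: the two callables below stand in for the paper's
        # static analysis and instrumented re-execution and are not its real API.
        def slice_by_reexecution(criterion, static_possible_defs, run_with_tracking):
            """Incrementally build a dynamic slice, one re-execution per
            unresolved (frontier) dependency."""
            slice_stmts = set()        # statements known to belong to the slice
            frontier = {criterion}     # dependencies whose definitions are unknown
            resolved = set()
            while frontier:
                target = frontier.pop()
                resolved.add(target)
                # Static analysis narrows instrumentation to the statements that
                # could possibly define the target, keeping each run cheap.
                candidates = static_possible_defs(target)
                # Re-execute, recording only the dynamic definition of the target
                # and the set of data/control dependencies of that definition.
                defining_stmt, new_deps = run_with_tracking(target, candidates)
                if defining_stmt is not None:
                    slice_stmts.add(defining_stmt)
                    frontier |= (new_deps - resolved)
            return slice_stmts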

    Replaying and Isolating Failure-Inducing Program Interactions

    A program fails. What now? The programmer must debug the program to fix the problem, which involves two fundamental debugging tasks: first, the programmer has to reproduce the failure; second, she has to find the failure cause. Both tasks can turn into tedious, long, and dreary work on the one hand, and can be a factor that significantly drives up costs and risks on the other. The field of automated debugging aims to ease the search for failure causes. This work presents JINSI, a new twist on automated debugging that aims to combine ease of use with unprecedented effectiveness. Taking a single failing run, we automatically record the interactions between objects and minimize them to the set of calls relevant to the failure. The result is a minimal unit test that faithfully reproduces the failure at will: "Out of these 14,628 calls, only 2 are required". In a study of 17 real-life bugs, JINSI reduced the search space to 13.7% of the dynamic slice or 0.22% of the source code, with only 1 to 12 calls left to examine; this precision is not only significantly above the state of the art, but also at a level at which fault localization ceases to be a problem. Moreover, by combining delta debugging, event slicing, and dynamic slicing, we are able to automatically compute failure reproductions along cause-effect chains leading to the defect, both efficiently and effectively. JINSI provides minimal unit tests with related calls that pinpoint the circumstances under which the failure occurs at different abstraction levels. In this way, the approach discussed in this thesis ensures high diagnostic quality and enables the programmer to concentrate on automatically selected parts of the program relevant to the failure.
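
    The call-minimization step rests on delta debugging. The sketch below shows the classic ddmin algorithm applied to a recorded call sequence; replay_fails is a hypothetical oracle that replays a subset of the captured calls and reports whether the original failure still occurs, and JINSI's actual capture/replay machinery is not shown. On the abstract's example, a loop of this kind is what narrows 14,628 recorded calls down to the 2 that matter.

        # Classic ddmin over a recorded call sequence; a sketch, not JINSI's code.
        def ddmin(calls, replay_fails):
            """Return a 1-minimal subsequence of `calls` that still fails."""
            n = 2
            while len(calls) >= 2:
                chunk = max(1, len(calls) // n)
                subsets = [calls[i:i + chunk] for i in range(0, len(calls), chunk)]
                reduced = False
                for i, subset in enumerate(subsets):
                    complement = [c for j, s in enumerate(subsets) if j != i for c in s]
                    if replay_fails(subset):              # one chunk alone suffices
                        calls, n, reduced = subset, 2, True
                        break
                    if len(subsets) > 2 and replay_fails(complement):
                        calls, n, reduced = complement, max(n - 1, 2), True
                        break
                if not reduced:
                    if n >= len(calls):
                        break                             # finest granularity reached
                    n = min(len(calls), n * 2)
            return calls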

    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact that can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of an entire operating system with an overhead of a few percent on realistic workloads and with minimal installation cost. To give the construction of replay analysis tools an intuitive interface, this work implements a powerful virtual machine introspection layer that lets an analysis algorithm be programmed against the state of the recorded system in terms of familiar source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
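
    Purely as an illustration (the abstract does not spell out the introspection interface), an analysis programmed against a recorded execution through source-level names might read like the following sketch; every name used on the replay handle (events, read_struct, current_task, field) is a hypothetical placeholder, not the thesis's real API.

        # Hypothetical introspection-style analysis over a recorded execution.
        # All method names on `replay` and `event` are illustrative placeholders.
        def count_page_faults_per_process(replay):
            """Walk recorded page-fault events and tally them by process name,
            reading kernel state through source-level type and field names."""
            counts = {}
            for event in replay.events("page_fault"):          # recorded event stream
                task = event.read_struct("task_struct", event.current_task())
                name = task.field("comm")                       # process name field
                counts[name] = counts.get(name, 0) + 1
            return counts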

    Break the dead end of dynamic slicing: localizing data and control omission bug

    Dynamic slicing is a common way of identifying the root cause when a program fault is revealed. With the dynamic slicing technique, programmers can follow data and control flow along the program execution trace back to the root cause. However, the technique usually fails to work on omission bugs, i.e., faults caused by code that should have been executed but was not. In many cases, dynamic slicing over-skips the root cause when an omission bug happens, leading the debugging process to a dead end. In this work, we conduct an empirical study of the omission bugs in the Defects4J bug repository. Our study shows that (1) omission bugs are prevalent (46.4%) among all the studied bugs; (2) there are repeating patterns in the causes and fixes of omission bugs; and (3) the patterns of fixing omission bugs serve as a strong hint for breaking the slicing dead end. Based on our findings, we train a neural network model on the omission bugs in the Defects4J repository to recommend where to look when slicing can no longer make progress. We conduct an experiment applying our approach to 3193 mutated omission bugs that slicing fails to locate. The results show that our approach outperforms a random baseline in breaking the dead end and localizing the mutated omission bugs (63.8% versus 2.8%).
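
    To see why omission bugs lead slicing into a dead end, consider a data-dependence-only backward dynamic slice computed over a recorded trace, as in the sketch below; the trace format is an assumption of this sketch, and control dependencies are omitted for brevity. A statement that was never executed leaves no entry in the trace, so no dependency can ever pull it into the slice, which is exactly the dead end the learned recommendation model is meant to break.

        # Minimal backward dynamic slice over a trace of executed statements.
        # Each trace entry is (line, defined_vars, used_vars); this format is an
        # assumption for the sketch, and control dependencies are ignored.
        def backward_dynamic_slice(trace, criterion_step):
            """Return the source lines in the data-only dynamic slice for the
            statement instance at index `criterion_step`."""
            in_slice = {criterion_step}
            wanted = set(trace[criterion_step][2])     # variables used at the criterion
            for step in range(criterion_step - 1, -1, -1):
                line, defs, uses = trace[step]
                if wanted & set(defs):                 # reaching definition found
                    in_slice.add(step)
                    wanted -= set(defs)
                    wanted |= set(uses)
            return {trace[s][0] for s in in_slice}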

    Capture-based Automated Test Input Generation

    Testing object-oriented software is critical because object-oriented languages are commonly used to develop modern software systems. Many efficient test input generation techniques for object-oriented software have been proposed; however, state-of-the-art algorithms yield very low code coverage (e.g., less than 50%) on large-scale software. Therefore, one important and yet challenging problem is to generate desirable input objects for receivers and arguments that can achieve high code coverage (such as branch coverage) or help reveal bugs. Desirable objects help tests exercise new parts of the code. However, generating desirable objects has been a significant challenge for automated test input generation tools, partly because the search space for such objects is huge. To address this challenge, we propose a novel approach called Capture-based Automated Test Input Generation for Object-Oriented Unit Testing (CAPTIG). The contributions of this research are the following. First, CAPTIG enhances method-sequence generation techniques. Our approach introduces a set of new algorithms for guided input and method selection that increase code coverage. In addition, CAPTIG efficiently reduces the amount of generated input. Second, CAPTIG captures objects dynamically from program execution during either system testing or real use. These captured inputs can support existing automated test input generation tools, such as the random testing tool Randoop, in achieving higher code coverage. Third, CAPTIG statically analyzes branches that have not yet been covered and attempts to exercise them by mutating existing inputs, based on weakest-precondition analysis. This technique also contributes to achieving higher code coverage. Fourth, CAPTIG can be used to reproduce software crashes based on crash stack traces. This feature can considerably reduce the cost of analyzing and removing the causes of crashes. In addition, each CAPTIG technique can be applied independently to leverage existing testing techniques. We anticipate that our approach can achieve higher code coverage in less time and with fewer test inputs. To evaluate the approach, we performed experiments with well-known, large-scale open-source software and found that it helps achieve higher code coverage with less time and fewer test inputs.
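
    CAPTIG itself captures Java objects during system testing or real use; the Python sketch below only illustrates the general capture idea, wrapping chosen functions so that deep copies of the arguments observed at run time accumulate in a pool from which a generator such as Randoop could later draw inputs. The names capture_pool and captured are assumptions of this sketch, not CAPTIG's implementation.

        # Illustrative object capture: record deep copies of every argument
        # seen at run time so they can later seed automated test generation.
        import copy
        import functools

        capture_pool = {}        # maps a type name to captured example objects

        def captured(func):
            """Decorator that snapshots the arguments of each call."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for value in list(args) + list(kwargs.values()):
                    capture_pool.setdefault(type(value).__name__, []).append(copy.deepcopy(value))
                return func(*args, **kwargs)
            return wrapper

        # Usage idea: decorate entry points, exercise the system, then reuse the
        # pooled objects as receiver/argument inputs for generated tests.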

    Semantic Analyses to Detect and Localize Software Regression Errors

    Ph.D. dissertation