    Automated Failure Explanation Through Execution Comparison

    When fixing a bug in software, developers must build an understanding, or explanation, of the bug and how it flows through a program. The effort developers must put into building this explanation is costly and laborious, so developers need tools that can assist them in explaining the behavior of bugs. Dynamic slicing is one technique that can effectively show how a bug propagates through an execution up to the point where a program fails. However, dynamic slices are large because they do not explain just the bug itself; they include extra information that explains any observed behavior that might be connected to the bug. The explanation of the bug is thus hidden within this tangentially related information. This dissertation addresses this problem and shows how a failing execution and a correct execution may be compared in order to construct explanations that include only information about what caused the bug. As a result, these automated explanations are significantly more concise than those produced by existing dynamic slicing techniques. To enable the comparison of executions, we develop new dynamic analysis techniques that identify the commonalities and differences between executions. First, we devise and implement the notion of a point within an execution that may exist across multiple executions. We also note that comparing executions involves comparing the state, that is, the variables and their values, that exists within the executions at different execution points; we therefore design an approach for identifying the locations of variables in different executions so that their values may be compared. Leveraging these tools, we design a system for identifying the behaviors within an execution that can be blamed for a bug and that together compose an explanation for the bug. These explanations are up to two orders of magnitude smaller than those produced by existing state-of-the-art techniques. We also examine how different choices of a correct execution for comparison can affect the practicality and quality of the explanations our system produces.
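
    To make the comparison concrete, here is a minimal sketch of the general idea, not the dissertation's actual implementation; the trace format, function names, and example traces are hypothetical. It treats an execution point as a (statement location, occurrence count) pair, so the "same" point can be located in both a failing and a passing run, and it blames points whose variable states diverge or that appear only in the failing run:

```python
from collections import Counter

def canonical_points(trace):
    """trace: list of (location, state) pairs in execution order.
    Yields ((location, occurrence), state); the per-location occurrence
    count lets a point inside a loop be matched across executions."""
    seen = Counter()
    for location, state in trace:
        seen[location] += 1
        yield (location, seen[location]), state

def explain_failure(failing_trace, passing_trace):
    """Return execution points whose states differ between the two runs,
    or that occur only in the failing run: a crude stand-in for the
    'blamed' behaviors that compose an explanation."""
    passing = dict(canonical_points(passing_trace))
    blamed = []
    for point, state in canonical_points(failing_trace):
        if point not in passing:
            blamed.append((point, state, None))            # failing-only behavior
        elif passing[point] != state:
            blamed.append((point, state, passing[point]))  # divergent state
    return blamed

# Hypothetical traces: ('file:line', {variable: value}) snapshots.
fail = [('calc.py:3', {'x': 1}), ('calc.py:4', {'x': 1, 'y': 0}),
        ('calc.py:7', {'x': 1, 'y': 0})]
ok   = [('calc.py:3', {'x': 1}), ('calc.py:4', {'x': 1, 'y': 2}),
        ('calc.py:5', {'x': 1, 'y': 2})]
for point, bad, good in explain_failure(fail, ok):
    print(point, 'failing state:', bad, 'passing state:', good)
```

    The occurrence count is the part that matters: it gives a statement executed many times a stable identity, which is what allows state at "the same point" to be compared even when the two runs take different paths.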

    Understanding object-oriented source code from the behavioural perspective

    Comprehension is a key activity that underpins a variety of software maintenance and engineering tasks. The task of understanding object-oriented systems is hampered by the fact that the code segments related to a user-level function tend to be distributed across the system. We introduce a tool-supported code extraction technique that addresses this issue. Given a minimal amount of information about a behavioural element of the system that is of interest (such as a use-case), it extracts a trail of the methods (and method invocations) through the system that are needed to achieve an understanding of how that element is implemented. We demonstrate the feasibility of our approach by implementing it as part of a code extraction tool, presenting a case study, and evaluating the approach and tool against a set of established criteria for program comprehension tools.
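
    As a rough illustration of the trail idea only; the names, trace format, and entry point below are assumed, and the actual tool works from richer dynamic information than bare caller/callee pairs. The sketch collects the methods transitively invoked from a use-case's entry method in a recorded call trace:

```python
from collections import defaultdict

def extract_trail(call_edges, entry_method):
    """call_edges: (caller, callee) pairs recorded while exercising the
    system. Returns the methods transitively invoked from the entry
    point: the 'trail' a reader would follow to understand that
    behavioural element."""
    callees = defaultdict(set)
    for caller, callee in call_edges:
        callees[caller].add(callee)
    trail, worklist = set(), [entry_method]
    while worklist:
        method = worklist.pop()
        if method in trail:
            continue
        trail.add(method)
        worklist.extend(callees[method])
    return trail

# Hypothetical trace for a 'checkout' use-case; note that methods reached
# only from other entry points (Admin.report) are excluded from the trail.
edges = [('Cart.checkout', 'Cart.total'), ('Cart.total', 'Item.price'),
         ('Cart.checkout', 'Payment.charge'), ('Admin.report', 'Cart.total')]
print(sorted(extract_trail(edges, 'Cart.checkout')))
# -> ['Cart.checkout', 'Cart.total', 'Item.price', 'Payment.charge']
```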