
    Studying and Leveraging User-Provided Logs in Bug Reports for Debugging Assistance

    Bug reports provide important information for developers to debug user-reported issues. During the debugging process, developers need to study the bug report and examine user-provided logs to understand the system execution paths that lead to the problem. Prior studies on bug reports also found that such user-provided logs often contain information that is valuable to developers for debugging. In this thesis, we conduct a tool-assisted study of user-provided logs in bug reports. Our goal is to understand the challenges that developers may encounter when analyzing the logs, and how many additional buggy classes the logs can help identify. In particular, we study both system-generated logs and exception stack traces. Our approach simulates developers' debugging process by 1) identifying the locations in the source code where the logs were generated, 2) re-constructing execution paths by finding the call paths that can be traced back from the logs, and 3) studying the additional buggy classes that the re-constructed execution paths identify. We conduct our study on eight large-scale open-source systems with a total of 1,145 bug reports that contain logs. We find that the execution paths cannot be constructed in 32% of the studied bug reports, since many logs can no longer be found in the source code due to code evolution, and users often provide logs that are generated by third-party frameworks. In the remaining cases, the re-constructed execution paths identify 15% additional buggy classes in 41% of the bug reports. Through a comprehensive manual study, we find that the main reason the re-constructed execution paths fail to identify additional buggy classes is that reporters often attach only logs that describe the unexpected behavior (e.g., stack traces) without additional logs to illustrate the system execution. In summary, this thesis highlights both the challenges and the potential of using user-provided logs to assist developers with debugging. It also reveals common issues with user-provided logs in bug reports and provides suggestions for future research.
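    The abstract describes a three-step process: match log lines back to the logging statements that produced them, trace call paths backwards from those locations, and collect the classes on the resulting paths. The sketch below illustrates that idea only; it is not the thesis tooling, and the function names, template-matching regexes, and call-graph representation are assumptions made for illustration.

    ```python
    # Illustrative sketch (not the thesis tooling): map user-provided log lines back to
    # logging statements extracted from the source code, then walk a pre-built static
    # call graph backwards to collect the classes on candidate execution paths.
    import re

    def find_log_statement(log_line, log_templates):
        """Match a runtime log line against static log-message templates.

        log_templates: dict mapping a template regex -> (class_name, method_name).
        Returns the location that most likely produced the line, or None when the
        statement cannot be found (e.g., code evolution or third-party framework logs).
        """
        for pattern, location in log_templates.items():
            if re.search(pattern, log_line):
                return location
        return None

    def reconstruct_execution_path(locations, reverse_call_graph, max_depth=5):
        """Trace call paths backwards from the matched log locations.

        reverse_call_graph: dict mapping (class, method) -> set of caller (class, method).
        Returns the set of classes reached by walking callers up to max_depth hops,
        i.e., the additional classes that the re-constructed paths point to.
        """
        reached = set()
        frontier = list(locations)
        for _ in range(max_depth):
            next_frontier = []
            for node in frontier:
                for caller in reverse_call_graph.get(node, ()):
                    if caller not in reached:
                        reached.add(caller)
                        next_frontier.append(caller)
            frontier = next_frontier
        return {cls for cls, _method in reached}
    ```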

    Version history, similar report, and structure: Putting them together for improved bug localization

    During the evolution of a software system, a large number of bug reports are submitted. Locating the source code files that need to be fixed to resolve the bugs is a challenging problem, so there is a need for a technique that can automatically identify these buggy files. A number of bug localization solutions that take in a bug report and output a ranked list of files sorted by their likelihood of being buggy have been proposed in the literature. However, the accuracy of these tools still needs to be improved. In this paper, to address this need, we propose AmaLgam, a new method for locating relevant buggy files that puts together version history, similar reports, and structure. To do this, AmaLgam integrates a bug prediction technique used at Google, which analyzes version history, with a bug localization technique named BugLocator, which analyzes similar reports from the bug report system, and the state-of-the-art bug localization technique BLUiR, which considers structure. We perform a large-scale experiment on four open source projects, namely AspectJ, Eclipse, SWT, and ZXing, to localize more than 3,000 bugs. Compared with the history-aware bug localization solution of Sisman and Kak, our approach achieves a 46.1% improvement in terms of mean average precision (MAP). Compared with BugLocator, our approach achieves a 24.4% improvement in terms of MAP. Compared with BLUiR, our approach achieves a 16.4% improvement in terms of MAP.
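    AmaLgam, as summarized above, fuses three per-file signals: a history-based bug prediction score, a similar-report score in the style of BugLocator, and a structure-based score in the style of BLUiR. One plausible way to picture that fusion is a weighted combination over a shared ranking, sketched below; the weights, normalization, and function names are illustrative assumptions, not the paper's exact formulation or parameters.

    ```python
    # Minimal sketch of combining three bug localization signals into one ranking.
    # Each *_scores argument maps a source file path to a suspiciousness score in [0, 1].
    def amalgam_style_rank(history_scores, report_scores, structure_scores,
                           w_history=0.3, w_report=0.35, w_structure=0.35):
        """Return file paths sorted from most to least likely to be buggy."""
        files = set(history_scores) | set(report_scores) | set(structure_scores)
        combined = {
            f: (w_history * history_scores.get(f, 0.0)
                + w_report * report_scores.get(f, 0.0)
                + w_structure * structure_scores.get(f, 0.0))
            for f in files
        }
        return sorted(combined, key=combined.get, reverse=True)
    ```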