9 research outputs found

    Using FindBugs on production software

    Evaluating static analysis defect warnings on production software

    Herodotos: A Tool to Expose Bugs' Lives

    Software is continually evolving, to improve performance, correct errors, and add new features. Code modifications, however, inevitably lead to the introduction of defects. Preventing defects requires understanding why they occur, so it is important to develop tools and practices that aid in defect finding, tracking, and prevention. In this paper, we propose a methodology and associated tool, Herodotos, to study defects over time. Herodotos semi-automatically tracks defects over multiple versions of a software project, independent of other changes in the source files. It builds a graphical history of each defect and reports statistics based on the results. We have evaluated this approach on the history of a representative range of open source projects over the last three years. For each project, we explore several kinds of defects that have been found by static code analysis. We analyze the generated results to compare the selected software projects and defect kinds.
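
    A minimal sketch, in Java, of the kind of cross-version matching described above. The Report record and the "same file and message, nearest line" heuristic are illustrative assumptions, not the actual Herodotos implementation.

    import java.util.*;

    // Hypothetical sketch of cross-version defect matching (not the Herodotos code itself).
    public class DefectTracker {

        // A static-analysis report: file, line, and defect message.
        record Report(String file, int line, String message) {}

        // Match reports from one version to the next: a report is treated as the same defect
        // when file and message agree, even if its line has drifted because of unrelated edits
        // above it. Old reports left unmatched have been fixed; new reports left unmatched
        // have been introduced. Chaining these matches over versions yields a per-defect history.
        static Map<Report, Report> match(List<Report> oldReports, List<Report> newReports) {
            Map<Report, Report> sameDefect = new HashMap<>();
            for (Report oldR : oldReports) {
                newReports.stream()
                        .filter(n -> n.file().equals(oldR.file()) && n.message().equals(oldR.message()))
                        .min(Comparator.comparingInt(n -> Math.abs(n.line() - oldR.line())))
                        .ifPresent(n -> sameDefect.put(oldR, n));
            }
            return sameDefect;
        }
    }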

    Characterizing the evolution of statically-detectable performance issues of Android apps

    Mobile apps play a major role in our everyday life, and they tend to become more and more complex and resource demanding. Because of that, performance issues may occur, disrupting the user experience or, even worse, preventing an effective use of the app. Ultimately, such problems can cause bad reviews and influence the app's success. Developers deal with performance issues through dynamic analysis, i.e., performance testing and profiler tools, although static analysis tools can be a valid, relatively inexpensive complement for the early detection of some such issues. This paper empirically investigates how potential performance issues identified by a popular static analysis tool, Android Lint, are actually resolved in 316 open source Android apps among the 724 apps we analyzed. More specifically, the study traces the issues detected by Android Lint from their introduction until they are resolved, with the aim of studying (i) the overall evolution of performance issues in apps, (ii) the proportion of issues being resolved, (iii) the distribution of their survival time, and (iv) the extent to which issue resolutions are documented by developers in commit messages. Results indicate that some issues, especially those related to the lack of resource recycling, tend to be more frequent than others. Also, while some issues, primarily of an algorithmic nature, tend to be resolved quickly through well-known patterns, others tend to stay in the app longer, or not to be resolved at all. Finally, we found that only 10% of issue resolutions are documented in commit messages.
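
    A minimal sketch, in Java, of the survival-time computation the study describes: the time from the commit where an issue first appears to the commit where it disappears. The Occurrence record and the per-commit sets of open issues are illustrative assumptions, not the authors' tooling.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.*;

    // Hypothetical sketch of issue survival times over a commit history.
    public class IssueSurvival {

        // A Lint issue occurrence, keyed by issue id (e.g. "Recycle") and a location key.
        record Occurrence(String issueId, String location) {}

        // Given commits in chronological order (timestamp plus the set of issues open at each
        // commit), return how long each issue survived: from the first commit where it appears
        // to the first commit where it is gone. Issues still open at the last commit map to
        // empty, i.e. unresolved (censored) observations.
        static Map<Occurrence, Optional<Duration>> survival(List<Instant> commitTimes,
                                                            List<Set<Occurrence>> openIssues) {
            Map<Occurrence, Instant> introducedAt = new LinkedHashMap<>();
            Map<Occurrence, Optional<Duration>> survival = new LinkedHashMap<>();
            for (int i = 0; i < commitTimes.size(); i++) {
                for (Occurrence o : openIssues.get(i)) {
                    introducedAt.putIfAbsent(o, commitTimes.get(i));
                }
                for (Map.Entry<Occurrence, Instant> seen : introducedAt.entrySet()) {
                    if (!openIssues.get(i).contains(seen.getKey())) {
                        survival.putIfAbsent(seen.getKey(),
                                Optional.of(Duration.between(seen.getValue(), commitTimes.get(i))));
                    }
                }
            }
            for (Occurrence o : introducedAt.keySet()) {
                survival.putIfAbsent(o, Optional.empty()); // never resolved in the analyzed history
            }
            return survival;
        }
    }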

    WarningsGuru - Analysing historical commits, augmenting software bug prediction models with warnings and a user study

    The detection of bugs in software is divided into two research fields. Static analysis tools report warnings at the line level, and these are often false positives. Statistical models use historical change measures to predict bugs in commits at a higher level. We developed a tool which combines both of these approaches. Our tool analyses each commit of a project and identifies which commit introduced a warning. It processed over 45k commits, more than previous research. We propose an augmented bug model which includes static analysis measures, and we found that a twofold increase in the number of new warnings increases the odds of introducing a bug 1.5 times. Overall, our model accounts for 22% of the deviance, an improvement over the 19.5% baseline. We demonstrate that we can use simple measures to predict new security warnings with a deviance explained of 30%, and that recent development experience and more co-developers reduce the number of security warnings by 8%. We performed a user study of developers who introduced new warnings in 37 projects. We found that 53% and 21% of warnings in FindBugs and Jlint, respectively, are useful. We analysed the time delta between the introduction of a warning and the developer's response to its notification. We hypothesize that remembering the context of the change has an impact on the perceived usefulness, given that useful warnings had a median response time of 11.5 days versus 23 days for non-useful warnings.
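
    One way to read the reported effect size, as a hedged sketch: assuming the new-warning count enters the logistic regression as a log2-transformed predictor (an assumption; the abstract does not state the transformation), an odds ratio of 1.5 per doubling of new warnings corresponds to a coefficient of roughly 0.405, because the odds are multiplied by e^beta for each one-unit increase of the predictor:

        \frac{\mathrm{odds}(2w)}{\mathrm{odds}(w)} = e^{\beta} = 1.5
        \quad\Longrightarrow\quad
        \beta = \ln 1.5 \approx 0.405

    where w is the number of new warnings in the commit and the predictor is \log_2 w.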

    Tracking defect warnings across versions

    Various static analysis tools will analyze a software artifact in order to identify potential defects, such as misused APIs, race conditions and deadlocks, and security vulnerabilities. For a number of reasons, it is important to be able to track the occurrence of each potential defect over multiple versions of a software artifact under study: in other words, to determine when warnings reported in multiple versions of the software all correspond to the same underlying issue. One motivation for this capability is to remember decisions about code that has been reviewed and found to be safe despite the occurrence of a warning. Another motivation is constructing warning deltas between versions, showing which warnings are new, which have persisted, and which have disappeared. This allows reviewers to focus their efforts on inspecting new warnings. Finally, tracking warnings through a series of software versions reveals where potential defects are introduced and fixed, and how long they persist, exposing interesting trends and patterns. We will discuss two different techniques we have implemented in FindBugs (a static analysis tool to find bugs in Java programs) for tracking defects across versions, discuss their relative merits and how they can be incorporated into the software development process, and discuss the results of tracking defect warnings across Sun's Java runtime library.
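
    A minimal sketch, in Java, of the warning-delta computation described above. Keying warnings on a line-independent fingerprint (for example bug pattern plus class and method) is an illustrative assumption, not FindBugs' actual matching technique.

    import java.util.*;

    // Hypothetical sketch of a warning delta between two analyzed versions.
    public class WarningDelta {

        // A warning keyed by a fingerprint intended to stay stable across versions.
        record Warning(String fingerprint, String detail) {}

        record Delta(Set<String> introduced, Set<String> persisted, Set<String> removed) {}

        static Delta compute(Collection<Warning> oldVersion, Collection<Warning> newVersion) {
            Set<String> oldKeys = new HashSet<>();
            oldVersion.forEach(w -> oldKeys.add(w.fingerprint()));
            Set<String> newKeys = new HashSet<>();
            newVersion.forEach(w -> newKeys.add(w.fingerprint()));

            Set<String> introduced = new HashSet<>(newKeys);
            introduced.removeAll(oldKeys);        // reported now, absent before: focus reviews here
            Set<String> removed = new HashSet<>(oldKeys);
            removed.removeAll(newKeys);           // reported before, gone now: likely fixed
            Set<String> persisted = new HashSet<>(oldKeys);
            persisted.retainAll(newKeys);         // unchanged; earlier "reviewed as safe" decisions carry over
            return new Delta(introduced, persisted, removed);
        }
    }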