
    Automatically fixing static analysis tools violations

    Master's thesis, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019. Software quality is becoming more important as the reliance on software systems increases. Software defects may have a high cost to organizations, as some can lead to software failure. Static analysis tools analyze code to find deviations, or violations, from recommended programming practices defined as rules. This analysis can find software defects earlier, faster, and more cheaply than manual inspections. To fix a violation, a programmer must modify the violating code. Such modifications can be tedious, error-prone, and repetitive. Unsurprisingly, automated transformations are a feature frequently requested by developers. This work implements automatic transformations tailored to resolve violations identified by static analysis tools. First, we investigate the use of SonarQube, a widely used static analysis tool, in two large open-source organizations and two Brazilian Federal Government institutions. Our results show that a small subset of the rules is responsible for a large portion of the fixes. We implement automatic fixes for 11 rules from the set of frequently fixed rules found in that study. We submitted 38 pull requests, including 920 fixes generated automatically by our technique, to various open-source Java projects. Project maintainers accepted 84% of our fixes (95% of them without any modification). These results indicate that our approach is feasible and can aid developers with automatic fixes, a long-requested feature.
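
    The thesis code is not reproduced in this record, so the snippet below is only a minimal illustrative sketch of the kind of transformation it describes: using the JavaParser library to fix one commonly fixed SonarQube rule by replacing size() == 0 comparisons with isEmpty() calls. The IsEmptyFixer class name and the choice of rule are assumptions made for the example, not details taken from the thesis.

        import com.github.javaparser.StaticJavaParser;
        import com.github.javaparser.ast.CompilationUnit;
        import com.github.javaparser.ast.expr.BinaryExpr;
        import com.github.javaparser.ast.expr.IntegerLiteralExpr;
        import com.github.javaparser.ast.expr.MethodCallExpr;

        public class IsEmptyFixer {

            // Rewrites every `expr.size() == 0` comparison into `expr.isEmpty()`.
            public static String fix(String source) {
                CompilationUnit cu = StaticJavaParser.parse(source);
                cu.findAll(BinaryExpr.class).forEach(cmp -> {
                    if (cmp.getOperator() != BinaryExpr.Operator.EQUALS) return;
                    if (!(cmp.getLeft() instanceof MethodCallExpr call)) return;
                    if (!call.getNameAsString().equals("size") || call.getScope().isEmpty()) return;
                    if (!(cmp.getRight() instanceof IntegerLiteralExpr zero) || !zero.getValue().equals("0")) return;
                    MethodCallExpr isEmpty = call.clone();   // keep the original receiver
                    isEmpty.setName("isEmpty");
                    cmp.replace(isEmpty);
                });
                return cu.toString();
            }

            public static void main(String[] args) {
                String before = "class A { boolean empty(java.util.List<String> xs) { return xs.size() == 0; } }";
                System.out.println(fix(before));   // the printed class now uses xs.isEmpty()
            }
        }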

    Unleashing the Power of Clippy in Real-World Rust Projects

    Clippy lints are considered essential tools for Rust developers, as they can be configured as gate-keeping rules for a Rust project during continuous integration. Despite their availability, little is known about the practical application and cost-effectiveness of the lints in reducing code quality issues. In this study, we embark on a comprehensive analysis to unveil the true impact of Clippy lints in the Rust development landscape. The study is structured around three interrelated components, each contributing to the overall effectiveness of Clippy. Firstly, we conduct a comprehensive analysis of Clippy lints in all idiomatic crates-io Rust projects, which have an average warning density of 21/KLOC. The analysis identifies the most cost-effective lint fixes, offering valuable opportunities for optimizing code quality. Secondly, we actively engage Rust developers through a user survey to garner feedback on their experiences with Clippy. User insights shed light on two crucial concerns: the prevalence of false positives in warnings and the need for auto-fix support for most warnings. Thirdly, building upon these findings, we engineer three automated refactoring techniques to fix the four most frequent Clippy lints. As a result, the warning density in the Rosetta benchmarks has decreased significantly, from 195/KLOC to 18/KLOC, which is already lower than the average density of the crates-io Rust projects. These results demonstrate the tangible benefit and impact of our efforts in enhancing overall code quality and maintainability for Rust developers.

    Frustrated with Code Quality Issues? LLMs can Help!

    As software projects progress, code quality assumes paramount importance because it affects the reliability, maintainability, and security of software. For this reason, static analysis tools are used in developer workflows to flag code quality issues. However, developers need to spend extra effort revising their code based on the tool findings. In this work, we investigate the use of (instruction-following) large language models (LLMs) to assist developers in revising code to resolve code quality issues. We present a tool, CORE (short for COde REvisions), architected as a pair of LLMs: a proposer and a ranker. Providers of static analysis tools recommend ways to mitigate the tool warnings, and developers follow them to revise their code. The proposer LLM of CORE takes the same set of recommendations and applies them to generate candidate code revisions. The candidates that pass the static quality checks are retained. However, the LLM may introduce subtle, unintended functionality changes that can go undetected by the static analysis. The ranker LLM evaluates the changes made by the proposer using a rubric that closely follows the acceptance criteria a developer would enforce. CORE uses the scores assigned by the ranker LLM to rank the candidate revisions before presenting them to the developer. CORE could revise 59.2% of Python files (across 52 quality checks) so that they pass scrutiny by both a tool and a human reviewer, and the ranker LLM reduces false positives by 25.8% in these cases. CORE produced revisions that passed the static analysis tool in 76.8% of Java files (across 10 quality checks), comparable to the 78.3% of a specialized program repair tool, with significantly less engineering effort.
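
    The abstract outlines CORE's proposer/ranker architecture without showing code, so the sketch below is a purely hypothetical rendering of that pipeline. LlmClient, StaticAnalyzer, CorePipeline, and the prompt strings are invented stand-ins rather than CORE's actual API: the proposer generates candidate revisions, candidates failing the static check are dropped, and the ranker's scores order the survivors before they are shown to the developer.

        import java.util.Comparator;
        import java.util.List;

        // Hypothetical stand-ins; neither interface is part of the real CORE tool.
        interface LlmClient {
            List<String> complete(String prompt, int n);   // n candidate completions
            double score(String prompt);                   // numeric score from a grading prompt
        }

        interface StaticAnalyzer {
            boolean passes(String sourceCode, String check);
        }

        record Revision(String code, double rankerScore) {}

        class CorePipeline {
            private final LlmClient proposer;
            private final LlmClient ranker;
            private final StaticAnalyzer analyzer;

            CorePipeline(LlmClient proposer, LlmClient ranker, StaticAnalyzer analyzer) {
                this.proposer = proposer;
                this.ranker = ranker;
                this.analyzer = analyzer;
            }

            // Propose candidates, keep only those that pass the static check,
            // then order them by the ranker's score.
            List<Revision> revise(String code, String check, String toolRecommendation) {
                String prompt = "Fix the issue '" + check + "' following this guidance:\n"
                        + toolRecommendation + "\n\nCode:\n" + code;
                return proposer.complete(prompt, 5).stream()
                        .filter(candidate -> analyzer.passes(candidate, check))
                        .map(candidate -> new Revision(candidate, ranker.score(
                                "Rate this revision for unintended behaviour changes:\nBEFORE:\n"
                                        + code + "\nAFTER:\n" + candidate)))
                        .sorted(Comparator.comparingDouble(Revision::rankerScore).reversed())
                        .toList();
            }
        }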

    Automatic Software Repair: a Bibliography

    This article presents a survey on automatic software repair. Automatic software repair consists of automatically finding a solution to software bugs without human intervention. This article considers all kinds of repairs. First, it discusses behavioral repair, where test suites, contracts, models, and crashing inputs are taken as oracles. Second, it discusses state repair, also known as runtime repair or runtime recovery, with techniques such as checkpoint and restart, reconfiguration, and invariant restoration. The uniqueness of this article is that it spans the research communities that contribute to this body of knowledge: software engineering, dependability, operating systems, programming languages, and security. It provides a novel and structured overview of the diversity of bug oracles and repair operators used in the literature.

    Perception and Acceptance of an Autonomous Refactoring Bot

    The use of autonomous bots for automatic support in software development tasks is increasing. In the past, however, they were not always perceived positively and sometimes experienced a negative bias compared to their human counterparts. We conducted a qualitative study in which we deployed an autonomous refactoring bot for 41 days in a student software development project. In between and at the end, we conducted semi-structured interviews to find out how developers perceive the bot and whether they are more or less critical when reviewing the contributions of a bot compared to human contributions. Our findings show that the bot was perceived as a useful and unobtrusive contributor, and developers were no more critical of it than of their human colleagues, but only a few team members felt responsible for the bot. Comment: 8 pages, 2 figures. To be published at the 12th International Conference on Agents and Artificial Intelligence (ICAART 2020).

    Mining Fix Patterns for FindBugs Violations

    In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread, and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that uses convolutional neural networks to learn features and clustering to group similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of the fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in Defects4J, a major benchmark for software testing and automated repair. Comment: Accepted for IEEE Transactions on Software Engineering.
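
    For readers unfamiliar with FindBugs, the snippet below illustrates the kind of recurring fix pattern that such mining can surface, using the real FindBugs bug pattern ES_COMPARING_STRINGS_WITH_EQ (strings compared with == instead of equals()). The GreetingChecker class is invented for the example and does not come from the paper.

        // Before/after pair for one recurring fix pattern.
        class GreetingChecker {

            // Violating version: == compares references, so it can be false
            // even when the two strings hold the same characters.
            boolean isHelloBefore(String word) {
                return word == "hello";
            }

            // Typical fix pattern: a constant-first equals() call, which also
            // avoids a NullPointerException when word is null.
            boolean isHelloAfter(String word) {
                return "hello".equals(word);
            }
        }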

    Quieting the Static: A Study of Static Analysis Alert Suppressions

    Static analysis tools are commonly used to detect defects before the code is released. Previous research has focused on their overall effectiveness and their ability to detect defects. However, little is known about the usage patterns of warning suppressions: the configurations developers set up to prevent the appearance of specific warnings. We address this gap by analyzing how often warning suppression features are used, which suppression features are used and for what purpose, and how the use of warning suppression annotations could be avoided. To answer these questions, we examine 1,425 open-source Java-based projects that utilize FindBugs or SpotBugs for warning-suppressing configurations and source code annotations. We find that although most warnings are suppressed, only a small portion of them are suppressed frequently. Contrary to expectations, false positives account for a minor proportion of suppressions. A significant number of suppressions introduce technical debt, suggesting potential disregard for code quality or a lack of appropriate guidance from the tool. Misleading suggestions and incorrect assumptions also lead to suppressions. These findings underscore the need for better communication and education related to the use of static analysis tools, improved bug pattern definitions, and better code annotation. Future research can extend these findings to other static analysis tools and apply them to improve the effectiveness of static analysis. Comment: 11 pages, 4 figures.
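
    For readers unfamiliar with the suppression mechanism the study examines, the snippet below shows the source-code annotation form: SpotBugs' @SuppressFBWarnings annotation from the spotbugs-annotations package, which silences a named bug pattern on a single program element and carries an optional justification string (the place where a developer's stated reason would appear). The CacheHolder class is invented for the example.

        import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;

        class CacheHolder {
            private final int[] cache = new int[16];

            // Suppresses exactly one SpotBugs bug pattern, only on this method.
            @SuppressFBWarnings(value = "EI_EXPOSE_REP", justification = "callers may mutate the shared cache")
            int[] cache() {
                return cache;
            }
        }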