43 research outputs found

    The effects of change decomposition on code review -- a controlled experiment

    Background: Code review is a cognitively demanding and time-consuming process. Previous qualitative studies hinted that decomposing a change set into multiple, internally coherent ones would improve the reviewing process. So far, the literature has provided no quantitative analysis of this hypothesis. Aims: (1) Quantitatively measure the effects of change decomposition on the outcome of code review (in terms of number of found defects, wrongly reported issues, suggested improvements, time, and understanding); (2) Qualitatively analyze how subjects approach the review and navigate the code, building knowledge and addressing existing issues, in large vs. decomposed changes. Method: A controlled experiment using the pull-based development model, involving 28 software developers, both professionals and graduate students. Results: Change decomposition leads to fewer wrongly reported issues and influences how subjects approach and conduct the review activity (by increasing context-seeking), yet impacts neither understanding of the change rationale nor the number of found defects. Conclusions: Change decomposition not only reduces the noise for subsequent data analyses but also significantly supports the task of the developers in charge of reviewing the changes. As such, commits belonging to different concepts should be kept separate, and this should be adopted as a best practice in software engineering.
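The recommended practice (keeping commits that belong to different concepts separate) can be illustrated with a naive sketch that partitions a change set by top-level directory as a crude proxy for a "concept"; all names here are hypothetical, and real decomposition would need finer-grained analysis than this:

```python
from collections import defaultdict

def decompose(changed_files):
    """Partition changed files into internally coherent groups,
    one per top-level directory (a crude stand-in for 'concept')."""
    groups = defaultdict(list)
    for path in changed_files:
        concept = path.split("/")[0]  # grouping key: top-level directory
        groups[concept].append(path)
    return dict(groups)

# One large change set becomes two reviewable, concept-coherent ones.
changes = ["auth/login.py", "auth/token.py", "ui/button.py"]
groups = decompose(changes)
# groups == {"auth": ["auth/login.py", "auth/token.py"], "ui": ["ui/button.py"]}
```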

    Boost the Impact of Continuous Formal Verification in Industry

    Software model checking has experienced significant progress in the last two decades; however, its major bottlenecks for practical application remain scalability and adaptability. Here, we describe an approach to integrate software model checking techniques into the DevOps culture by exploiting practices such as continuous integration and regression testing. In particular, our proposed approach looks at the modifications made to the software system since its last verification and submits them to a continuous formal verification process, guided by a set of regression test cases. Our vision is to focus on the developer, integrating formal verification techniques into the developer workflow through their main software development methodologies and tools.
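The change-driven selection the abstract describes can be sketched as follows; the mapping from regression tests to the files they exercise, and every name in it, is a hypothetical illustration, since the abstract does not prescribe a concrete API:

```python
def select_verification_tasks(changed_files, test_index):
    """Return the regression tests whose covered files intersect the
    set of files modified since the last verified revision."""
    changed = set(changed_files)
    return sorted(
        test for test, covered in test_index.items()
        if covered & changed  # only re-verify what the change can affect
    )

# Only tests touching the modified file are submitted to the continuous
# formal verification process; the rest keep their cached verdicts.
index = {
    "test_parser": {"parser.c", "lexer.c"},
    "test_net":    {"socket.c"},
    "test_auth":   {"auth.c", "socket.c"},
}
tasks = select_verification_tasks(["socket.c"], index)
# tasks == ["test_auth", "test_net"]
```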

    Pull request latency explained: an empirical overview

    Pull request latency evaluation is an essential application of effort evaluation in the pull-based development scenario. It can help reviewers sort the pull request queue, remind developers about review processing time, speed up the review process, and accelerate software development. There is a lack of work that systematically organizes the factors affecting pull request latency, and no related work discusses how these factors differ and vary across scenarios and contexts. In this paper, we collected relevant factors through a literature review. We then assessed their relative importance in five scenarios and six different contexts using a mixed-effects linear regression model. The most important factors differ across scenarios: the length of the description is most important when pull requests are submitted, while the existence of comments is most important when closing pull requests, when using CI tools, and when the contributor and the integrator are different. When comments exist, the latency of the first comment is the most important factor. Meanwhile, the influence of factors may change across contexts. For example, the number of commits in a pull request has a more significant impact on pull request latency at closing than at submission, due to changes in contributions brought about by the review process. Both human and bot comments are positively correlated with pull request latency; the bot's first comment is more strongly correlated with latency, but the number of bot comments is less correlated. Future research and tool implementations need to consider the impact of different contexts. Researchers can conduct related studies based on our publicly available datasets and replication scripts.
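As a rough illustration of ranking factors by regression coefficients: the study fits a mixed-effects linear regression model, while the simplified sketch below uses ordinary least squares on synthetic data, so the factor names and effect sizes are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
desc_len = rng.normal(size=n)    # length of the PR description (synthetic)
n_commits = rng.normal(size=n)   # number of commits in the PR (synthetic)
# Synthetic latency in which description length has the larger effect.
latency = 2.0 * desc_len + 0.5 * n_commits + rng.normal(scale=0.1, size=n)

# Standardize predictors so coefficient magnitudes are comparable.
X = np.column_stack([desc_len, n_commits])
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n), X])  # add an intercept column

beta, *_ = np.linalg.lstsq(X, latency, rcond=None)
# abs(beta[1]) > abs(beta[2]): description length ranks as the more
# important factor, mirroring the submission-scenario finding.
```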

    Attracting Contributions to your GitHub Project

    Most Open Source Software projects can only progress thanks to developers willing to contribute voluntarily. Therefore, their vitality and success largely depend on their ability to attract developers. Code hosting platforms like GitHub aim at making software development more collaborative and attractive for contributors by providing facilities such as issue tracking, code review, or team management on top of a Git repository, following a pull-based model to handle external contributions. We study whether the use of these facilities actually helps to get more contributions, based on a quantitative analysis of a dataset composed of all the GitHub projects created in the last two years. We discovered that most projects actually ignore these facilities and that those that do use them do not advance faster either. A manual analysis of the most successful projects suggests that other factors, like a clear description of the contribution and governance rules for the project, have a greater impact.

    An approach to simplify the use of static analyzers and code transformation tools

    Undergraduate thesis (Trabalho de Conclusão de Curso), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2018. Static analysis tools greatly assist software development by detecting common programming mistakes and bad practices. Despite their recognized benefits, they are still underused, and several research works therefore attempt to identify the problems existing approaches present and suggest changes to enhance their effectiveness. Such improvements are known to be highly workflow-dependent, indicating that the way developers program on a daily basis has a large impact on the usefulness of static analysis tools. In this study we propose Amanda-Bot, an approach towards the automatic correction of source code issues for the pull-based development model, a workflow that enables distributed collaboration over a shared code base and that is becoming especially popular in start-ups and in the open source software community. Amanda-Bot works as a bot-based mechanism that watches the source code repository and runs static analyses and source code transformations over the change set every time a source code modification is pushed to the repository. When the bot detects an issue, it automatically generates a patch and creates a pull request to fix it. The main rationale for our design is that automatic fixes have been found to improve developer experience, either by reducing the effort to correct the alarms or simply by serving as a motivating example of what a fix could look like. Our approach brings automatic correction to the pull-based development model and is able to identify which specific characteristics of this model might affect bot adoption and automatic fix generation. We have been using Amanda-Bot in several projects developed by start-ups in a software engineering industry area in Brasília, Brazil. In a few weeks, Amanda-Bot sent 17 pull requests (7 of which have already been accepted), fixing more than 3500 code smells in 12 systems.
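The bot's fix-and-pull-request loop can be sketched as below; the rewrite rules are hypothetical toy patterns, since Amanda-Bot applies real static analyses and source code transformations rather than string substitution:

```python
# Toy rule set mapping a detected smell to its rewrite; a real bot
# would operate on ASTs, not raw strings.
FIXES = {
    "== None": "is None",
    "!= None": "is not None",
}

def propose_fixes(source):
    """Return (fixed_source, n_fixes) for the smells the rule set covers."""
    n = 0
    for smell, rewrite in FIXES.items():
        count = source.count(smell)
        if count:
            source = source.replace(smell, rewrite)
            n += count
    return source, n

changed = "if user == None:\n    pass\nif token != None:\n    refresh()"
fixed, n = propose_fixes(changed)
# With n == 2 rewrites applied, the bot would generate a patch from
# `fixed` and open a pull request proposing it.
```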