
    Does Code Review Speed Matter for Practitioners?

    Increasing code velocity is a common goal for a variety of software projects. The efficiency of the code review process significantly impacts how fast the code gets merged into the final product and reaches the customers. We conducted a survey to study the code velocity-related beliefs and practices in place. We analyzed 75 completed surveys from 39 participants from the industry and 36 from the open-source community. Our critical findings are (a) the industry and open-source community hold a similar set of beliefs, (b) quick reaction time is of utmost importance and applies to both the tooling infrastructure and the behavior of other engineers, (c) time-to-merge is the essential code review metric to improve, (d) engineers have differing opinions about the benefits of increased code velocity for their career growth, and (e) the controlled application of the commit-then-review model can increase code velocity. Our study supports the continued need to invest in and improve code velocity regardless of the underlying organizational ecosystem.

    The Unexplored Terrain of Compiler Warnings

    The authors' industry experiences suggest that compiler warnings, a lightweight version of program analysis, are valuable early bug detection tools. Significant costs are associated with patches and security bulletins for issues that could have been avoided if compiler warnings were addressed. Yet, the industry's attitude towards compiler warnings is mixed. Practices range from silencing all compiler warnings to having a zero-tolerance policy toward any warnings. Current published data indicates that addressing compiler warnings early is beneficial. However, support for this claim of value stems from grey literature or is anecdotal. Additional focused research is needed to truly assess the cost-benefit of addressing warnings.

    Quantifying Daily Evolution of Mobile Software Based on Memory Allocator Churn

    The pace and volume of code churn necessary to evolve modern software systems present challenges for analyzing the performance impact of any set of code changes. Traditional methods used in performance analysis rely on extensive data collection and profiling, which often takes days. For large organizations utilizing Continuous Integration (CI) and Continuous Deployment (CD), these traditional techniques often fail to provide timely and actionable data. A different impact analysis method that allows for more efficient detection of performance regressions is needed. We propose the utilization of user mode memory allocator churn as a novel approach to performance engineering. User mode allocator churn acts as a proxy metric to evaluate the relative change in the cost of specific tasks. We prototyped the memory allocator churn methodology while engaged in performance engineering for an iOS version of application X. We find that calculating and analyzing memory allocator churn (a) results in deterministic measurements, (b) is efficient for determining the presence of both individual performance regressions and general performance-related trends, and (c) is a suitable alternative to measuring the task completion time.
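    The abstract does not include tooling details; as a rough, hedged illustration of the proxy-metric idea only, the sketch below compares per-task user mode allocation counts from a baseline build and a candidate build and flags tasks whose churn grows beyond a threshold. The input format, the task names, and the 5% threshold are assumptions, not part of the paper.

```python
# Illustrative sketch (not the paper's tooling): compare user mode memory
# allocator churn between a baseline build and a candidate build.
# Per-task allocation counts and the 5% threshold are assumed inputs.

BASELINE = {"app_launch": 125_000, "open_document": 48_200}    # hypothetical counts
CANDIDATE = {"app_launch": 131_500, "open_document": 48_300}

THRESHOLD = 0.05  # flag tasks whose allocation churn grows by more than 5%

def churn_regressions(baseline, candidate, threshold=THRESHOLD):
    """Return tasks whose allocator churn grew beyond the threshold."""
    regressions = {}
    for task, base_count in baseline.items():
        cand_count = candidate.get(task)
        if cand_count is None:
            continue
        relative_change = (cand_count - base_count) / base_count
        if relative_change > threshold:
            regressions[task] = relative_change
    return regressions

if __name__ == "__main__":
    for task, change in churn_regressions(BASELINE, CANDIDATE).items():
        print(f"{task}: allocator churn up {change:.1%}")
```

    Because allocation counts are deterministic for a fixed workload, a comparison like this can be rerun per commit in CI without the long profiling sessions the abstract mentions.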

    What Warnings Do Engineers Really Fix? The Compiler That Cried Wolf

    Build logs from a variety of Continuous Integration (CI) systems contain temporal data about the presence and distribution of compiler warnings. Results from the analysis and mining of that data will indicate which warnings engineers find useful and fix, and which they continuously ignore. The findings will include resolution times and resolution types for different warning categories. That data will help compiler developers adjust the warning levels according to the ground truth, clarify the diagnostic messages, and improve the non-actionable warnings. The empirical findings will also help engineers decide which warnings are worth fixing and which ones are not.
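    As a loose illustration of the kind of log mining described here (not the study's actual pipeline), the sketch below tallies compiler warnings per category from a single build log, assuming GCC/Clang-style diagnostic lines; the log path and the regular expression are assumptions, and deriving resolution times would additionally require per-build timestamps across many logs.

```python
# Illustrative sketch only: tally compiler warnings per category from one
# CI build log. GCC/Clang-style diagnostic lines are assumed, e.g.
#   src/foo.c:42:10: warning: unused variable 'x' [-Wunused-variable]
import re
from collections import Counter

WARNING_RE = re.compile(r"warning: .* \[(-W[\w-]+)\]")

def warning_counts(log_path):
    """Count warnings per -W category in a single build log."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = WARNING_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for flag, count in warning_counts("build.log").most_common():  # hypothetical path
        print(f"{flag}: {count}")
```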

    The Unexplored Treasure Trove of Phabricator Code Reviews

    Phabricator is a modern code collaboration tool used by popular projects like FreeBSD and Mozilla. However, unlike other well-known code review environments, such as Gerrit or GitHub, there is no readily accessible public code review dataset for Phabricator. This paper describes our experience mining code reviews from five different projects that use Phabricator (Blender, FreeBSD, KDE, LLVM, and Mozilla). We discuss the challenges associated with the data retrieval process and our solutions, resulting in a dataset with details regarding 317,476 Phabricator code reviews. Our dataset (https://doi.org/10.6084/m9.figshare.17139245) is available in both JSON and MySQL database dump formats. The dataset enables analyses of the history of code reviews at a more granular level than other platforms. In addition, given that the projects we mined are publicly accessible via the Conduit API [18], our dataset can be used as a foundation to fetch additional details and insights.
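    As a hedged starting point for such follow-up fetches (and not the authors' retrieval pipeline), the sketch below requests one page of Differential revisions from a Phabricator install via the Conduit API's differential.revision.search method. The host, the placeholder token, and the exact parameter encoding are assumptions and may need adjusting for a specific install.

```python
# Illustrative sketch only: fetch one page of code reviews ("revisions") from
# a Phabricator install via the Conduit API. Host, token, and parameter
# encoding are assumptions; consult the install's Conduit console for details.
import requests

HOST = "https://reviews.llvm.org"           # example public Phabricator install
API_TOKEN = "api-XXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder token, not a real one

def fetch_revisions(after_cursor=None):
    """Fetch one page of Differential revisions and the pagination cursor."""
    data = {"api.token": API_TOKEN}
    if after_cursor is not None:
        data["after"] = after_cursor
    response = requests.post(f"{HOST}/api/differential.revision.search", data=data)
    response.raise_for_status()
    payload = response.json()
    if payload.get("error_code"):
        raise RuntimeError(payload["error_info"])
    result = payload["result"]
    return result["data"], result["cursor"].get("after")

if __name__ == "__main__":
    revisions, next_cursor = fetch_revisions()
    for revision in revisions:
        print(revision["id"], revision["fields"]["title"])
```

    Looping on the returned cursor until it is empty would page through the full review history, which is how additional fields beyond the released dataset could be collected.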

    Are We Speeding Up or Slowing Down? On Temporal Aspects of Code Velocity

    This paper investigates how the duration of various code review periods changes over a project's lifetime. We study four open-source software (OSS) projects: Blender, FreeBSD, LLVM, and Mozilla. We mine and analyze the characteristics of 283,235 code reviews that cover, on average, seven years' worth of development. Our main conclusion is that neither the passage of time nor the project's size impacts code velocity. We find that (a) the duration of various code review periods (time-to-first-response, time-to-accept, and time-to-merge) for FreeBSD, LLVM, and Mozilla either becomes shorter or stays the same; no directional trend is present for Blender, (b) an increase in the size of the code bases (annually 3-17%) does not accompany a decrease in code velocity, and (c) for FreeBSD, LLVM, and Mozilla, the 30-day moving median stays in a fixed range for time-to-merge. These findings do not change with variabilities in code churn metrics, such as the number of commits or distinct authors of code changes.
    Comment: 5 pages. To be published in Proceedings of the 20th International Conference on Mining Software Repositories (MSR 2023), May 15-16, 2023, Melbourne, Australia.
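    For readers unfamiliar with the 30-day moving median mentioned in finding (c), a minimal pandas sketch is shown below; it assumes per-review records with merged_at and time_to_merge_hours columns in a CSV file, which are illustrative names rather than the paper's actual schema.

```python
# Illustrative sketch only: 30-day moving median of time-to-merge from
# per-review records. The CSV layout and column names are assumptions.
import pandas as pd

def moving_median_time_to_merge(csv_path):
    # Expected (assumed) columns: merged_at (timestamp), time_to_merge_hours (float)
    reviews = pd.read_csv(csv_path, parse_dates=["merged_at"])
    series = (
        reviews.sort_values("merged_at")
               .set_index("merged_at")["time_to_merge_hours"]
    )
    # Time-based window: median over all reviews merged in the trailing 30 days
    return series.rolling("30D").median()

if __name__ == "__main__":
    print(moving_median_time_to_merge("reviews.csv").tail())  # hypothetical file
```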

    Promises and Perils of Inferring Personality on GitHub

    Personality plays a pivotal role in our understanding of human actions and behavior. Today, applications of personality are widespread, building on solutions from psychology to infer personality. In software engineering, for instance, one widely used solution to infer personality uses textual communication data. As studies on personality in software engineering continue to grow, it is imperative to understand the performance of these solutions. This paper compares the inferential ability of three widely studied text-based personality tests against each other and against the ground truth on GitHub. We explore the challenges and potential solutions to improve the inferential ability of personality tests. Our study shows that solutions for inferring personality are far from perfect. Software engineering communications data can infer individual developer personality with an average error rate of 41%. In the best case, the error rate can be reduced to 36% by following our recommendations.
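    The abstract reports error rates without spelling out the computation here; purely as an illustration, the sketch below computes a mean absolute error across Big Five trait scores normalized to [0, 1] and reports it as a percentage. Both the metric and the example scores are assumptions, not necessarily the paper's definitions.

```python
# Illustrative sketch only: average error rate between inferred and
# ground-truth Big Five trait scores, both assumed normalized to [0, 1].
# The error definition (mean absolute error as a percentage) is an assumption.

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def average_error_rate(inferred, ground_truth):
    """Mean absolute difference across the five traits."""
    errors = [abs(inferred[t] - ground_truth[t]) for t in TRAITS]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    inferred = {"openness": 0.71, "conscientiousness": 0.40, "extraversion": 0.55,
                "agreeableness": 0.62, "neuroticism": 0.30}       # hypothetical scores
    ground_truth = {"openness": 0.80, "conscientiousness": 0.65, "extraversion": 0.45,
                    "agreeableness": 0.70, "neuroticism": 0.50}
    print(f"average error rate: {average_error_rate(inferred, ground_truth):.0%}")
```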