    An Empirical Investigation of Pull Requests in Partially Distributed BizDevOps Teams

    In globally distributed projects, virtual teams are often partially dispersed. One common setup occurs when several members from one company work with a large outsourcing vendor based in another country. Further, the introduction of the popular BizDevOps concept has increased the need to cooperate across departments and to reduce the age-old disconnect between business strategy and technical development. Making partially distributed BizDevOps teams work well requires extensive collaboration and communication techniques; nowadays, a common approach is to collaborate through pull requests and communicate frequently on Slack. To investigate barriers to pull requests in distributed teams, we examined an organization in Scandinavia whose cross-functional BizDevOps teams collaborated with off-site team members in India. Data were collected by conducting 14 interviews, observing the teams for 23 full days, and observing 37 meetings. We found that the pull-request approach worked very well locally but not across sites, and we identified barriers such as domain complexity, differing agile processes (timeboxed vs. flow-based development), and employee turnover. Applying an intellectual capital lens to our findings, we discuss these barriers and their positive and negative effects on the success of the pull-request approach.

    Does code quality affect pull request acceptance? An empirical study

    Background: Pull requests are a common practice for making and reviewing contributions in both open-source and industrial contexts. Objective: Our goal is to understand whether quality flaws such as code smells, anti-patterns, security vulnerabilities, and coding-style violations in a pull request's code affect its chance of acceptance when reviewed by a maintainer of the project. Method: We conducted a case study among 28 Java open-source projects, analyzing the presence of 4.7 M code quality flaws in 36 K pull requests. We analyzed correlations by applying logistic regression and six machine learning techniques, and we manually validated 10% of the pull requests to gain further qualitative insight into the importance of quality issues in cases of acceptance and rejection. Results: Unexpectedly, quality flaws measured by PMD turned out not to affect the acceptance of a pull request at all. As suggested by other works, factors such as the reputation of the maintainer and the importance of the delivered feature might matter more for pull-request acceptance than code quality. Conclusions: Researchers have already investigated the influence of developers' reputation on pull-request acceptance; this is the first work investigating coding-style violations and, specifically, PMD rules. We recommend that researchers investigate this topic further to understand whether different measures or different tools could provide useful results.
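
    As a concrete illustration of the analysis step this abstract describes, the following is a minimal Python sketch of regressing pull-request acceptance on per-request counts of quality flaws. The file name, column names, and schema are illustrative assumptions, not artifacts of the study.

        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Assumed schema: one row per pull request, with flaw counts and the outcome.
        df = pd.read_csv("pull_requests.csv")
        features = ["code_smells", "anti_patterns", "vulnerabilities", "style_violations"]
        X, y = df[features], df["accepted"]  # accepted: 1 = merged, 0 = rejected

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=42
        )
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # An AUC near 0.5 here would mirror the paper's finding that
        # PMD-measured flaws carry little signal for acceptance.
        print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
        print(dict(zip(features, model.coef_[0])))  # per-flaw-type coefficients

    Stratified splitting keeps the accepted/rejected ratio comparable across train and test sets, which matters because pull-request acceptance data is typically imbalanced.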

    A machine and deep learning analysis among SonarQube rules, product, and process metrics for fault prediction

    Background: Developers spend more time fixing bugs and refactoring code to increase maintainability than developing new features. Researchers have investigated the impact of code quality on fault-proneness, focusing on code smells and code metrics. Objective: We aim to advance the prediction of fault-inducing commits using different variables, such as SonarQube rules and product and process metrics, and adopting different techniques. Method: We designed and conducted an empirical study among 29 Java projects, analyzed with SonarQube and the SZZ algorithm to identify fault-inducing and fault-fixing commits and to compute different product and process metrics. Moreover, we investigated fault-proneness using different machine and deep learning models. Results: We analyzed 58,125 commits containing 33,865 faults and affected by more than 174 SonarQube rules violated 1.8 M times, on which 48 software product and process metrics were calculated. The results clearly identified a set of features that provided highly accurate fault prediction (more than 95% AUC). Regarding classifier performance, deep learning provided higher accuracy than the machine learning models. Conclusion: Future work might investigate whether other static analysis tools, such as FindBugs or Checkstyle, provide similar or different results. Moreover, researchers might consider adopting time-series analysis and anomaly-detection techniques.
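
    For readers unfamiliar with it, the SZZ algorithm mentioned above links a fault-fixing commit back to the commits that likely introduced the fault. Below is a minimal Python sketch of that core idea over a local Git repository (the repository path and commit hash in the usage comment are hypothetical): blame the lines the fix deletes, in the fix's parent revision, and collect the commits that last touched them. Real SZZ implementations add filtering (whitespace, comment-only, and refactoring changes) that this sketch omits.

        import subprocess

        def szz_candidates(repo, fix_commit):
            """Commits that last touched the lines deleted by fix_commit
            (fault-inducing candidates in the SZZ sense)."""
            diff = subprocess.run(
                ["git", "-C", repo, "diff", "-U0", f"{fix_commit}^", fix_commit],
                capture_output=True, text=True, check=True,
            ).stdout
            candidates, path = set(), None
            for line in diff.splitlines():
                if line.startswith("--- a/"):
                    path = line[len("--- a/"):]  # file as it existed before the fix
                elif line.startswith("@@") and path:
                    old = line.split()[1]        # e.g. "-120,3": the deleted-line range
                    start, _, count = old.lstrip("-").partition(",")
                    count = int(count) if count else 1
                    if count == 0:
                        continue                 # pure addition: nothing to blame
                    blame = subprocess.run(
                        ["git", "-C", repo, "blame", "--porcelain",
                         "-L", f"{start},+{count}", f"{fix_commit}^", "--", path],
                        capture_output=True, text=True, check=True,
                    ).stdout
                    for b in blame.splitlines():
                        sha = b.split(" ", 1)[0]
                        # Porcelain header lines begin with a 40-char commit hash.
                        if len(sha) == 40 and all(c in "0123456789abcdef" for c in sha):
                            candidates.add(sha)
            return candidates

        # Usage (hypothetical repository path and fault-fixing commit hash):
        # print(szz_candidates("/path/to/project", "abc123def4567890abc123def4567890abc123de"))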