
    From RSSE to BotSE: Potentials and Challenges Revisited after 15 Years

    Both recommender systems and bots should proactively and smartly answer the questions of software developers or other project stakeholders to assist them in performing their tasks more efficiently. This paper reflects on the achievements of the more mature area of Recommendation Systems in Software Engineering (RSSE) as well as the rising area of Bots in Software Engineering (BotSE). We discuss the similarities and differences, briefly review the current state of the art, and highlight three areas in which the full potential is yet to be tapped: stronger socio-technical context awareness, assisting knowledge sharing in addition to knowledge access, and covering repetitive or stimulative scenarios related to requirements and user-developer interaction.

    An Empirical Study of Bots in Software Development -- Characteristics and Challenges from a Practitioner's Perspective

    Software engineering bots - automated tools that handle tedious tasks - are increasingly used by industrial and open source projects to improve developer productivity. Current research in this area is held back by a lack of consensus on what software engineering bots (DevBots) actually are, what characteristics distinguish them from other tools, and what benefits and challenges are associated with DevBot usage. In this paper we report on a mixed-method empirical study of DevBot usage in industrial practice, drawing on interviews with 21 developers and a survey of 111. We identify three different personas among DevBot users (focusing on autonomy, chat interfaces, and "smartness"), each with a different definition of what a DevBot is, why developers use them, and what they struggle with. We conclude that future DevBot research should situate its work within our framework, clearly identifying what type of bot the work targets and what advantages practitioners can expect. Further, we find that there is currently a lack of general-purpose "smart" bots that go beyond simple automation tools or chat interfaces. This is problematic, as we have seen that such bots, if available, can have a transformative effect on the projects that use them. Comment: To be published at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE).

    An Additional Set of (Automated) Eyes: Chatbots for Agile Retrospectives

    Recent advances in natural-language processing and data analysis allow software bots to become virtual team members, providing an additional set of automated eyes and additional perspectives for informing and supporting teamwork. In this paper, we propose employing chatbots in the domain of software development, with a focus on supporting analyses and measurements of teams' project data. The software project artifacts produced by agile teams during regular development activities, e.g., commits in a version control system, represent detailed information on how a team works and collaborates. Analyses of this data are especially relevant for agile retrospective meetings, where adaptations and improvements to the executed development process are discussed. Development teams can use these measurements to track the progress of identified improvement actions over development iterations. Chatbots provide a convenient user interface for interacting with the outcomes of retrospectives and the associated measurements in a chat-based channel that team members already use. Comment: Accepted at the 1st International Workshop on Bots in Software Engineering (May 28th, 2019, Montreal, Canada), collocated with ICSE 2019 (https://botse.github.io/).
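    To make the proposal concrete, the following is a minimal sketch of such a chatbot measurement command, assuming a plain git repository and a toy text dispatcher; the command name (!commits), the metric, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a retrospective chat command (hypothetical; not the
# authors' implementation). It answers "who committed how much this
# iteration?" by counting commits per author in the local repository.
import subprocess
from collections import Counter

def commits_per_author(since: str = "2.weeks") -> Counter:
    """Count commits per author since the (assumed) start of the iteration."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--format=%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in log.splitlines() if line)

def handle_chat_message(text: str) -> str:
    """Toy dispatcher; a real chatbot would hook into the team's chat tool."""
    if text.strip().lower() == "!commits":
        counts = commits_per_author()
        return "\n".join(f"{author}: {n}" for author, n in counts.most_common())
    return "Unknown command. Try !commits."
```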

    Challenges and guidelines on designing test cases for test bots

    Test bots are automated testing tools that autonomously and periodically run a set of test cases checking whether the system under test meets the requirements set forth by the customer. This automation decreases the amount of time a development team spends on testing. As development projects grow larger, it becomes important to improve test bots by designing more effective test cases; otherwise, time and usage costs can increase greatly and misleading conclusions might be drawn from test results, such as false positives in the test execution. However, the literature currently lacks insights into how test case design affects the effectiveness of test bots. This paper uses a case study approach to investigate those effects by identifying challenges in designing tests for test bots. Our results include guidelines for a test design schema for such bots that supports practitioners in overcoming the challenges mentioned by participants during our study. Comment: To be published in IEEE/ACM 42nd International Conference on Software Engineering Workshops (ICSEW'20), May 23-29, 2020, Seoul, Republic of Korea.
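    As an illustration of the kind of tool studied, here is a minimal sketch of a test bot loop in Python; the use of pytest, the interval, and the plain print-based reporting are assumptions for illustration, not the setup examined in the case study.

```python
# Minimal sketch of a test bot (hypothetical). It periodically and
# autonomously runs a test suite and reports the outcome, the kind of
# execution the paper's test-design guidelines target.
import subprocess
import time

def run_suite(test_dir: str = "tests") -> bool:
    """Run the suite once; return True if all tests passed (assumes pytest)."""
    result = subprocess.run(["python", "-m", "pytest", test_dir, "-q"])
    return result.returncode == 0

def bot_loop(interval_seconds: int = 3600) -> None:
    """Re-run the suite every hour and log the outcome."""
    while True:
        passed = run_suite()
        print("PASS" if passed else "FAIL: notify the team")
        time.sleep(interval_seconds)
```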

    Factoring Expertise, Workload, and Turnover into Code Review Recommendation

    Developer turnover is inevitable on software projects and leads to knowledge loss, a reduction in productivity, and an increase in defects. Mitigation strategies to deal with turnover tend to disrupt and increase workloads for developers. In this work, we suggest that through code review recommendation we can distribute knowledge and mitigate turnover while distributing review workload more evenly. We conduct historical analyses to understand the natural concentration of review workload and the degree of knowledge spreading that is inherent in code review. Even though review workload is highly concentrated, we show that code review naturally spreads knowledge, thereby reducing the files at risk to turnover. Using simulation, we evaluate existing code review recommenders and develop novel recommenders to understand their impact on the level of expertise during review, the workload of reviewers, and the files at risk to turnover. Our simulations use seeded random replacement of reviewers, allowing us to compare the recommenders without the confounding variation of different reviewers being replaced for each recommender. Combining recommenders, we develop the SofiaWL recommender, which suggests experts with low active review workload when none of the files under review are known by only one developer. In contrast, when knowledge is concentrated on one developer, it sends the review to other reviewers to spread knowledge. For the projects we study, we are able to globally increase expertise during reviews (+3%), reduce workload concentration (-12%), and reduce the files at risk (-28%). We make our scripts and data available in our replication package. Developers can optimize for a particular outcome measure based on the needs of their project, or use our GitHub bot to automatically balance the outcomes.
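    A hedged sketch of the selection rule described above; the data structures (knowers, expertise, workload) and the expert cutoff are our own illustrative assumptions, not the authors' code.

```python
# Sketch of the SofiaWL rule as described in the abstract: if any file under
# review is known by only one developer, spread knowledge by picking a
# "learner"; otherwise pick an expert with low active review workload.
from typing import Dict, Set

def recommend_reviewer(
    knowers: Dict[str, Set[str]],     # file -> developers who know it
    expertise: Dict[str, float],      # developer -> expertise score
    workload: Dict[str, int],         # developer -> open reviews
) -> str:
    # Files known by exactly one developer are at risk to turnover.
    hoarded = {f for f, devs in knowers.items() if len(devs) == 1}
    candidates = set(workload)
    if hoarded:
        # Knowledge is concentrated: route the review away from the sole
        # owners so that someone else learns the at-risk files.
        sole_owners = {next(iter(knowers[f])) for f in hoarded}
        learners = (candidates - sole_owners) or candidates
        return min(learners, key=lambda d: workload[d])
    # No file is at risk: pick the least-loaded among the top experts.
    experts = sorted(candidates, key=lambda d: -expertise.get(d, 0.0))
    top = experts[: max(1, len(experts) // 3)]  # assumed cutoff for "expert"
    return min(top, key=lambda d: workload[d])

# Example: "a.py" is known only by alice, so a learner (bob) is preferred.
print(recommend_reviewer(
    {"a.py": {"alice"}}, {"alice": 9.0, "bob": 5.0}, {"alice": 3, "bob": 1},
))  # -> "bob"
```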

    Promises and Perils of Mining Software Package Ecosystem Data

    The use of third-party packages is becoming increasingly popular and has led to the emergence of large software package ecosystems with a maze of inter-dependencies. Since the reliance on these ecosystems enables developers to reduce development effort and increase productivity, it has attracted the interest of researchers: understanding the infrastructure and dynamics of package ecosystems has given rise to approaches for better code reuse, automated updates, and the avoidance of vulnerabilities, to name a few examples. But the reality of these ecosystems also poses challenges to software engineering researchers, such as: How do we obtain the complete network of dependencies along with the corresponding versioning information? What are the boundaries of these package ecosystems? How do we consistently detect dependencies that are declared but not used? How do we consistently identify developers within a package ecosystem? How much of the ecosystem do we need to understand to analyse a single component? How well do our approaches generalise across different programming languages and package ecosystems? In this chapter, we review the promises and perils of mining the rich data on software package ecosystems that is available to software engineering researchers. Comment: Submitted as a Book Chapter.
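    As a small illustration of the first question, the sketch below assembles declared dependency edges from package manifests; the manifest fields (name, version, deps) are assumed for illustration and do not correspond to any specific ecosystem's schema.

```python
# Illustrative sketch: building a (declared) dependency network from
# package manifests. Field names are assumptions, not a real registry API.
from typing import Dict, List, Tuple

Manifest = Dict[str, object]  # e.g. {"name": ..., "version": ..., "deps": [...]}

def build_dependency_edges(manifests: List[Manifest]) -> List[Tuple[str, str]]:
    """Return (dependent@version, dependency-name) edges for the ecosystem."""
    edges = []
    for m in manifests:
        source = f'{m["name"]}@{m["version"]}'
        for dep in m.get("deps", []):  # deps assumed to be dependency names
            edges.append((source, dep))
    return edges

# One peril from the chapter: a declared dependency may never actually be
# used, so this declared graph can overestimate real coupling.
```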

    Mitigating Turnover with Code Review Recommendation: Balancing Expertise, Workload, and Knowledge Distribution

    Developer turnover is inevitable on software projects and leads to knowledge loss, a reduction in productivity, and an increase in defects. Mitigation strategies to deal with turnover tend to disrupt and increase workloads for developers. In this work, we suggest that through code review recommendation we can distribute knowledge and mitigate turnover with minimal impact on the development process. We evaluate review recommenders in the context of ensuring expertise during review (Expertise), reducing the review workload of the core team (CoreWorkload), and reducing the Files at Risk to turnover (FaR). We find that prior work that assigns reviewers based on file ownership concentrates knowledge on a small group of core developers, increasing the risk of knowledge loss from turnover by up to 65%. We propose learning- and retention-aware review recommenders that, when combined, are effective at reducing the risk of turnover (-29%) but unacceptably reduce the overall expertise during reviews (-26%). We develop the Sophia recommender, which suggests experts when none of the files under review are hoarded by developers but distributes knowledge when files are at risk. In this way, we are able to simultaneously increase expertise during review (ΔExpertise of 6%) with a negligible impact on workload (ΔCoreWorkload of 0.09%) and reduce the files at risk (ΔFaR of -28%). Sophia is integrated into GitHub pull requests, allowing developers to select an appropriate expert or "learner" based on the context of the review. We release the Sophia bot as well as the code and data for replication purposes.
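    To illustrate the FaR measure these recommenders trade off against, here is a hedged sketch; the notion of which developers "know" a file is an assumption for illustration, as the paper defines its own expertise measure.

```python
# Sketch of the "Files at Risk" (FaR) idea: a file is at risk when, among
# the developers who know it, at most one is still active on the project.
from typing import Dict, Set

def files_at_risk(knowers: Dict[str, Set[str]], active: Set[str]) -> Set[str]:
    """Files whose surviving knowledge rests on at most one active developer."""
    return {f for f, devs in knowers.items() if len(devs & active) <= 1}

# Sophia-style trade-off: recommending a "learner" on such files lowers FaR
# against future turnover, at some cost to expertise in the current review.
```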