Mapping a Landscape of Developer Assisting Software Bots
Bots in software development have gained traction in research and in practice. However, there is no consensus on what properties and characteristics define a bot, and the term is used to describe a plethora of different tools with different usages, benefits, and challenges. In this thesis we focus on bots for software development (DevBots), with the goal of aiding researchers in future studies involving DevBots. We aim to assist with the scoping and planning of such studies regarding what tools and related work to include or exclude. We do so by synthesising the different definitions of DevBots, combining views from the literature and from practitioners. To achieve this goal, quantitative and qualitative research methods are used, including a literature review and semi-structured interviews. We have created a faceted taxonomy for DevBots which categorises them by their most prominent properties. In addition, we investigated what delineates DevBots from plain old development tools. Our analysis shows that achieving one single definition is not possible. Instead, we identify and name three personas, i.e., practitioner archetypes with different expectations and motivations. The chat bot persona (Charlie) mostly sees DevBots as information integration tools with a natural language interface, while for the autonomous bot persona (Alex) a DevBot is a tool that autonomously handles repetitive tasks. Lastly, for the smart bot persona (Sam), the defining feature of a bot is its degree of "smartness". We have devised a process in the form of a flowchart, which researchers can use to test whether their tool is considered a DevBot by any of our personas. We have concluded that this definition is not congruent with contemporary definitions, as only 10 of 54 investigated tools from a large dataset were considered DevBots by our process.
Finally, we have shown how the definitions and process can be used in practice by applying them in the scoping and planning phases of two recently conducted studies.
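The persona-based test described above can be pictured as a simple classification over a tool's facets. The following is a minimal, hypothetical sketch of that idea; the facet names and the mapping to personas are illustrative assumptions, not the thesis' actual flowchart.

```python
# Hypothetical sketch of the persona-based DevBot test described in the
# abstract above. The facet names (natural language UI, autonomy,
# smartness) and their mapping to the three personas are assumptions
# made for illustration, not the thesis' actual flowchart.
from dataclasses import dataclass


@dataclass
class Tool:
    name: str
    has_natural_language_ui: bool  # Charlie's main criterion
    acts_autonomously: bool        # Alex's main criterion
    is_smart: bool                 # Sam's main criterion


def personas_accepting(tool: Tool) -> set[str]:
    """Return which personas would consider the tool a DevBot."""
    accepted = set()
    if tool.has_natural_language_ui:
        accepted.add("Charlie")
    if tool.acts_autonomously:
        accepted.add("Alex")
    if tool.is_smart:
        accepted.add("Sam")
    return accepted


# Example: a dependency-update bot with no chat interface or ML component.
dependabot = Tool("dependabot", has_natural_language_ui=False,
                  acts_autonomously=True, is_smart=False)
print(personas_accepting(dependabot))  # prints {'Alex'}
```

A tool for which this returns the empty set would, under this sketch, be a plain development tool rather than a DevBot for any persona.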
Dependency management bots in open-source systems—prevalence and adoption
Bots have become active contributors in maintaining open-source repositories. However, definitions of bot activity in open-source software vary, from a more lenient stance encompassing every non-human contribution to frameworks that only cover contributions from tools with autonomy or human-like traits (i.e., DevBots). Understanding which of those definitions is being used is essential to enable (i) reliable sampling of bots and (ii) fair comparison of their practical impact on, e.g., developers' productivity. This paper reports on an empirical study composed of both quantitative and qualitative analyses of bot activity. By applying those two bot definitions to an existing dataset of bot commits, we see that only 10 out of 54 listed tools (mainly dependency management bots) comply with the characteristics of DevBots. Moreover, five of those DevBots have similar patterns of contributions over 93 projects, such as similar proportions of merged pull requests and days until issues are closed. Our analysis also reveals that most projects (77%) experiment with more than one bot before deciding to adopt or switch between bots. In fact, a thematic analysis of developers' comments in those projects reveals factors driving the discussions about DevBot adoption or removal, such as the impact of the generated noise and the adaptation of development practices needed within the project.
Visualizing test diversity to support test optimisation
Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions to improve. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
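The pairwise diversity calculations behind such similarity maps can be sketched in a few lines. The following is a minimal illustration, assuming each test case is represented as the set of requirements (or script tokens) it covers and using Jaccard distance as the diversity measure; the representation and the distance choice are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of pairwise test diversity, assuming each test case is
# represented as the set of requirements it covers. Jaccard distance is
# one common choice; the paper may use other sources of diversity.
from itertools import combinations


def jaccard_distance(a: set, b: set) -> float:
    """1 - |a & b| / |a | b|: 0.0 = identical tests, 1.0 = fully diverse."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0


def similarity_map(tests: dict) -> dict:
    """Pairwise distances: the raw data behind a test similarity map."""
    return {(t1, t2): jaccard_distance(tests[t1], tests[t2])
            for t1, t2 in combinations(sorted(tests), 2)}


tests = {
    "t1": {"req_a", "req_b"},
    "t2": {"req_a", "req_b"},  # fully redundant with t1
    "t3": {"req_c"},
}
dist = similarity_map(tests)
print(dist[("t1", "t2")])  # prints 0.0 -> redundant pair, candidate for removal
print(dist[("t1", "t3")])  # prints 1.0 -> fully diverse pair, keep both
```

A visualisation tool would project these pairwise distances into two dimensions (e.g., with multidimensional scaling) so that redundant tests cluster together on the map.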
Challenges and guidelines on designing test cases for test bots
Test bots are automated testing tools that autonomously and periodically run a set of test cases to check whether the system under test meets the requirements set forth by the customer. The automation decreases the amount of time a development team spends on testing. As development projects become larger, it is important to improve the test bots by designing more effective test cases, because otherwise time and usage costs can increase greatly and misleading conclusions might be drawn from test results, such as false positives in the test execution. However, the literature currently lacks insights into how test case design affects the effectiveness of test bots. This paper uses a case study approach to investigate those effects by identifying challenges in designing tests for test bots. Our results include guidelines for a test design schema for such bots that supports practitioners in overcoming the challenges mentioned by participants during our study.
Comment: To be published in IEEE/ACM 42nd International Conference on Software Engineering Workshops (ICSEW'20), May 23--29, 2020, Seoul, Republic of Korea.
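One concrete way a test bot might guard against the misleading "false positive" results the abstract warns about is to rerun failing tests and flag intermittent (flaky) outcomes separately from consistent failures. The retry policy below is an illustrative assumption, not a guideline from the paper.

```python
# Hedged sketch: rerunning failing tests to separate likely-flaky results
# from consistent failures. The rerun count and the three-way verdict are
# illustrative assumptions, not the paper's test design schema.
def run_with_flakiness_check(check, reruns=2):
    """Run a test; on failure, rerun it to distinguish real failures
    from intermittent (flaky) ones that could mislead the team."""
    if check():
        return "pass"
    for _ in range(reruns):
        if check():
            return "flaky"  # failed once, then passed: suspicious result
    return "fail"           # failed consistently across all reruns


# Simulate a flaky test: fails on the first run, passes on the rerun.
calls = iter([False, True])
print(run_with_flakiness_check(lambda: next(calls)))  # prints flaky
```

A bot reporting "flaky" rather than "fail" for such tests keeps the team from drawing the misleading conclusions the paper highlights.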
Current and Future Bots in Software Development
Bots that support software development ("DevBots") are seen as a promising approach to dealing with the ever-increasing complexity of modern software engineering and development. Existing DevBots are already able to relieve developers of routine tasks such as building project images or keeping dependencies up to date. However, advances in machine learning and artificial intelligence hold the promise of future, significantly more advanced, DevBots. In this paper, we introduce the terminology of contemporary and ideal DevBots. Contemporary DevBots represent the current state of practice, which we characterise using a facet-based taxonomy. We exemplify this taxonomy using 11 existing, industrial-strength bots. We further provide a vision and definition of future (ideal) DevBots, which are not only autonomous, but also adaptive, as well as technically and socially competent. These properties may allow ideal DevBots to act more akin to artificial teammates than simple development tools.