
    Improving Communication in Scrum Teams

    Communication in teams is an important but difficult issue. In a Scrum development process, the Daily Scrum meetings are used to inform others about important problems, news, and events in the project. When team members are absent due to holiday, illness, or travel, they miss relevant information because no document records the content of these meetings. We present a concept and a Twitter-like tool that improve communication in a Scrum development process. We take advantage of the observation that many people do not like to create documentation, but they do like to share what they did. We used the tool in industrial practice and observed an improvement in communication.

    Individual characteristics of successful coding challengers

    Assessing a software engineer's problem-solving ability on algorithmic programming tasks has been an essential part of technical interviews at some of the most successful technology companies for several years now. Despite the adoption of coding challenges among these companies, we do not know what influences the performance of different software engineers in solving such coding challenges. We conducted an exploratory study with software engineering students to generate hypotheses about which individual characteristics make a good coding challenge solver. Our findings show that better coding challengers also have better exam grades and more programming experience. Furthermore, conscientious as well as sad software engineers performed worse in our study.

    Perception and Acceptance of an Autonomous Refactoring Bot

    The use of autonomous bots for automatic support in software development tasks is increasing. In the past, however, they were not always perceived positively and sometimes experienced a negative bias compared to their human counterparts. We conducted a qualitative study in which we deployed an autonomous refactoring bot for 41 days in a student software development project. During and at the end of the deployment, we conducted semi-structured interviews to find out how developers perceive the bot and whether they are more or less critical when reviewing the contributions of a bot compared to human contributions. Our findings show that the bot was perceived as a useful and unobtrusive contributor and that developers were no more critical of it than of their human colleagues, but only a few team members felt responsible for the bot. Comment: 8 pages, 2 figures. To be published at the 12th International Conference on Agents and Artificial Intelligence (ICAART 2020).

    Evidence for the design of code comprehension experiments

    Context: Valid studies establish confidence in scientific findings. However, to carefully assess a study design, specific domain knowledge is required in addition to general expertise in research methodologies. For example, in an experiment, the effect of a manipulated condition on an observation can be influenced by many other conditions. We refer to these as confounding variables. Knowing possible confounding variables in the thematic context is essential for assessing a study design. If certain confounding variables are not identified and consequently not controlled, this can pose a threat to the validity of the study results.

    Problem: So far, the assessment of the validity of a study is largely intuitive. The potential bias of study findings due to confounding variables is thus speculative rather than evidence-based. This leads to uncertainty in the design of studies, as well as disagreement in peer review. Two barriers currently impede evidence-based evaluation of study designs. First, many of the suspected confounding variables have not yet been adequately researched to demonstrate their true effects. Second, there is no pragmatic method to synthesize the existing evidence from primary studies in a way that is easily accessible to researchers.

    Scope: We investigate the problem in the context of experimental research methods with human study participants and in the thematic context of code comprehension research.

    Contributions: We first systematically analyze the design choices made in code comprehension experiments over the past 40 years and the threats to the validity of these studies. This forms the basis for a subsequent discussion of the wide variety of design options in the absence of evidence on their consequences and comparability. We then conduct experiments that provide evidence on the influence of intelligence, personality, and cognitive biases on code comprehension. Where previously we could only speculate about the influence of these variables, we now have initial data points on their actual influence. Finally, we show how combining different primary studies into evidence profiles facilitates an evidence-based discussion of experimental designs. For the three most commonly discussed threats to validity in code comprehension experiments, we create evidence profiles and discuss their implications.

    Conclusion: Evidence both for and against frequently discussed threats to validity can be found. Such conflicting evidence is explained by the need to consider individual confounding variables in the context of a specific study design, rather than as a universal rule, as is often done. Evidence profiles highlight such a spectrum of evidence and serve as an entry point for researchers to engage in an evidence-based discussion of their study design. However, as with all types of systematic secondary studies, the success of evidence profiles relies on publishing a sufficient number of studies on the same research question. This is a particular challenge in a research field where the novelty of a manuscript's findings is one of the evaluation criteria at every major conference. Nevertheless, we are optimistic about the future: even evidence profiles that merely indicate that evidence on a particular controversial issue is scarce make a contribution, as they identify opinionated assessments of study designs as such and motivate additional studies to provide more evidence.

    Evidence Profiles for Validity Threats in Program Comprehension Experiments

    Searching for clues, gathering evidence, and reviewing case files are all techniques used by criminal investigators to draw sound conclusions and avoid wrongful convictions. Similarly, in software engineering (SE) research, we can develop sound methodologies and mitigate threats to validity by basing study design decisions on evidence. Echoing a recent call for the empirical evaluation of design decisions in program comprehension experiments, we conducted a two-phase study consisting of systematic literature searches, snowballing, and thematic synthesis. We identified (1) which validity threat categories are most often discussed in primary studies of code comprehension, and we collected evidence to build (2) evidence profiles for the three most commonly reported threats to validity. We discovered that few mentions of validity threats in primary studies (31 of 409) included a reference to supporting evidence. For the three most commonly mentioned threats, namely the influence of programming experience, program length, and the selected comprehension measures, almost all cited studies (17 of 18) did not meet our criteria for evidence. We show that for many threats to validity that are currently assumed to be influential across all studies, the actual impact may depend on the design and context of each specific study. Researchers should therefore discuss threats to validity within the context of their particular study and support their discussions with evidence. The present paper can serve as one such resource for evidence, and we call for more meta-studies of this type to be conducted, which will then inform design decisions in primary studies. Further, although we have applied our methodology in the context of program comprehension, our approach can also be used in other SE research areas to enable evidence-based experiment design decisions and meaningful discussions of threats to validity. Comment: 13 pages, 4 figures, 5 tables. To be published at ICSE 2023: Proceedings of the 45th IEEE/ACM International Conference on Software Engineering.

    Resist the Hype! Practical Recommendations to Cope With Résumé-Driven Development

    Technology trends play an important role in the hiring process for software and IT professionals. In a recent study of 591 software professionals in both hiring (130) and technical (558) roles, we found empirical support for a tendency to overemphasize technology trends in résumés and the application process. 60% of the hiring professionals agreed that such trends would influence their job advertisements. Among the software professionals, 82% believed that using trending technologies in their daily work would make them more attractive to potential future employers. This phenomenon has previously been reported anecdotally and somewhat humorously under the label Résumé-Driven Development (RDD). Our article seeks to initiate a more serious debate about the consequences of RDD on software development practice. We explain how the phenomenon may constitute a harmful self-sustaining dynamic and provide practical recommendations for both the hiring and applicant perspectives to change the current situation for the better. Comment: 8 pages, 4 figures.

    A Fine-grained Data Set and Analysis of Tangling in Bug Fixing Commits

    Context: Tangled commits are changes to software that address multiple concerns at once. For researchers interested in bugs, tangled commits mean that they actually study not only bugs but also other concerns irrelevant to the study of bugs. Objective: We want to improve our understanding of the prevalence of tangling and the types of changes that are tangled within bug fixing commits. Methods: We use a crowdsourcing approach for manual labeling to validate, for each line in bug fixing commits, which changes contribute to bug fixes. Each line is labeled by four participants. If at least three participants agree on the same label, we have consensus. Results: We estimate that between 17% and 32% of all changes in bug fixing commits modify the source code to fix the underlying problem. However, when we only consider changes to the production code files, this ratio increases to 66% to 87%. We find that about 11% of lines are hard to label, leading to active disagreements between participants. Due to confirmed tangling and the uncertainty in our data, we estimate that 3% to 47% of the data is noisy without manual untangling, depending on the use case. Conclusion: Tangled commits have a high prevalence in bug fixes and can lead to a large amount of noise in the data. Prior research indicates that this noise may alter results. As researchers, we should be skeptical and assume that unvalidated data is likely very noisy, until proven otherwise. Comment: Status: Accepted at Empirical Software Engineering.
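
    For illustration, the consensus rule described in the abstract (each line labeled by four participants, consensus when at least three agree) can be expressed in a few lines of Python; this is a hypothetical sketch, not the labeling tooling used in the study, and the function name and example labels are invented for illustration only.

        from collections import Counter

        def consensus_label(labels, threshold=3):
            # Return the consensus label if at least `threshold` participants
            # agree on it; otherwise return None (an active disagreement).
            label, count = Counter(labels).most_common(1)[0]
            return label if count >= threshold else None

        # Hypothetical labels from four participants for two changed lines.
        print(consensus_label(["bugfix", "bugfix", "bugfix", "refactoring"]))  # -> bugfix
        print(consensus_label(["bugfix", "test", "refactoring", "whitespace"]))  # -> None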

    Towards an autonomous bot for automatic source code refactoring

    Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment. In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base at the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
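
    As an illustration of the workflow described above, and not the Refactoring-Bot's actual implementation, the following Python sketch shows how such a bot could commit an automated refactoring on a dedicated branch and hand it off for asynchronous review via a pull request; the function names and the open_pull_request callback are assumptions made for this sketch.

        import subprocess

        def propose_refactoring(branch, title, changed_files, open_pull_request):
            # Commit the automated refactoring on its own branch ...
            subprocess.run(["git", "checkout", "-b", branch], check=True)
            subprocess.run(["git", "add", *changed_files], check=True)
            subprocess.run(["git", "commit", "-m", title], check=True)
            subprocess.run(["git", "push", "-u", "origin", branch], check=True)
            # ... and delegate pull request creation to the hosting platform
            # (hypothetical callback, e.g. a thin wrapper around the platform's API),
            # so developers can review the change asynchronously with familiar tools.
            open_pull_request(branch=branch, title=title,
                              body="Automated refactoring proposed by the bot.")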

    Dataset for "Beyond Self-Promotion: How Software Engineering Research Is Discussed on LinkedIn"

    This repository contains the artifacts of our study on how software engineering research papers are shared and interacted with on LinkedIn, a professional social network. This includes:
    - included-papers.csv: the list of the 79 ICSE and FSE papers we found on LinkedIn
    - linkedin-post-data.csv: the final data of the 98 LinkedIn posts we collected and synthesized
    - linkedin-post-scraping.zip: the scripts used to automatically collect several attributes of the LinkedIn posts
    - analysis.zip: the Jupyter notebook used to analyze and visualize linkedin-post-data.csv
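
    As a minimal usage sketch (assuming only the file names listed above, not their column layout), the two CSV files could be inspected with pandas as follows:

        import pandas as pd

        # Load the list of included papers and the synthesized LinkedIn post data.
        papers = pd.read_csv("included-papers.csv")
        posts = pd.read_csv("linkedin-post-data.csv")

        print(len(papers), "papers,", len(posts), "posts")
        print(posts.head())  # the actual columns are defined by the dataset itself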