
    Perception and Acceptance of an Autonomous Refactoring Bot

    The use of autonomous bots for automatic support in software development tasks is increasing. In the past, however, they were not always perceived positively and sometimes experienced a negative bias compared to their human counterparts. We conducted a qualitative study in which we deployed an autonomous refactoring bot for 41 days in a student software development project. In between and at the end, we conducted semi-structured interviews to find out how developers perceive the bot and whether they are more or less critical when reviewing the contributions of a bot compared to human contributions. Our findings show that the bot was perceived as a useful and unobtrusive contributor, and developers were no more critical of it than of their human colleagues, but only a few team members felt responsible for the bot. Comment: 8 pages, 2 figures. To be published at the 12th International Conference on Agents and Artificial Intelligence (ICAART 2020).

    On the Unification of Megamodels

    Through the increasingly widespread application of model-driven engineering (MDE) and the growing diversity of modeling paradigms applied within single projects, there is an increasing need to capture not only models in isolation but also their relations. This paper surveys techniques for capturing such relations, such as megamodels or macromodels, based on existing scientific literature. We consider various definitions of these techniques and further examine their characteristics. We propose a unified core definition of a megamodel that captures the core properties of megamodels and can be extended to the needs of the different applications of megamodels. Finally, we give an outlook on arising application areas for megamodels.
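    To make the proposed core concrete: a megamodel's essence is a set of models plus typed relations between them. The following Python sketch is purely illustrative (the class names, relation kinds, and URIs are assumptions, not the survey's notation) and only shows what such a unified core could look like as a data structure.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Model:
    """A model managed by the megamodel (e.g., an Ecore metamodel or an XMI instance)."""
    name: str
    uri: str

@dataclass(frozen=True)
class Relation:
    """A typed, directed relation between two models (e.g., 'conformsTo', 'transformedTo')."""
    kind: str
    source: Model
    target: Model

@dataclass
class Megamodel:
    """Core of a megamodel: a set of models plus the relations between them."""
    models: set = field(default_factory=set)
    relations: list = field(default_factory=list)

    def relate(self, kind: str, source: Model, target: Model) -> None:
        self.models.update({source, target})
        self.relations.append(Relation(kind, source, target))

    def related_to(self, model: Model) -> list:
        """All relations in which the given model participates."""
        return [r for r in self.relations if model in (r.source, r.target)]

# Example: record that an instance model conforms to its metamodel.
mm = Megamodel()
metamodel = Model("Library", "platform:/resource/library.ecore")
instance = Model("CityLibrary", "platform:/resource/city.xmi")
mm.relate("conformsTo", instance, metamodel)
print([r.kind for r in mm.related_to(instance)])  # ['conformsTo']
```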

    Predicting build outcomes in continuous integration using textual analysis of source code commits

    Machine learning has been increasingly used to solve various software engineering tasks. One example of its usage is to predict the outcome of builds in continuous integration, where a classifier is built to predict whether new code commits will successfully compile. The aim of this study is to investigate the effectiveness of fifteen software metrics in building a classifier for build outcome prediction. Particularly, we implemented an experiment wherein we compared the effectiveness of a line-level metric and fourteen other traditional software metrics on 49,040 build records that belong to 117 Java projects. We achieved an average precision of 91% and recall of 80% when using the line-level metric for training, compared to 90% precision and 76% recall for the next best traditional software metric. In contrast, using file-level metrics was found to yield a higher predictive quality (average MCC for the best software metric = 68%) than the line-level metric (average MCC = 16%) for the failed builds. We conclude that file-level metrics are better predictors of build outcomes for failed builds, whereas the line-level metric is a slightly better predictor of passed builds.
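    To illustrate the kind of pipeline such a study evaluates, the sketch below trains a classifier on commit text and reports precision, recall, and MCC. It is a minimal sketch with toy data and an assumed TF-IDF/logistic-regression setup, not the authors' actual metrics, data, or models.

```python
# Minimal sketch (illustrative assumptions, not the study's setup): predict build
# outcomes from commit text using TF-IDF features and a simple classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

# Toy data: text of committed lines and the recorded build outcome (1 = passed).
commits = [
    "fix null check in parser",
    "add unit test for builder",
    "refactor io module remove unused import",
    "update dependency version in pom",
]
outcomes = [1, 1, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    commits, outcomes, test_size=0.5, stratify=outcomes, random_state=0
)

vectorizer = TfidfVectorizer()
clf = LogisticRegression()
clf.fit(vectorizer.fit_transform(X_train), y_train)

pred = clf.predict(vectorizer.transform(X_test))
print("precision:", precision_score(y_test, pred, zero_division=0))
print("recall:", recall_score(y_test, pred, zero_division=0))
print("MCC:", matthews_corrcoef(y_test, pred))
```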

    Facing the Giant: a Grounded Theory Study of Decision-Making in Microservices Migrations

    Background: Microservices migrations are challenging and expensive projects with many decisions that need to be made in a multitude of dimensions. Existing research tends to focus on technical issues and decisions (e.g., how to split services). Equally important organizational or business issues and their relations with technical aspects often remain out of scope or on a high level of abstraction. Aims: In this study, we aim to holistically chart the decision-making that happens on all dimensions of a migration project towards microservices (including, but not limited to, the technical dimension). Method: We investigate 16 different migration cases in a grounded theory interview study, with 19 participants who recently migrated towards microservices. This study strongly focuses on the human aspects of a migration, through stakeholders and their decisions. Results: We identify 3 decision-making processes consisting of 22 decision points and their alternative options. The decision points are related to creating stakeholder engagement and assessing feasibility, technical implementation, and organizational restructuring. Conclusions: Our study provides an initial theory of decision-making in migrations to microservices. It also outfits practitioners with a roadmap of which decisions they should be prepared to make and at which point in the migration. Comment: 11 pages, 7 figures.

    Predicting Test Case Verdicts Using Textual Analysis of Committed Code Churns

    Background: Continuous Integration (CI) is an agile software development practice that involves producing several clean builds of the software per day. The creation of these builds involves running a large number of automated tests, which incurs high hardware cost and reduces development velocity. Goal: The goal of our research is to develop a method that reduces the number of executed test cases at each CI cycle. Method: We adopt a design research approach with an infrastructure provider company to develop a method that exploits Machine Learning (ML) to predict test case verdicts for committed source code. We train five different ML models on two data sets and evaluate their performance using two simple retrieval measures: precision and recall. Results: While training the ML models on the first data set of test executions revealed low performance, the curated data set for training improved performance with respect to precision and recall. Conclusion: Our results indicate that the method is applicable when training the ML model on churns of small size.
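    The evaluation step described above, training several models and comparing them by precision and recall, can be sketched as follows. The classifiers, features, and toy churn data are assumptions for illustration; the study's five models and industrial data sets are not reproduced here.

```python
# Minimal sketch (toy setup, not the study's data or models): train several
# off-the-shelf classifiers on churn-derived text features and compare them by
# precision and recall.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import precision_score, recall_score

churns = ["add retry loop to client", "fix buffer overflow in codec",
          "rename config keys", "optimize cache eviction policy"]
verdicts = [1, 0, 1, 0]  # 1 = associated test case expected to pass

X = CountVectorizer().fit_transform(churns)
for model in (DecisionTreeClassifier(), RandomForestClassifier(), MultinomialNB()):
    model.fit(X, verdicts)          # toy setup: evaluated on the training data
    pred = model.predict(X)
    print(type(model).__name__,
          "precision:", precision_score(verdicts, pred, zero_division=0),
          "recall:", recall_score(verdicts, pred, zero_division=0))
```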

    A Rapid Prototyping Language Workbench for Textual DSLs based on Xtext: Vision and Progress

    Metamodel-based DSL development in language workbenches like Xtext allows language engineers to focus more on metamodels and domain concepts rather than grammar details. However, the grammar generated from metamodels often requires manual modification, which can be tedious and time-consuming. Especially during rapid prototyping and language evolution, the grammar is generated repeatedly, meaning that language engineers need to repeat such manual modifications over and over. Previous work introduced GrammarOptimizer, which automatically improves the generated grammar using optimization rules. However, the optimization rules need to be configured manually, which lacks user-friendliness and convenience. In this paper, we present our vision for, and current progress towards, a language workbench that integrates GrammarOptimizer's grammar optimization rules to support rapid prototyping and evolution of metamodel-based languages. It provides visual configuration of optimization rules and a real-time preview of the effects of grammar optimization to address the limitations of GrammarOptimizer. Furthermore, it supports the inference of a grammar based on examples from model instances and offers a selection of language styles. These features aim to raise the level of automation in metamodel-based DSL development with Xtext and to assist language engineers in iterative development and rapid prototyping. Our paper discusses the potential and applications of this language workbench, as well as how it fills gaps in existing language workbenches. Comment: 6 pages, 3 figures.
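    As a rough illustration of rule-based grammar optimization, the sketch below applies a single hypothetical rewrite rule to a generated Xtext-like grammar string. GrammarOptimizer's actual rules and API differ; the rule name and grammar fragment here are assumptions.

```python
# Hypothetical sketch of a configurable grammar optimization rule: drop a quoted
# keyword (e.g., a redundant attribute label) from every rule of a generated grammar.
import re

def remove_keyword_rule(grammar: str, keyword: str) -> str:
    """Remove every occurrence of the quoted keyword from the grammar text."""
    return re.sub(rf"'{re.escape(keyword)}'\s*", "", grammar)

generated = "Entity: 'entity' name=ID 'attributes' attributes+=Attribute*;"
optimized = remove_keyword_rule(generated, "attributes")
print(optimized)  # Entity: 'entity' name=ID attributes+=Attribute*;
```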

    Selective Regression Testing based on Big Data: Comparing Feature Extraction Techniques

    Regression testing is a necessary activity in continuous integration (CI) since it provides confidence that modified parts of the system are correct at each integration cycle. CI provides large volumes of data which can be used to support regression testing activities. By using machine learning, patterns about faulty changes in the modified program can be induced, allowing test orchestrators to make inferences about test cases that need to be executed at each CI cycle. However, one challenge in using learning models lies in finding a suitable way for characterizing source code changes and preserving important information. In this paper, we empirically evaluate the effect of three feature extraction (FE) algorithms on the performance of an existing ML-based selective regression testing technique. We designed and performed an experiment to empirically investigate the effect of Bag of Words (BoW), Word Embeddings (WE), and content-based feature extraction (CBF). We used stratified cross validation on the space of features generated by the three FE techniques and evaluated the performance of three machine learning models using the precision and recall metrics. The results from this experiment showed a significant difference between the models' precision and recall scores, suggesting that the BoW-fed model outperforms the other two models with respect to precision, whereas the CBF-fed model outperforms the rest with respect to recall.
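    The experimental skeleton, swapping feature extraction techniques in front of the same learner and scoring each variant with stratified cross-validation, can be sketched as below. Bag-of-words and TF-IDF stand in for the study's BoW, WE, and CBF extractors, and the data and learner are illustrative assumptions.

```python
# Minimal sketch (illustrative stand-ins for the study's extractors and data):
# compare feature extraction techniques under stratified cross-validation.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline

changes = ["fix race condition in scheduler", "add logging to request handler",
           "remove dead code path", "tune thread pool size",
           "fix off by one error in loop", "update error message text"]
labels = [1, 0, 0, 1, 1, 0]  # 1 = change led to a failing test

cv = StratifiedKFold(n_splits=3)
for name, extractor in [("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    pipeline = make_pipeline(extractor, LogisticRegression())
    scores = cross_validate(pipeline, changes, labels, cv=cv,
                            scoring=("precision", "recall"))
    print(name, "precision:", scores["test_precision"].mean(),
          "recall:", scores["test_recall"].mean())
```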

    A classification of code changes and test types dependencies for improving machine learning based test selection

    Machine learning has been increasingly used to solve various software engineering tasks. One example of its usage is in regression testing, where a classifier is built using historical code commits to predict which test cases require execution. In this paper, we address the problem of how to link specific code commits to test types in order to improve the predictive performance of learning models for regression testing. We design a dependency taxonomy of the content of committed code and the type of a test case. The taxonomy focuses on two types of code commits: changes to memory management and changes to algorithm complexity. We reviewed the literature, surveyed experienced testers from three Swedish-based software companies, and conducted a workshop to develop the taxonomy. The derived taxonomy shows that memory management code should be tested with tests related to performance, load, soak, stress, volume, and capacity; algorithm complexity changes should be tested with the same dedicated tests plus maintainability tests. We conclude that this taxonomy can improve the effectiveness of building learning models for regression testing.
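    One way such a taxonomy could be operationalized for test selection is sketched below: a mapping from change category to test types, queried per commit. The category keys and lookup function are illustrative assumptions; only the mapped test types come from the taxonomy summarized above.

```python
# Sketch of taxonomy-driven test selection: map the category of a committed
# change to the test types that should run for it.
TAXONOMY = {
    "memory_management": {"performance", "load", "soak", "stress", "volume", "capacity"},
    "algorithm_complexity": {"performance", "load", "soak", "stress", "volume",
                             "capacity", "maintainability"},
}

def tests_for_commit(change_categories):
    """Union of test types required by all change categories touched in a commit."""
    selected = set()
    for category in change_categories:
        selected |= TAXONOMY.get(category, set())
    return sorted(selected)

print(tests_for_commit(["memory_management"]))
print(tests_for_commit(["algorithm_complexity", "memory_management"]))
```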

    Walking Through the Method Zoo: Does Higher Education Really Meet Software Industry Demands?

    Software engineering educators are continually challenged by rapidly evolving concepts, technologies, and industry demands. Due to the omnipresence of software in a digitalized society, higher education institutions (HEIs) have to educate students so that they learn how to learn, and so that they are equipped with both a profound basic knowledge and the latest knowledge about modern software and system development. Since industry demands change constantly, HEIs are challenged to meet such current and future demands in a timely manner. This paper analyzes the current state of practice in software engineering education. Specifically, we compare contemporary education with industrial practice to understand whether the frameworks, methods, and practices for software and system development taught at HEIs reflect industrial practice. For this, we conducted an online survey and collected information about 67 software engineering courses. Our findings show that development approaches taught at HEIs quite closely reflect industrial practice. We also found that the choice of which process to teach is sometimes driven by the wish to make a course successful. Especially when this happens for project courses, it could be beneficial to put more emphasis on building learning sequences with other courses.