
    What to Fix? Distinguishing between design and non-design rules in automated tools

    Technical debt---design shortcuts taken to optimize for delivery speed---is a critical part of long-term software costs. Consequently, automatically detecting technical debt is a high priority for software practitioners. Software quality tool vendors have responded to this need by positioning their tools to detect and manage technical debt. While these tools bundle a number of rules, it is hard for users to understand which rules identify design issues, as opposed to syntactic quality. This matters because previous studies have revealed that the most significant technical debt is related to design issues. Other research has compared these tools on open source projects, but those comparisons have not examined whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools---CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either not design (55%) or design (19%); the remainder (26%) resulted in disagreements among the labelers. Our results are a first step toward formalizing a definition of a design rule, in order to support automatic detection.
    Comment: Long version of accepted short paper at the International Conference on Software Architecture 2017 (Gothenburg, SE)
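    The 26% of rules that produced disagreement invites a measure of inter-rater agreement. Purely as an illustration of how such agreement could be quantified (this is not the authors' analysis, and the labels below are invented), a minimal sketch using scikit-learn's Cohen's kappa might look like:

        # Hypothetical sketch: agreement between two labelers tagging quality rules
        # as "design", "not-design", or "unsure". Labels are invented, not study data.
        from sklearn.metrics import cohen_kappa_score

        labeler_a = ["design", "not-design", "not-design", "design", "unsure"]
        labeler_b = ["design", "not-design", "design", "design", "unsure"]

        kappa = cohen_kappa_score(labeler_a, labeler_b)
        print(f"Cohen's kappa between labelers: {kappa:.2f}")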

    A Study of Documentation for Software Architecture

    Documentation is an important mechanism for disseminating software architecture knowledge. Software project teams can employ vastly different formats for documenting software architecture, from unstructured narratives to standardized documents. We explored to what extent this documentation format matters to newcomers joining a software project and attempting to understand its architecture. We conducted a controlled questionnaire-based study in which we asked 65 participants to answer software architecture understanding questions using one of two randomly assigned documentation formats: narrative essays or structured documents. We analyzed the factors associated with answer quality using a Bayesian ordered categorical regression and observed no significant association between the format of architecture documentation and performance on architecture understanding tasks. Instead, prior exposure to the source code of the system was the dominant factor associated with answer quality. We also observed that answers to questions requiring applying and creating activities were statistically significantly associated with the use of the system's source code to answer the question, whereas the document format and level of familiarity with the system were not. Subjective sentiment about the documentation formats was comparable: although more participants agreed that the structured document was easier to navigate and to use for writing code, this relation was not statistically significant. We conclude that, in the limited experimental context studied, our results contradict the hypothesis that the format of architectural documentation matters. We surface two more important factors related to effective use of software architecture documentation: prior familiarity with the source code, and the type of architectural information sought.
    Comment: accepted to EMSE
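    For readers unfamiliar with ordered categorical (ordinal) regression, the following is a rough, hypothetical sketch of fitting such a model on invented answer-quality data with statsmodels; it is a frequentist stand-in, not the Bayesian model or the data used in the paper, and the column names are made up for illustration.

        # Illustrative only: ordinal regression of answer quality (an ordered outcome)
        # on documentation format and prior source-code familiarity.
        # All data below are invented; the paper fits a Bayesian model instead.
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        df = pd.DataFrame({
            "quality": pd.Categorical(
                ["low", "medium", "high", "medium", "high", "low", "high", "medium"],
                categories=["low", "medium", "high"], ordered=True),
            "structured_doc": [1, 0, 1, 1, 0, 0, 1, 0],  # 1 = structured document, 0 = narrative essay
            "knows_source": [0, 0, 1, 0, 1, 0, 1, 1],    # prior exposure to the source code
        })

        model = OrderedModel(df["quality"], df[["structured_doc", "knows_source"]], distr="logit")
        result = model.fit(method="bfgs", disp=False)
        print(result.summary())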

    Registered Reports in Software Engineering

    Registered reports are scientific publications in which the publication process begins with peer review and approval of a detailed research protocol, including the key research questions. Subsequent analysis and results are published with minimal additional review, even if there was no clear support for the underlying hypothesis, as long as the approved protocol is followed. Registered reports can prevent several questionable research practices and give early feedback on research designs. In software engineering research, registered reports were first introduced at the International Conference on Mining Software Repositories (MSR) in 2020. They are now established in three conferences and two pre-eminent journals, including Empirical Software Engineering. We explain the motivation for registered reports, outline how they have been implemented in software engineering, and discuss some ongoing challenges for achieving high-quality software engineering research.
    Comment: in press as EMSE J. comment

    Cross-Dataset Design Discussion Mining

    Being able to identify software discussions that are primarily about design, which we call design mining, can improve documentation and maintenance of software systems. Existing design mining approaches achieve good classification performance using natural language processing (NLP) techniques, but the conclusion stability of these approaches is generally poor: a classifier trained on a given dataset of software projects has so far not worked well on different artifacts or different datasets. In this study, we replicate and synthesize these earlier results in a meta-analysis. We then apply recent work in transfer learning for NLP to the problem of design mining. However, for our datasets, these deep transfer learning classifiers perform no better than less complex classifiers. We conclude by discussing possible reasons why the transfer learning approach did not outperform simpler classifiers for design mining.
    Comment: accepted for SANER 2020, Feb, London, ON. 12 pages. Replication package: https://doi.org/10.5281/zenodo.359012
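    As a concrete, entirely hypothetical example of the kind of "less complex classifier" that deep transfer learning models are often compared against, a bag-of-words baseline for labeling discussion sentences as design or non-design might be sketched as follows; the sentences, labels, and pipeline here are invented and are not part of the study's replication package.

        # Illustrative baseline only: TF-IDF features plus logistic regression to flag
        # design-related discussion sentences. Training examples are invented.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_texts = [
            "We should split this class into two modules to reduce coupling.",
            "Fixed a typo in the changelog.",
            "The observer pattern would decouple the UI from the data layer.",
            "Bumped the dependency version to 2.3.1.",
        ]
        train_labels = [1, 0, 1, 0]  # 1 = design discussion, 0 = not design

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        clf.fit(train_texts, train_labels)
        print(clf.predict(["Consider an interface here to isolate the database code."]))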

    Error Identification Strategies for Python Jupyter Notebooks

    Computational notebooks -- such as Jupyter or Colab -- combine text and data analysis code. They have become ubiquitous in the world of data science and exploratory data analysis. Since these notebooks present a different programming paradigm than conventional IDE-driven programming, it is plausible that debugging in computational notebooks might also be different. More specifically, since creating notebooks blends domain knowledge, statistical analysis, and programming, the ways in which notebook users find and fix errors in these different forms might also differ. In this paper, we present an exploratory, observational study on how Python Jupyter notebook users find and understand potential errors in notebooks. Through a conceptual replication of a study design investigating the error identification strategies of R notebook users, we presented users with Python Jupyter notebooks pre-populated with common notebook errors -- errors rooted in the statistical data analysis, in the knowledge of domain concepts, or in the programming. We then analyzed the strategies our study participants used to find these errors and determined how successful each strategy was at identifying them. Our findings indicate that while the notebook programming environment is different from the environments used for traditional programming, debugging strategies remain quite similar. It is our hope that the insights presented in this paper will help notebook tool designers and educators make changes that allow data scientists to discover errors in their notebooks more easily.
    Comment: 11 pages, 5 listings, 7 tables, to be published at ICPC 202
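    To make the error categories concrete, here is a purely hypothetical notebook-style cell (not taken from the study's materials) contrasting a programming-rooted error with a statistics-rooted one; the data frame and column names are invented.

        # Hypothetical notebook cell illustrating two of the error categories above.
        import pandas as pd

        df = pd.DataFrame({"score": [3, 4, 5, 4, 2], "group": ["a", "a", "b", "b", "b"]})

        # Programming-rooted error: a misspelled column name raises a KeyError.
        # high = df[df["scores"] > 3]        # buggy: the column is "score", not "scores"
        high = df[df["score"] > 3]           # corrected

        # Statistics-rooted error: summarizing a small, skewed sample with only the mean
        # when the median may be the more defensible summary.
        print(df["score"].mean(), df["score"].median())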