    What to Fix? Distinguishing between design and non-design rules in automated tools

    Technical debt (design shortcuts taken to optimize for delivery speed) is a critical part of long-term software costs. Consequently, automatically detecting technical debt is a high priority for software practitioners. Software quality tool vendors have responded to this need by positioning their tools to detect and manage technical debt. While these tools bundle a number of rules, it is hard for users to understand which rules identify design issues, as opposed to syntactic quality. This is important, since previous studies have revealed that the most significant technical debt is related to design issues. Other research has focused on comparing these tools on open source projects, but these comparisons have not looked at whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools: CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either non-design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, in order to support automatic detection.
    Comment: Long version of the accepted short paper at the International Conference on Software Architecture 2017 (Gothenburg, SE)
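    As a rough illustration of the categorization procedure, the sketch below assigns each rule the label that all labelers agree on and falls back to a disagreement bucket otherwise. The rule names, the unanimity criterion, and all identifiers are hypothetical assumptions; only the three outcome categories (design, non-design, disagreement) come from the abstract.

```python
from collections import Counter

# Hypothetical labels assigned independently by three labelers to a few
# quality rules; the actual study classified 466 rules from CAST,
# SonarQube, and NDepend.
labels = {
    "avoid-cyclic-package-dependencies": ["design", "design", "design"],
    "line-too-long": ["non-design", "non-design", "non-design"],
    "god-class": ["design", "design", "non-design"],
}

def categorize(votes):
    """Label a rule only when the labelers agree unanimously (an assumption)."""
    counts = Counter(votes)
    if len(counts) == 1:
        return votes[0]
    return "disagreement"

for rule, votes in labels.items():
    print(f"{rule}: {categorize(votes)}")
```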

    Improving primary teachers’ subject knowledge across the curriculum: a summary of evidence from subject surveys (excluding English and mathematics) 2007/08

    Annotated bibliography of Software Engineering Laboratory literature

    Get PDF
    An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. All materials have been grouped into eight general subject areas for easy reference: The Software Engineering Laboratory; The Software Engineering Laboratory: Software Development Documents; Software Tools; Software Models; Software Measurement; Technology Evaluations; Ada Technology; and Data Collection. Subject and author indexes further classify these documents by specific topic and individual author.

    Lightweight Multilingual Software Analysis

    Developer preferences, language capabilities, and the persistence of older languages contribute to the trend that large software codebases are often multilingual, that is, written in more than one computer language. While developers can leverage monolingual software development tools to build software components, companies are faced with the problem of managing the resultant large, multilingual codebases to address issues with security, efficiency, and quality metrics. The key challenge is the opaque nature of the language interoperability interface: one language calling procedures in a second (which may call a third, or even back to the first), resulting in a potentially tangled, inefficient, and insecure codebase. An architecture for lightweight static analysis of large multilingual codebases is proposed: the MLSA architecture. Its modular and table-oriented structure addresses the open-ended nature of multiple languages and language interoperability APIs. As an application, we focus here on the construction of call graphs that capture both inter-language and intra-language calls. The algorithms for extracting multilingual call graphs from codebases are presented, and several examples of multilingual software engineering analysis are discussed. The state of the implementation and testing of MLSA is presented, and the implications for future work are discussed.
    Comment: 15 pages
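    The table-oriented idea can be sketched as follows: each language front end contributes a flat table of caller/callee pairs, and a separate binding table links symbols across the language interoperability interface. The names, table schema, and binding mechanism below are illustrative assumptions, not MLSA's actual design.

```python
from collections import defaultdict

# Per-language call tables, one (caller, callee) pair per row (hypothetical data).
python_calls = [("py:app.main", "py:app.load"), ("py:app.load", "py:ext.parse")]
c_calls = [("c:parse", "c:tokenize")]

# Interoperability table: which symbol in one language resolves to which in another.
bindings = {"py:ext.parse": "c:parse"}

def build_call_graph(*tables, bindings=None):
    """Merge per-language call tables into one adjacency map, following bindings."""
    bindings = bindings or {}
    graph = defaultdict(set)
    for table in tables:
        for caller, callee in table:
            graph[caller].add(bindings.get(callee, callee))
    return graph

graph = build_call_graph(python_calls, c_calls, bindings=bindings)
for caller, callees in sorted(graph.items()):
    print(caller, "->", sorted(callees))
```

    Keeping each language's table flat means a new front end can be added without touching the merge step, which is one way to read the open-endedness the abstract emphasizes.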

    A Critical Review of "Automatic Patch Generation Learned from Human-Written Patches": Essay on the Problem Statement and the Evaluation of Automatic Software Repair

    At ICSE 2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular, we discuss the main evaluation criteria of automatic software repair: understandability, correctness, and completeness. We show that, depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Finally, we discuss the nature of fix acceptability and its relation to the notion of software correctness.
    Comment: ICSE 2014, India (2014)

    Scoping review on interventions to improve adherence to reporting guidelines in health research

    Objectives: The goal of this study is to identify, analyse and classify interventions to improve adherence to reporting guidelines in order to obtain a wide picture of how the problem of enhancing the completeness of reporting of biomedical literature has been tackled so far.
    Design: Scoping review.
    Search strategy: We searched the MEDLINE, EMBASE and Cochrane Library databases and conducted a grey literature search for (1) studies evaluating interventions to improve adherence to reporting guidelines in health research and (2) other types of references describing interventions that have been performed or suggested but never evaluated. The characteristics and effect of the evaluated interventions were analysed. Moreover, we explored the rationale of the interventions identified and determined the existing gaps in research on the evaluation of interventions to improve adherence to reporting guidelines.
    Results: 109 references containing 31 interventions (11 evaluated) were included. These were grouped into five categories: (1) training on the use of reporting guidelines, (2) improving understanding, (3) encouraging adherence, (4) checking adherence and providing feedback, and (5) involvement of experts. Additionally, we identified a lack of evaluated interventions (1) on training on the use of reporting guidelines and improving their understanding, (2) at early stages of research, and (3) after the final acceptance of the manuscript.
    Conclusions: This scoping review identified a wide range of strategies to improve adherence to reporting guidelines that can be taken up by different stakeholders. Additional research is needed to assess the effectiveness of many of these interventions.