Technical Debt Prioritization: State of the Art. A Systematic Literature Review
Background. Software companies need to manage and refactor Technical Debt
issues. Therefore, it is necessary to understand whether and when refactoring
Technical Debt should be prioritized over developing features or fixing bugs.
Objective. The goal of this study is to investigate the existing
body of knowledge in software engineering to understand what Technical Debt
prioritization approaches have been proposed in research and industry. Method.
We conducted a Systematic Literature Review of 384 unique papers published
until 2018, following a consolidated methodology applied in Software
Engineering. We included 38 primary studies. Results. Different approaches have
been proposed for Technical Debt prioritization, all having different goals and
optimizing on different criteria. The proposed measures capture only a small
part of the plethora of factors used to prioritize Technical Debt qualitatively
in practice. We report an impact map of such factors. However, there is a lack
of empirically validated tools. Conclusion. We observed that Technical Debt
prioritization research is still preliminary and that there is no consensus on
which factors are important or how to measure them. Consequently, current
research cannot be considered conclusive, and in this paper we outline
directions for necessary future investigations.
What to Fix? Distinguishing between design and non-design rules in automated tools
Technical debt---design shortcuts taken to optimize for delivery speed---is a
critical part of long-term software costs. Consequently, automatically
detecting technical debt is a high priority for software practitioners.
Software quality tool vendors have responded to this need by positioning their
tools to detect and manage technical debt. While these tools bundle a number of
rules, it is hard for users to understand which rules identify design issues,
as opposed to syntactic quality. This is important, since previous studies have
revealed the most significant technical debt is related to design issues. Other
research has focused on comparing these tools on open source projects, but
these comparisons have not looked at whether the rules were relevant to design.
We conducted an empirical study using a structured categorization approach, and
manually classified 466 software quality rules from three industry tools---CAST,
SonarQube, and NDepend. We found that most of these rules were easily labeled
as either not design (55%) or design (19%). The remainder (26%) resulted in
disagreements among the labelers. Our results are a first step in formalizing a
definition of a design rule, in order to support automatic detection.
Comment: Long version of accepted short paper at the International Conference on
Software Architecture 2017 (Gothenburg, SE).
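The manual classification step described above can be illustrated with a small consensus-bucketing sketch. Everything below is hypothetical: the rule names, the number of raters, and the unanimity criterion are illustrative assumptions, not the study's actual labeling protocol.

```python
from collections import Counter

def categorize(labels_per_rule):
    """Bucket each rule by rater consensus: a rule keeps its label only
    when all raters agree; otherwise it lands in 'disagreement'.
    (Assumed criterion -- the study's protocol may differ.)"""
    buckets = Counter()
    for rule, labels in labels_per_rule.items():
        unanimous = len(set(labels)) == 1
        buckets[labels[0] if unanimous else "disagreement"] += 1
    return buckets

# Hypothetical ratings from three labelers per rule (illustrative only).
ratings = {
    "avoid-god-class": ["design", "design", "design"],
    "unused-import":   ["not-design", "not-design", "not-design"],
    "long-method":     ["design", "not-design", "design"],
}
print(categorize(ratings))
# Counter({'design': 1, 'not-design': 1, 'disagreement': 1})
```

With real data, the relative sizes of the three buckets would correspond to the 19% / 55% / 26% split the study reports.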
A Quality Model for Actionable Analytics in Rapid Software Development
Background: Accessing relevant data on the product, process, and usage
perspectives of software as well as integrating and analyzing such data is
crucial for getting reliable and timely actionable insights aimed at
continuously managing software quality in Rapid Software Development (RSD). In
this context, several software analytics tools have been developed in recent
years. However, there is a lack of explainable software analytics that software
practitioners trust. Aims: We aimed at creating a quality model (called
Q-Rapids quality model) for actionable analytics in RSD, implementing it, and
evaluating its understandability and relevance. Method: We performed workshops
at four companies in order to determine relevant metrics as well as product and
process factors. We also elicited how these metrics and factors are used and
interpreted by practitioners when making decisions in RSD. We specified the
Q-Rapids quality model by comparing and integrating the results of the four
workshops. Then we implemented the Q-Rapids tool to support the usage of the
Q-Rapids quality model as well as the gathering, integration, and analysis of
the required data. Afterwards we installed the Q-Rapids tool in the four
companies and performed semi-structured interviews with eight product owners to
evaluate the understandability and relevance of the Q-Rapids quality model.
Results: The participants of the evaluation perceived the metrics as well as
the product and process factors of the Q-Rapids quality model as
understandable. Also, they considered the Q-Rapids quality model relevant for
identifying product and process deficiencies (e.g., blocking code situations).
Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model
enables detecting problems that take more time to find manually and adds
transparency among the perspectives of system, process, and usage.
Comment: This is an Author's Accepted Manuscript of a paper to be published by
IEEE in the 44th Euromicro Conference on Software Engineering and Advanced
Applications (SEAA) 2018. The final authenticated version will be available
online.
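As a rough illustration of how a hierarchical quality model can roll raw metrics up into factors and indicators, the sketch below aggregates normalized metric values by weighted mean. The metric names, weights, and aggregation rule are assumptions made for illustration; the actual Q-Rapids model and its rollup rules are defined in the paper.

```python
# Minimal sketch of hierarchical quality-model aggregation (assumed design;
# the real Q-Rapids metrics, weights, and rollup rules differ).
def aggregate(weighted_values):
    """Weighted mean of (weight, value) pairs; values normalized to 0..1."""
    total = sum(w for w, _ in weighted_values)
    return sum(w * v for w, v in weighted_values) / total

# Hypothetical normalized metrics gathered from heterogeneous data sources.
metrics = {"test_coverage": 0.7, "build_stability": 0.9, "comment_density": 0.4}

# Metrics roll up into factors, and factors into a higher-level indicator.
code_quality = aggregate([(2, metrics["test_coverage"]),
                          (1, metrics["comment_density"])])    # 0.6
process_health = aggregate([(1, metrics["build_stability"])])  # 0.9
product_readiness = aggregate([(1, code_quality),
                               (1, process_health)])           # 0.75
print(round(product_readiness, 2))
```

A weighted mean keeps every level of the hierarchy on the same 0..1 scale, which is one simple way to make the rolled-up values comparable and interpretable for product owners.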
Exploring Maintainability Assurance Research for Service- and Microservice-Based Systems: Directions and Differences
To ensure sustainable software maintenance and evolution, a diverse set of activities and concepts like metrics, change impact analysis, or antipattern detection can be used. Special maintainability assurance techniques have been proposed for service- and microservice-based systems, but it is difficult to get a comprehensive overview of this publication landscape. We therefore conducted a systematic literature review (SLR) to collect and categorize maintainability assurance approaches for service-oriented architecture (SOA) and microservices. Our search strategy led to the selection of 223 primary studies from 2007 to 2018 which we categorized with a threefold taxonomy: a) architectural (SOA, microservices, both), b) methodical (method or contribution of the study), and c) thematic (maintainability assurance subfield). We discuss the distribution among these categories and present different research directions as well as exemplary studies per thematic category. The primary finding of our SLR is that, while very few approaches have been suggested for microservices so far (24 of 223, ~11%), we identified several thematic categories where existing SOA techniques could be adapted for the maintainability assurance of microservices.
Investigating Automatic Static Analysis Results to Identify Quality Problems: an Inductive Study
Background: Automatic static analysis (ASA) tools examine source code to discover "issues", i.e. code patterns that are symptoms of bad programming practices and that can lead to defective behavior. Studies in the literature have shown that these tools find defects earlier than other verification activities, but they produce a substantial number of false positive warnings. For this reason, an alternative approach is to use the set of ASA issues to identify defect-prone files and components rather than focusing on the individual issues. Aim: We conducted an exploratory study to investigate whether ASA issues can be used as early indicators of faulty files and components and, for the first time, whether they point to a decay of specific software quality attributes, such as maintainability or functionality. Our aim is to understand the critical parameters and feasibility of such an approach to feed into future research on more specific quality and defect prediction models. Method: We analyzed an industrial C# web application using the Resharper ASA tool and explored if significant correlations exist in such a data set. Results: We found promising results when predicting defect-prone files. A set of specific Resharper categories are better indicators of faulty files than common software metrics or the collection of issues of all issue categories, and these categories correlate to different software quality attributes. Conclusions: Our advice for future research is to perform analysis on file rather than component level and to evaluate the generalizability of categories. We also recommend using larger datasets as we learned that data sparseness can lead to challenges in the proposed analysis process.
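The kind of per-file correlation analysis described above can be sketched as a Spearman rank correlation between issue counts and defect counts. The data and the plain-Python implementation below are illustrative assumptions; the study itself analyzed Resharper output on an industrial C# system.

```python
def rank(values):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-file counts (illustrative, not the study's data):
issues  = [12, 3, 7, 0, 25, 5]   # ASA issues in one issue category
defects = [4, 1, 2, 0, 9, 1]     # defects later found per file
print(round(spearman(issues, defects), 2))  # → 0.99
```

A strong positive coefficient for a given issue category would mark it as a candidate early indicator of fault-proneness, which is the relationship the study probes per Resharper category.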
System Qualities Ontology, Tradespace and Affordability (SQOTA) Project – Phase 4
This task was proposed and established as a result of a pair of 2012 workshops sponsored by the DoD Engineered Resilient Systems technology priority area and by the SERC. The workshops focused on how best to strengthen DoD’s capabilities in dealing with its systems’ non-functional requirements, often also called system qualities, properties, levels of service, and –ilities. The term –ilities was often used during the workshops, and became the title of the resulting SERC research task: “ilities Tradespace and Affordability Project (iTAP).” As the project progressed, the term “ilities” often became a source of confusion, as in “Do your results include considerations of safety, security, resilience, etc., which don’t have “ility” in their names?” Also, as our ontology, methods, processes, and tools became of interest across the DoD and across international and standards communities, we found that the term “System Qualities” was most often used. As a result, we are changing the name of the project to “System Qualities Ontology, Tradespace, and Affordability (SQOTA).” Some of this year’s university reports still refer to the project as “iTAP.” This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) under Contract HQ0034-13-D-0004.