4,144 research outputs found

    A Longitudinal Study of Identifying and Paying Down Architectural Debt

    Full text link
    Architectural debt is a form of technical debt that derives from the gap between the architectural design of the system as it "should be" and as it "is". We measured architecture debt in two ways: 1) in terms of system-wide coupling measures, and 2) in terms of the number and severity of architectural flaws. Recent work has shown that the amount of architectural debt has a huge impact on software maintainability and evolution. Consequently, detecting and reducing the debt is expected to make software more amenable to change. This paper reports on a longitudinal study of a healthcare communications product created by Brightsquid Secure Communications Corp. This start-up company is facing the typical trade-off problem of desiring responsiveness to change requests while wanting to avoid the ever-increasing effort that the accumulation of quick-and-dirty changes eventually incurs. In the first stage of the study, we analyzed the status of the "before" system, which indicated the impacts of change requests. This initial study motivated a more in-depth analysis of architectural debt. The results of this analysis were used to motivate a comprehensive refactoring of the software system. The third phase of the study was a follow-on architectural debt analysis that quantified the improvements made. Using this quantitative evidence, augmented by qualitative evidence gathered from in-depth interviews with Brightsquid's architects, we present lessons learned about the costs and benefits of paying down architecture debt in practice. Comment: Submitted to ICSE-SEIP 201
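    The abstract does not name the coupling measures used, so the following is only an illustrative sketch under that caveat: propagation cost, one widely used system-wide coupling measure, computed as the density of the transitive closure of a module dependency graph. The module names and toy dependency map are invented for the example.

    # Illustrative sketch only: the abstract does not name its coupling measures.
    # Propagation cost is one common system-wide coupling measure: the density
    # of the transitive closure of the module dependency matrix.
    from itertools import product

    def propagation_cost(deps):
        """deps maps each module to the set of modules it directly depends on."""
        modules = sorted(set(deps) | {t for targets in deps.values() for t in targets})
        index = {m: i for i, m in enumerate(modules)}
        n = len(modules)
        reach = [[False] * n for _ in range(n)]
        for m, targets in deps.items():
            for t in targets:
                reach[index[m]][index[t]] = True
        # Warshall's algorithm: mark every module reachable through any chain.
        for k, i, j in product(range(n), repeat=3):
            if reach[i][k] and reach[k][j]:
                reach[i][j] = True
        return sum(row.count(True) for row in reach) / (n * n)

    # Toy system: a change to "db" can ripple up to both "core" and "ui".
    print(propagation_cost({"ui": {"core"}, "core": {"db"}, "db": set()}))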

    Detecting and Refactoring Operational Smells within the Domain Name System

    Full text link
    The Domain Name System (DNS) is one of the most important components of the Internet infrastructure. DNS relies on a delegation-based architecture, where resolution of names to their IP addresses requires resolving the names of the servers responsible for those names. The recursive structures of the interdependencies that exist between the name servers associated with each zone are called dependency graphs. System administrators' operational decisions have far-reaching effects on the DNS's qualities, and they need to be made soundly to balance the availability, security, and resilience of the system. We utilize dependency graphs to identify, detect, and catalogue operational bad smells. Our method deals with smells at a high level of abstraction using a consistent taxonomy and reusable vocabulary defined by a DNS Operational Model. The method will be used to build a diagnostic advisory tool that detects configuration changes that might decrease the robustness or security posture of domain names before they go into production. Comment: In Proceedings GaM 2015, arXiv:1504.0244
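    As a rough illustration of the dependency graphs described above (not the authors' tooling), the sketch below follows NS records with the dnspython library, which is an assumption, and records which name-server zones each zone depends on. The two-label "parent zone" heuristic is a deliberate simplification.

    # Minimal sketch, not the authors' tooling: follow NS records with the
    # dnspython library (an assumption) to build a partial dependency graph.
    import dns.exception
    import dns.resolver

    def ns_dependencies(zone, depth=2, graph=None):
        """Map each zone to the name-server hosts serving it, then recurse into
        the zones those name servers live in (their resolution is a dependency)."""
        graph = {} if graph is None else graph
        if depth == 0 or zone in graph:
            return graph
        try:
            answer = dns.resolver.resolve(zone, "NS")
        except dns.exception.DNSException:
            graph[zone] = set()
            return graph
        servers = {str(r.target).rstrip(".") for r in answer}
        graph[zone] = servers
        for server in servers:
            # Simplification: take the last two labels as the server's zone
            # (this ignores multi-label public suffixes such as "co.uk").
            ns_dependencies(".".join(server.split(".")[-2:]), depth - 1, graph)
        return graph

    print(ns_dependencies("example.com"))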

    Structured Review of the Evidence for Effects of Code Duplication on Software Quality

    Get PDF
    This report presents the detailed steps and results of a structured review of code clone literature. The aim of the review is to investigate the evidence for the claim that code duplication has a negative effect on code changeability. This report contains only those details of the review for which there was not enough space in the companion conference paper (Hordijk, Ponisio et al. 2009, Harmfulness of Code Duplication: A Structured Review of the Evidence).

    Using Modularity Metrics to Assist Move Method Refactoring of Large Systems

    Full text link
    For large software systems, refactoring activities can be a challenging task, since keeping component complexity under control requires considering the overall architecture as well as many details of each component. Product metrics are therefore often used to quantify several parameters related to the modularity of a software system. This paper devises an approach for automatically suggesting refactoring opportunities on large software systems. We show that by assessing metrics for all components, move method refactorings can be suggested in such a way as to improve the modularity of several components at once without worsening any others. However, computing metrics for large software systems, comprising thousands of classes or more, can be a time-consuming task when performed on a single CPU. For this reason, we propose a solution that computes metrics on the GPU, hence greatly shortening computation time. Thanks to our approach, precise knowledge of several properties of the system can be gathered continuously while the system evolves, helping developers quickly assess several solutions for reducing modularity issues.
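    The paper's actual modularity metrics and their GPU implementation are not given in the abstract; as a hedged stand-in, the sketch below applies a simple feature-envy-style heuristic that flags a method as a move-method candidate when it accesses another class's members more often than its own. All class and method names are hypothetical.

    # Illustrative heuristic only; the paper's metrics and GPU computation differ.
    # A method is flagged as a move-method candidate when it touches another
    # class more than the class it is defined in.
    from collections import Counter

    def suggest_moves(methods):
        """methods: list of (method_name, owner_class, accessed_classes), where
        accessed_classes has one entry per member access the method performs."""
        suggestions = []
        for name, owner, accesses in methods:
            counts = Counter(accesses)
            target, hits = max(counts.items(), key=lambda kv: kv[1], default=(owner, 0))
            if target != owner and hits > counts.get(owner, 0):
                suggestions.append((name, owner, target))
        return suggestions

    # Toy example: renderInvoice mostly manipulates Invoice, not ReportPage.
    print(suggest_moves([
        ("renderInvoice", "ReportPage", ["Invoice", "Invoice", "Invoice", "ReportPage"]),
        ("addLine", "Invoice", ["Invoice", "Invoice"]),
    ]))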

    Technical Debt Prioritization: State of the Art. A Systematic Literature Review

    Get PDF
    Background. Software companies need to manage and refactor Technical Debt issues. Therefore, it is necessary to understand if and when refactoring Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering to understand what Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review of 384 unique papers published up to 2018, following a consolidated methodology applied in Software Engineering. We included 38 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, all having different goals and optimizing on different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice. We report an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is preliminary and that there is no consensus on which factors are important and how to measure them. Consequently, we cannot consider current research conclusive, and in this paper we outline different directions for necessary future investigations.

    Refactoring Process Models in Large Process Repositories.

    Get PDF
    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time, these models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introduced and complexity is increased. If no continuous investment is made in keeping models simple, changes become increasingly costly and error-prone. Though refactoring techniques are widely used in software engineering to address related problems, this does not yet constitute state of the art in business process management. Process designers either have to refactor process models by hand or cannot apply such techniques at all. This paper proposes a set of behaviour-preserving techniques for refactoring large process repositories. This enables process designers to effectively deal with model complexity by making process models easier to understand and maintain.
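    As a hedged illustration of the kind of redundancy such refactorings target (not the techniques proposed in the paper), the sketch below finds the longest duplicated activity sequence shared by two process models, a natural candidate for extraction into a shared sub-process. The activity labels are invented.

    # Hedged illustration, not the paper's technique: locate a duplicated
    # activity sequence that could be extracted into a shared sub-process.
    from difflib import SequenceMatcher

    def duplicated_fragment(model_a, model_b):
        """model_a, model_b: ordered lists of activity labels from two process
        models. Returns the longest common contiguous fragment."""
        match = SequenceMatcher(None, model_a, model_b).find_longest_match(
            0, len(model_a), 0, len(model_b))
        return model_a[match.a:match.a + match.size]

    claims = ["receive claim", "check coverage", "assess damage", "pay out"]
    complaints = ["receive complaint", "check coverage", "assess damage", "reply"]
    # ["check coverage", "assess damage"] is a candidate shared sub-process.
    print(duplicated_fragment(claims, complaints))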

    Refactorings of Design Defects using Relational Concept Analysis

    Get PDF
    Software engineers often need to identify and correct design defects, i.e., recurring design problems that hinder development and maintenance by making programs harder to comprehend and/or evolve. While detection of design defects is an actively researched area, their correction (mainly a manual and time-consuming activity) is yet to be extensively investigated for automation. In this paper, we propose an automated approach for suggesting defect-correcting refactorings using relational concept analysis (RCA). The added value of RCA consists in exploiting the links between formal objects, which abound in a software re-engineering context. We validated our approach on instances of the Blob design defect taken from four different open-source programs.
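    Relational concept analysis extends formal concept analysis with links between objects; the toy sketch below shows only the plain FCA part, under the assumption that grouping a Blob class's methods by the fields they share is a reasonable proxy for how cohesive groups are identified for extraction. Method and field names are made up.

    # Toy formal concept analysis (FCA) sketch, a simplification of the RCA the
    # paper uses: group a Blob's methods by shared fields to reveal cohesive
    # clusters that could be extracted into separate classes.
    from itertools import combinations

    def concepts(context):
        """context maps each method to the set of fields it uses.
        Returns formal concepts as (methods, shared_fields) pairs."""
        methods = list(context)
        found = set()
        for r in range(1, len(methods) + 1):
            for group in combinations(methods, r):
                shared = set.intersection(*(context[m] for m in group))
                # Extent = every method that uses at least these shared fields.
                extent = frozenset(m for m in methods if shared <= context[m])
                found.add((extent, frozenset(shared)))
        return found

    blob = {
        "printHeader": {"title", "date"},
        "printFooter": {"date"},
        "computeTax": {"amount", "rate"},
        "computeTotal": {"amount", "rate", "discount"},
    }
    for extent, intent in sorted(concepts(blob), key=lambda c: -len(c[1])):
        print(sorted(extent), "share", sorted(intent))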