    Are Smell-Based Metrics Actually Useful in Effort-Aware Structural Change-Proneness Prediction? An Empirical Study

    Bad smells (also called code smells) are symptoms of poor design choices in an implementation. Existing studies have empirically confirmed that the presence of code smells increases the likelihood of subsequent changes (i.e., change-proneness). However, to the best of our knowledge, no prior study has leveraged smell-based metrics to predict a particular change type (i.e., structural changes). Moreover, when evaluating the effectiveness of smell-based metrics in structural change-proneness prediction, none of the existing studies takes into account the effort of inspecting the change-prone source code. In this paper, we consider five smell-based metrics for effort-aware structural change-proneness prediction and compare them with a baseline of the well-known CK metrics in predicting particular categories of change types. Specifically, we first employ univariate logistic regression to analyze the correlation between each smell-based metric and structural change-proneness. Then, we build multivariate prediction models to examine the effectiveness of the smell-based metrics in effort-aware structural change-proneness prediction when used alone and when used together with the baseline metrics. Our experiments, conducted on six Java open-source projects with up to 60 versions each, indicate that: (1) all smell-based metrics are significantly related to structural change-proneness, except ANS in hive and SCM in camel after removing the confounding effect of file size; (2) in most cases, the smell-based metrics outperform the baseline metrics in predicting structural change-proneness; and (3) when used together with the baseline metrics, the smell-based metrics are more effective at predicting change-prone files when inspection effort is taken into account.
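    To make the univariate step concrete, the following is a minimal sketch (not the authors' implementation) of regressing structural change-proneness on a single smell-based metric, assuming the statsmodels library; the metric values and labels are hypothetical placeholders, and the paper's control for file size could be approximated by adding size as a second covariate.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Hypothetical per-file data: one smell-based metric (e.g., a smell
    # count) and a binary label marking whether the file later underwent
    # a structural change.
    smell_metric = rng.poisson(lam=2.0, size=200).astype(float)
    p = 1 / (1 + np.exp(-(0.6 * smell_metric - 1.5)))
    changed = (rng.random(200) < p).astype(int)

    X = sm.add_constant(smell_metric)          # intercept + one predictor
    model = sm.Logit(changed, X).fit(disp=0)   # univariate logistic regression

    print(model.params)    # direction and size of the metric's effect
    print(model.pvalues)   # significance of the association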

    Definitions of a Software Smell

    Many authors have defined smells from their own perspectives. This document attempts to provide a consolidated list of such definitions.

    PHP code smells in web apps: Evolution, survival and anomalies

    Rio, A., & Abreu, F. B. E. (2023). PHP code smells in web apps: Evolution, survival and anomalies. Journal of Systems and Software, 200, Article 111644, 1-23. https://doi.org/10.1016/j.jss.2023.111644

    Context: Code smells are symptoms of poor design that lead to future problems, such as reduced maintainability. It is therefore necessary to understand how they evolve and how long they stay in code. This paper presents a longitudinal study on the evolution and survival of code smells (CS) in web apps built with PHP, the most widely used server-side programming language in web development and one that is seldom studied. Objectives: We aimed to discover how CS evolve and what their survival/lifespan is in typical PHP web apps. Does CS survival depend on their scope or on the app's life period? Are there sudden variations (anomalies) in CS density over the evolution of web apps? Method: We analyzed the evolution of 18 CS in 12 PHP web applications and compared it with changes in app and team size. We characterized the distribution of CS and used survival analysis techniques to study their lifespan. We specialized the survival studies into localized CS (a specific location) and scattered CS (spanning multiple classes/methods), and further split the observations for each web app into two consecutive time frames. For the CS evolution anomalies, we standardized the detection criteria. Results: The CS density trend along the evolution of PHP web apps is mostly stable, with variations, and correlates with the number of developers. We identified the smells that survived the longest. CS live, on average, about 37% of the life of the applications, almost 4 years on average in our study; around 61% of the CS introduced are removed. Most applications have different survival times for localized and scattered CS, with localized CS having a shorter life. CS survival time is shorter, and more CS are introduced and removed, in the first half of an application's life. We found anomalies in the evolution of 5 apps and show how a graphical representation of the sudden variations found in the evolution of CS unveils the story of a development project. Conclusion: CS stay in code for a long time. The removal rate is low and has not changed substantially in recent years. An effort should be made to avoid this behavior and turn the CS density trend into a decreasing one.
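    As an illustration of the survival-analysis machinery such studies rely on, here is a minimal sketch assuming the lifelines library; the durations and censoring flags are hypothetical placeholders, not the paper's data.

    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(1)

    # Hypothetical smell instances: lifetime in days, plus whether the
    # removal was observed (True) or the smell was still alive at the end
    # of the observation window (False, i.e., right-censored).
    durations = rng.exponential(scale=1400, size=300)
    removed = rng.random(300) < 0.61   # roughly the removal rate reported above

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=removed, label="all code smells")

    print(kmf.median_survival_time_)       # estimated median lifespan in days
    print(kmf.survival_function_.head())   # survival probability over time

    Comparing localized and scattered smells, as the paper does, would amount to fitting one estimator per category and comparing the resulting curves.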

    On the Survival of Android Code Smells in the Wild

    The success of smartphones and app stores has contributed to the explosion in the number of mobile apps offered to end-users. In this very competitive market, developers are rushed to regularly release new versions of their apps in order to retain users. Under such pressure, app developers may be tempted to adopt bad design or implementation choices, leading to the introduction of code smells. Mobile-specific code smells represent a real concern in mobile software engineering. Many studies have proposed tools to automatically detect their presence and quantify their impact on performance. However, there is so far no evidence about the lifespan of these code smells in the history of mobile apps. In this paper, we present the first large-scale empirical study investigating the survival of Android code smells. The study covers 8 types of Android code smells, 324 Android apps, 255k commits, and the history of 180k code smell instances. We report that, while in terms of time Android code smells can remain in the codebase for years before being removed, it takes only 34 effective commits to remove 75% of them. Android code smells also disappear faster in bigger projects with higher release frequencies. Finally, we observed that code smells that are detected and prioritised by linters tend to disappear before other code smells.
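    One way to picture the "effective commits" measure (read here as commits that actually touched the smelly file between introduction and removal, which is an assumption about the metric's definition) is the small stdlib-only sketch below; the file path, commit indices, and smell record are hypothetical placeholders, not the study's tooling.

    from bisect import bisect_right

    # Hypothetical history: for each file, the ordered indices of the
    # commits that modified it; for each smell instance, its file and the
    # commits at which it was introduced and removed.
    commits_touching = {"app/ui/Main.java": [3, 8, 21, 40, 57, 90, 130]}
    smells = [("GodClass", "app/ui/Main.java", 8, 90)]

    for kind, path, introduced, removed in smells:
        touched = commits_touching[path]
        # Commits strictly after the introduction, up to and including
        # the commit that removed the smell.
        effective = bisect_right(touched, removed) - bisect_right(touched, introduced)
        print(f"{kind} in {path}: gone after {effective} effective commits")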

    Detailed Overview of Software Smells

    This document provides an overview of the literature on software smells, covering various dimensions of smells along with their corresponding references.

    Internal quality evolution of a large test system – an industrial study

    This paper presents our empirical observations on the evolution of a large automated test system. The system observed is used in industry as a test tool for complex telecommunication systems and itself consists of more than one million lines of source code. The study evaluates how different changes during development have affected the number of observed code smells in the test system. We monitored the development of the test scripts and measured the code quality characteristics over a five-year period.

    Code Smell Detection Techniques and Process: A Review

    A code smell is a hint that something has gone wrong somewhere in your code. The idea of code smells was introduced to characterize different types of design shortcomings in code. Code and design smells are poor solutions to recurring implementation and design problems, and they may hinder the evolution of a system by making it hard for software engineers to carry out changes. In this paper, we review code smell detection tools such as Décor, InFusion, JDeodorant, PMD, and Stench Blossom, and we discuss various code smell detection techniques. Code clones are identical fragments of source code that may be introduced deliberately or inadvertently; reusing code fragments, with or without minor adjustments, is a common practice in software development. We examined several papers to explore the various tools and techniques used for code smell detection, and we also review the code smell detection process.
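    As a concrete taste of the simplest clone-detection idea touched on above, here is a minimal hashing sketch in Python; it is an illustrative heuristic, not one of the reviewed tools, and the normalization is deliberately naive.

    import hashlib
    from collections import defaultdict

    WINDOW = 5  # minimum clone length, in lines

    def find_clones(files: dict) -> dict:
        """Hash every normalized WINDOW-line fragment and report the
        fragments that occur in more than one place."""
        index = defaultdict(list)
        for path, text in files.items():
            # Strip indentation so formatting-only differences do not
            # hide an otherwise identical fragment.
            lines = [line.strip() for line in text.splitlines()]
            for i in range(len(lines) - WINDOW + 1):
                window = "\n".join(lines[i:i + WINDOW])
                digest = hashlib.sha1(window.encode()).hexdigest()
                index[digest].append((path, i + 1))
        return {h: locs for h, locs in index.items() if len(locs) > 1}

    Called with a mapping from file paths to their contents, it returns each duplicated fragment together with every location where it appears; real clone detectors add token-level normalization and near-miss matching on top of this idea.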

    Mining Version Histories for Detecting Code Smells

    Code smells are symptoms of poor design and implementation choices that may hinder code comprehension and possibly increase change- and fault-proneness. While most detection techniques rely only on structural information, many code smells are intrinsically characterized by how code elements change over time. In this paper, we propose Historical Information for Smell deTection (HIST), an approach exploiting change history information to detect instances of five different code smells, namely Divergent Change, Shotgun Surgery, Parallel Inheritance, Blob, and Feature Envy. We evaluate HIST in two empirical studies. The first, conducted on 20 open source projects, aimed at assessing the accuracy of HIST in detecting instances of the code smells mentioned above. The results indicate that the precision of HIST ranges between 72 and 86 percent, and its recall ranges between 58 and 100 percent. The results of the first study also indicate that HIST is able to identify code smells that cannot be identified by competitive approaches based solely on code analysis of a single system's snapshot. We then conducted a second study aimed at investigating to what extent the code smells detected by HIST (and by competitive code analysis techniques) reflect developers' perception of poor design and implementation choices. We involved 12 developers of four open source projects, who recognized more than 75 percent of the code smell instances identified by HIST as actual design/implementation problems.
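    For intuition only, the history-based signal HIST exploits can be caricatured in a few lines of Python: a class whose changes routinely co-occur with changes to many other classes looks like a Shotgun Surgery candidate. This sketch is not the authors' association-rule mining; the commit data and threshold are hypothetical.

    from collections import defaultdict

    # Each commit is modeled as the set of classes it changed.
    commits = [
        {"Order", "Invoice", "Shipping", "Audit"},
        {"Order", "Invoice", "Audit"},
        {"Order", "Shipping", "Audit", "Report"},
        {"Parser"},
    ]

    partners_per_commit = defaultdict(list)
    for changed in commits:
        for cls in changed:
            # How many other classes changed alongside this one.
            partners_per_commit[cls].append(len(changed) - 1)

    THRESHOLD = 2.0  # average co-changed classes; tuning is project-specific
    for cls, partners in partners_per_commit.items():
        avg = sum(partners) / len(partners)
        if len(partners) >= 2 and avg >= THRESHOLD:
            print(f"{cls}: avg {avg:.1f} co-changed classes across {len(partners)} commits")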
