5 research outputs found

    Detailed Overview of Software Smells

    Get PDF
    This document provides an overview of the literature on software smells, covering various dimensions of smells along with their corresponding references.

    Characterizing and Detecting Duplicate Logging Code Smells

    Get PDF
    Developers rely on software logs for a wide variety of tasks, such as debugging, testing, program comprehension, verification, and performance analysis. Despite the importance of logs, prior studies show that there is no industrial standard on how to write logging statements. Recent research on logs often considers the appropriateness of a log only as an individual item (e.g., a single logging statement), while logs are typically analyzed in tandem. In this thesis, we focus on studying duplicate logging statements, i.e., logging statements that have the same static text message. Such duplication in the text message is a potential indication of logging code smells, which may affect developers’ understanding of the dynamic view of the system. We manually studied over 3K duplicate logging statements and their surrounding code in four large-scale open source systems: Hadoop, CloudStack, ElasticSearch, and Cassandra. We uncovered five patterns of duplicate logging code smells. For each instance of these code smells, we further manually identified the problematic cases (i.e., those requiring fixes) and the justifiable cases (i.e., those not requiring fixes). We then contacted developers to verify the results of our manual study. We integrated our manual study results and developers’ feedback into our automated static analysis tool, DLFinder, which automatically detects problematic duplicate logging code smells. We evaluated DLFinder on the four manually studied systems and four additional systems: Kafka, Flink, Camel, and Wicket. In total, combining the results of DLFinder and our manual analysis, we reported 91 problematic code smell instances to developers, and all of them have been fixed. This thesis provides an initial step toward a logging guideline that helps developers improve the quality of logging code. DLFinder is also able to detect duplicate logging code smells with high precision and recall.
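    The core idea, grouping logging statements by their static text message, can be illustrated with a minimal sketch. The snippet below is not DLFinder; it simply scans Java files for common logging calls (the regex and the logger identifier names are assumptions) and reports any static message that appears in more than one logging statement.

```python
import re
import sys
from collections import defaultdict
from pathlib import Path

# Hypothetical pattern for common Java logging calls such as LOG.info("...")
# or logger.warn("..." + var); only the leading string literal is treated as
# the static text of the message.
LOG_CALL = re.compile(
    r'\b(?:log|logger|LOG|LOGGER)\.(?:trace|debug|info|warn|error|fatal)\s*\(\s*"([^"]*)"'
)

def collect_duplicate_log_messages(root: str):
    """Group logging statements by their static text message."""
    occurrences = defaultdict(list)  # static text -> [(file, line_no), ...]
    for java_file in Path(root).rglob("*.java"):
        try:
            lines = java_file.read_text(encoding="utf-8", errors="ignore").splitlines()
        except OSError:
            continue
        for line_no, line in enumerate(lines, start=1):
            match = LOG_CALL.search(line)
            if match:
                occurrences[match.group(1)].append((str(java_file), line_no))
    # Keep only static messages used by more than one logging statement.
    return {msg: locs for msg, locs in occurrences.items() if len(locs) > 1}

if __name__ == "__main__":
    duplicates = collect_duplicate_log_messages(sys.argv[1] if len(sys.argv) > 1 else ".")
    for msg, locations in sorted(duplicates.items(), key=lambda kv: -len(kv[1])):
        print(f'"{msg}" appears {len(locations)} times:')
        for path, line_no in locations:
            print(f"  {path}:{line_no}")
```

    Deciding whether a reported duplicate is problematic or justifiable still requires the kind of manual inspection and developer feedback described in the abstract.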

    Resource Allocation Modeling Framework to Refactor Software Design Smells

    Get PDF
    The study of design flaws in software has created ample opportunity for researchers. These design flaws, i.e., code smells, hinder the quality of software in many ways. Once a smell is detected, the affected segment of the software has to go through refactoring steps to remove it. To better understand this process, the authors model the smell detection mechanism using an NHPP (non-homogeneous Poisson process) modeling framework. The authors further investigate how much resource/effort should be allotted to the various code smell categories, formulating an optimization problem for this purpose and validating it on a real-life smell data set from an open-source software system. The obtained results are within an acceptable range and justify the applicability of the model.
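    The abstract describes an NHPP-based view of smell detection together with an effort-allocation problem across smell categories. As a rough illustration only, the sketch below fits a Goel-Okumoto mean value function (one common NHPP form, assumed here rather than taken from the paper) to synthetic cumulative smell counts and then splits a fixed effort budget in proportion to the expected residual smells per category; the data, the model form, and the proportional allocation rule are all illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """NHPP mean value function m(t) = a * (1 - exp(-b * t)):
    'a' is the expected total number of smells, 'b' the detection rate."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative smell-detection counts per category over 8 releases.
releases = np.arange(1, 9)
categories = {
    "god_class":    np.array([4, 9, 13, 16, 18, 19, 20, 20]),
    "long_method":  np.array([10, 19, 26, 31, 35, 37, 39, 40]),
    "feature_envy": np.array([2, 5, 7, 9, 10, 11, 11, 12]),
}

total_effort = 100.0  # effort units to distribute across categories
remaining = {}
for name, counts in categories.items():
    # Estimate (a, b) for each category by nonlinear least squares.
    (a, b), _ = curve_fit(goel_okumoto, releases, counts,
                          p0=(counts[-1], 0.3), maxfev=10000)
    remaining[name] = max(a - counts[-1], 0.0)  # smells expected but not yet detected

# Allocate effort in proportion to the expected residual smells per category.
total_remaining = sum(remaining.values()) or 1.0
for name, rem in remaining.items():
    share = total_effort * rem / total_remaining
    print(f"{name}: expected residual ~ {rem:.1f}, allocated effort ~ {share:.1f}")
```

    A proportional rule is only one possible objective; the paper's optimization problem may weight categories differently, e.g., by refactoring cost or severity.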

    Detection of embedded code smells in dynamic web applications

    No full text