42 research outputs found

    Towards Semantic Detection of Smells in Cloud Infrastructure Code

    Automated deployment and management of Cloud applications rely on descriptions of their deployment topologies, often referred to as Infrastructure Code. As the complexity of applications and their deployment models increases, developers inadvertently introduce software smells into such code specifications, for instance, violations of good coding practices, modular structure, and more. This paper presents a knowledge-driven approach enabling developers to identify the aforementioned smells in deployment descriptions. We detect smells with SPARQL-based rules over pattern-based OWL 2 knowledge graphs capturing deployment models. We show the feasibility of our approach with a prototype and three case studies. Comment: 5 pages, 6 figures. The 10th International Conference on Web Intelligence, Mining and Semantics (WIMS 2020).
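    As a hedged illustration of the general idea rather than the paper's actual rules or vocabulary, the sketch below shows how a SPARQL-based smell rule can be evaluated over a small RDF deployment model with rdflib; the ex: namespace, the Component/hasProperty/key/value terms, and the hard-coded-credential rule are hypothetical placeholders. In the paper's setting, the queried graph would be an OWL 2 knowledge graph capturing the deployment model, with one such rule per smell.

```python
# Minimal sketch (not the paper's actual rules): evaluating a SPARQL-based
# smell rule over a toy RDF deployment model using rdflib.
# The ex: namespace and the Component/hasProperty/key/value terms are
# hypothetical placeholders, not taken from the paper.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/deployment#")

g = Graph()
g.bind("ex", EX)

# Toy deployment model: one component carrying a hard-coded secret.
g.add((EX.db, RDF.type, EX.Component))
g.add((EX.db, EX.hasProperty, EX.db_password))
g.add((EX.db_password, EX.key, Literal("admin_password")))
g.add((EX.db_password, EX.value, Literal("admin")))

# Smell rule: flag properties whose key looks like a credential and whose
# value is stored as a plain literal in the deployment description.
SMELL_RULE = """
PREFIX ex: <http://example.org/deployment#>
SELECT ?component ?key ?value WHERE {
    ?component a ex:Component ;
               ex:hasProperty ?prop .
    ?prop ex:key ?key ;
          ex:value ?value .
    FILTER (CONTAINS(LCASE(STR(?key)), "password"))
}
"""

for component, key, value in g.query(SMELL_RULE):
    print(f"Smell: hard-coded credential '{key}' = '{value}' on {component}")
```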

    An Analytical Study of Code Smells

    The software development process involves developing, building, and enhancing high-quality software for specific tasks and, as a consequence, generates a considerable amount of data. This data can be managed systematically, creating knowledge repositories that can be used to competitive advantage. Lessons learned as part of the development process can also be part of the knowledge bank and can be used to advantage in subsequent projects by developers and software practitioners. Code smells are a group of symptoms which reveal that code is not good enough and requires action to be cleaned up. Software metrics help to detect code smells, while refactoring methods are used for removing them. Furthermore, various tools are available for detecting code smells. A code smell repository organizes all the available knowledge in the literature about code smells and related concepts. This paper presents an analytical study of code smells which extracts useful, actionable, and indicative knowledge.

    On the Feasibility of Transfer-learning Code Smells using Deep Learning

    Context: A substantial amount of work has been done to detect smells in source code using metrics-based and heuristics-based methods. Machine learning methods have recently been applied to detect source code smells; however, the current practices are considered far from mature. Objective: First, explore the feasibility of applying deep learning models to detect smells without extensive feature engineering, just by feeding the source code in tokenized form. Second, investigate the possibility of applying transfer-learning in the context of deep learning models for smell detection. Method: We use existing metric-based state-of-the-art methods for detecting three implementation smells and one design smell in C# code. Using these results as the annotated gold standard, we train smell detection models on three different deep learning architectures. These architectures use Convolutional Neural Networks (CNNs) of one or two dimensions, or Recurrent Neural Networks (RNNs), as their principal hidden layers. For the first objective of our study, we perform training and evaluation on C# samples, whereas for the second objective, we train the models on C# code and evaluate them on Java code samples. We perform the experiments with various combinations of hyper-parameters for each model. Results: We find it feasible to detect smells using deep learning methods. Our comparative experiments find that there is no clearly superior method between CNN-1D and CNN-2D. We also observe that the performance of the deep learning models is smell-specific. Our transfer-learning experiments show that transfer-learning is definitely feasible for implementation smells, with performance comparable to that of direct-learning. This work opens up a new paradigm for detecting code smells by transfer-learning, especially for programming languages where comprehensive code smell detection tools are not available.
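    To make the tokenized-code setup concrete, here is a minimal, hedged sketch of a CNN-1D smell classifier in Keras; the vocabulary size, sequence length, layer sizes, and training data are illustrative assumptions, not the study's actual architecture or hyper-parameters. For the transfer-learning scenario, the same trained model would be reused and evaluated, or fine-tuned, on Java fragments tokenized with the same vocabulary.

```python
# Minimal sketch of a 1-D CNN over tokenized source code for binary smell
# detection, in the spirit of the study; all sizes and hyper-parameters
# below are illustrative assumptions, not the paper's configuration.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000   # number of distinct code tokens (assumed)
MAX_LEN = 200       # tokens per code fragment, padded/truncated (assumed)

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),         # token ids -> dense vectors
    layers.Conv1D(filters=64, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # smelly vs. non-smelly
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for tokenized, padded C# fragments and their labels.
x_train = np.random.randint(0, VOCAB_SIZE, size=(128, MAX_LEN))
y_train = np.random.randint(0, 2, size=(128, 1))
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)

# Transfer-learning idea: evaluate or fine-tune this trained model on
# tokenized Java fragments encoded with the same vocabulary.
```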

    On Understanding the Relation of Knowledge and Confidence to Requirements Quality

    Context and Motivation: Software requirements are affected by the knowledge and confidence of software engineers. Analyzing the interrelated impact of these factors is difficult because of the challenges of assessing knowledge and confidence. Question/Problem: This research aims to draw attention to the need for considering the interrelated effects of confidence and knowledge on requirements quality, which has not been addressed by previous publications. Principal ideas/results: For this purpose, the following steps have been taken: 1) requirements quality was defined based on the instructions provided by the ISO 29148:2011 standard, 2) the symptoms of low-quality requirements were selected based on ISO 29148:2011, 3) five Software Requirements Specification (SRS) documents were analyzed to find these symptoms, 4) the people who had prepared the documents were categorized into four classes to specify how much knowledge and confidence they had regarding the symptoms, and 5) finally, the relation of insufficient knowledge and confidence to the symptoms of low quality was investigated. The results revealed that a simultaneous deficiency of confidence and knowledge has more negative effects than a deficiency of knowledge or confidence alone. Contribution: In brief, this study has achieved these results: 1) the realization that a combined lack of knowledge and confidence has a larger effect on requirements quality than either factor alone, 2) the relation between low-quality requirements and requirements engineers' needs for knowledge and confidence, and 3) the variety of requirements engineers' needs for knowledge, based on their abilities to make discriminative and consistent decisions. Comment: Preprint accepted for publication at the 27th International Working Conference on Requirements Engineering: Foundation for Software Quality.

    Experiences on Managing Technical Debt with Code Smells and AntiPatterns

    Technical debt has become a common metaphor for the accumulation of software design and implementation choices that seek fast initial gains but are below par and counterproductive in the long run. However, as a metaphor, technical debt does not offer actionable advice on how to get rid of it. To get to a practical level in solving problems, more focused mechanisms are needed. Commonly used approaches for this include identifying code smells as quick indications of possible problems in the codebase and detecting the presence of AntiPatterns that refer to overt, recurring problems in design. There are known remedies for both code smells and AntiPatterns. In this paper, our goal is to show how to effectively use common tools and the existing body of knowledge on code smells and AntiPatterns to detect technical debt and pay it back. We present two main results: (i) how a combination of static code analysis and manual inspection was used to detect code smells in a codebase, leading to the discovery of AntiPatterns; and (ii) how AntiPatterns were used to identify, characterize, and fix problems in the software. The experiences stem from a private company and its long-lasting software product development effort.

    A Triple Bottom-line Typology of Technical Debt: Supporting Decision-Making in Cross-Functional Teams

    Technical Debt (TD) is a widely discussed metaphor in IT practice focused on increased short-term benefit in exchange for long-term ‘debt’. While it is primarily individuals or groups inside IT departments who make the decisions to take on TD, we find that the effects of TD stretch across the entire organisation. Decisions to take on TD should therefore concern a wider group. However, business leaders have traditionally lacked awareness of the effects of what they perceive to be ‘technology decisions’. To facilitate TD as group-based decision-making, we review existing literature to develop a typology of the wider impacts of TD. The goal is to help technologists, non-technologists, and academics have a broader and shared understanding of TD and to facilitate more participatory and transparent technology-related decision-making. We extend the typology to include a wider ‘outside in’ perspective and conclude by suggesting areas for further research.

    Quality in software development from a bibliometric perspective in the period 2018–2022

    This research presents a bibliometric analysis of software engineering, motivated by the need to understand how the development process has evolved and how companies have adopted, adapted, and standardized it to build quality software. The objective of this work is to carry out a bibliometric study of the scientific production of quality-oriented software engineering, whose results will serve as a basis for future research. The methodology was descriptive and retrospective, with data collected from the Scopus database; several parameters were used to define the inclusion and exclusion criteria, yielding 144 articles, and multiple research questions were posed to reveal the main trends through descriptive analysis. The analysis identified IEEE Access as the journal with the most published articles, Mkaouer Mohamed Wiem as the most relevant author on the subject, and the United States as the country with the highest number of publications, among other indicators. In addition, the importance of this type of research and the preferences of researchers were assessed, as evidenced by the number of publications produced in the period 2018–2022.

    A novel approach for code smell detection : an empirical study

    Code smell detection helps in improving the understandability and maintainability of software while reducing the chances of system failure. In this study, six machine learning algorithms have been applied to predict code smells. For this purpose, four code smell datasets (God-class, Data-class, Feature-envy, and Long-method) are considered, which are generated from 74 open-source systems. To evaluate the performance of the machine learning algorithms on these code smell datasets, a 10-fold cross-validation technique is applied, which partitions the original dataset into a training set to fit each model and a test set to evaluate it. Two feature selection techniques, Chi-squared and Wrapper-based, are applied to improve the accuracy of all six machine learning methods by choosing the top metrics in each dataset, and the results obtained with the two techniques are compared. To further improve the accuracy of these algorithms, a grid search-based parameter optimization technique is applied. In this study, 100% accuracy was obtained for the Long-method dataset using the Logistic Regression algorithm with all features, while the worst performance (95.20%) was obtained by the Naive Bayes algorithm for the Long-method dataset using the Chi-squared feature selection technique.
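    The evaluation pipeline described above, combining Chi-squared feature selection, grid search-based parameter tuning, and 10-fold cross-validation, can be sketched with scikit-learn as follows; the synthetic data, parameter grid, and chosen k values are illustrative assumptions, not the study's actual metrics or settings.

```python
# Minimal sketch of the evaluation setup: chi-squared feature selection,
# logistic regression, grid-search parameter tuning, and 10-fold CV.
# The synthetic data and parameter grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for a code smell dataset (e.g., Long-method):
# rows are code elements, columns are software metrics, label = smelly or not.
X, y = make_classification(n_samples=400, n_features=20, random_state=42)

pipeline = Pipeline([
    ("scale", MinMaxScaler()),             # chi2 requires non-negative features
    ("select", SelectKBest(score_func=chi2)),
    ("clf", LogisticRegression(max_iter=1000)),
])

param_grid = {
    "select__k": [5, 10, "all"],           # number of top metrics to keep
    "clf__C": [0.1, 1.0, 10.0],            # regularization strength
}

search = GridSearchCV(pipeline, param_grid, cv=10, scoring="accuracy")
search.fit(X, y)
print("Best params:", search.best_params_)
print("Best 10-fold CV accuracy:", round(search.best_score_, 3))
```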