4 research outputs found

    A drill-down approach for measuring maintainability at source code element level

    Measuring source code maintainability has always been a challenge for software engineers. To address this problem, researchers have proposed a number of metrics-based quality models. Besides expressing source code maintainability as numerical values, these models are also expected to provide explicable results, i.e., a detailed list of source code fragments that should be improved to reach higher overall quality. In this paper, we propose a general method for drilling down to the root causes of a quality rating. In our approach, a relative maintainability index is calculated for each source code element for which metrics are computed (e.g., methods, classes); the index value expresses the source code element's contribution to the overall quality rating. We empirically validated the method on the jEdit open source tool by comparing its results with the opinions of software engineering students. The case study shows a high Spearman's correlation of 0.68, which suggests that the relative maintainability indices assigned by our method reflect the subjective judgments of humans fairly well.
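    The validation step described above can be illustrated with a minimal, stdlib-only sketch of Spearman's rank correlation; all data values below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: comparing model-assigned relative maintainability
# indices with human ratings via Spearman's rank correlation, as in the
# jEdit case study. All data values below are invented.

def ranks(values):
    """Assign 1-based ranks; ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Relative maintainability indices of five example classes (model output)
# and average ratings of the same classes by student reviewers.
model_index = [-0.42, 0.10, -0.05, 0.31, -0.18]
human_rating = [2.1, 3.2, 4.0, 4.5, 2.8]
print(f"Spearman's rho = {spearman(model_index, human_rating):.2f}")
```

    A rho near 1 means the model's ordering of elements closely matches the human ordering, which is the sense in which the paper reports its 0.68 agreement.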

    Case Study for the vudc R Package

    In this study we present the usage of the Cumulative Characteristic Diagram and the Quantile Difference Diagram, implemented in the vudc R package, using the results of our research on the connection between version control history data and maintainability. With the help of these diagrams, we illustrate the results of five studies in which we performed contingency chi-squared tests, Wilcoxon rank tests, and variance tests. We were motivated by the question: how do these diagrams support the numeric results? We found that the diagrams clearly supported the results of the statistical tests; furthermore, they revealed other important connections that the tests left hidden.
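    As a rough sketch of the data behind the two diagram types, assume (this assumption is ours, not the paper's) that a cumulative characteristic diagram plots cumulative sums of the values sorted in decreasing order, and a quantile difference diagram plots the difference of two samples' quantiles at matched probabilities; the vudc package's actual implementation may differ in details.

```python
# Hypothetical stdlib-only sketch of the data transformations behind the
# two diagram types. The actual vudc R implementation may differ.

def cumulative_characteristic(values):
    """Cumulative sums of the values sorted in decreasing order."""
    out, total = [], 0.0
    for v in sorted(values, reverse=True):
        total += v
        out.append(total)
    return out

def quantile(sorted_values, p):
    """Simple nearest-rank quantile of an already sorted sample."""
    idx = min(int(p * len(sorted_values)), len(sorted_values) - 1)
    return sorted_values[idx]

def quantile_differences(a, b, steps=5):
    """quantile(a, p) - quantile(b, p) for p = 0, 1/steps, ..., 1."""
    sa, sb = sorted(a), sorted(b)
    return [quantile(sa, i / steps) - quantile(sb, i / steps)
            for i in range(steps + 1)]

# Invented example: maintainability changes of commits by two author groups.
group_a = [0.4, -0.1, 0.9, 0.2, -0.3]
group_b = [0.1, -0.6, 0.3, -0.2, 0.0]
print(cumulative_characteristic(group_a))
print(quantile_differences(group_a, group_b))
```

    Plotting the first list against its index gives a curve whose shape reveals where the large positive or negative contributions sit, while the second list shows, across the distribution, how far one group's values run above or below the other's.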

    Maintainability of classes in terms of bug prediction

    Measuring software product maintainability is a central issue in software engineering that has led to a number of practical quality models. Besides system-level assessments, it is also desirable that these models provide technical quality information at the source code element level (e.g., classes, methods) to aid the improvement of the software. Although many existing models give an ordered list of source code elements that should be improved, it is unclear how these elements relate to other important quality indicators of the system, e.g., bug density. In this paper we empirically investigate the bug prediction capabilities of the class-level maintainability measures of our ColumbusQM probabilistic quality model using the open-access PROMISE bug dataset. We show that, in terms of correctness and completeness, ColumbusQM competes with statistical and machine learning prediction models trained specifically on the bug data using product metrics as predictors. This is a notable achievement given that our model needs no training and serves a different purpose (e.g., estimating testability or development costs) than bug prediction models.
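    The correctness/completeness evaluation mentioned above can be sketched in a few lines, assuming (as is common in bug prediction studies, though not confirmed by this abstract) that correctness corresponds to precision and completeness to recall over the set of classes flagged as bug-prone; the class names below are invented.

```python
# Hypothetical sketch of a correctness/completeness evaluation, assuming
# correctness = precision and completeness = recall over flagged classes.
# All class names and labels below are invented.

def correctness_completeness(predicted, actual):
    """Precision (correctness) and recall (completeness) of a flagged set."""
    predicted, actual = set(predicted), set(actual)
    hits = predicted & actual
    correctness = len(hits) / len(predicted) if predicted else 0.0
    completeness = len(hits) / len(actual) if actual else 0.0
    return correctness, completeness

# Classes a maintainability model flags as bug-prone vs. those actually buggy.
flagged = {"Parser", "Buffer", "View"}
buggy = {"Parser", "Buffer", "Registers", "TextArea"}
c, comp = correctness_completeness(flagged, buggy)
print(f"correctness = {c:.2f}, completeness = {comp:.2f}")
```

    A model with high correctness wastes little review effort on clean classes, while high completeness means few genuinely buggy classes slip past unflagged; the two typically trade off against each other.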

    Annales Mathematicae et Informaticae (46.)
