
    The Quamoco product quality modelling and assessment approach

    Published software quality models either provide abstract quality attributes or concrete quality assessments. There are no models that seamlessly integrate both aspects. In the Quamoco project, we built a comprehensive approach with the aim of closing this gap. For this, we developed, in several iterations, a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments that operationalise quality through measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine with 200 factors and 600 measures for Java and C# systems. In several empirical validations, we found that the assessment results match the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which is consistent with the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and can hence form the basis for repeatable and understandable quality assessments in practice.
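    To illustrate the core idea of product factors bridging concrete measurements and abstract quality aspects, the following is a minimal sketch of a Quamoco-style aggregation. All names, weights, and utility ranges are hypothetical and do not reflect the actual base model or toolchain: raw measure values are normalised to [0, 1] utilities and rolled up through weighted sums from product factors to an ISO 25010-style quality aspect.

```python
# Minimal sketch of a Quamoco-style aggregation (hypothetical names and
# weights, not the original toolchain): raw measure values are mapped to
# [0, 1] utilities and aggregated via weighted sums up to a quality aspect.

def linear_utility(value, worst, best):
    """Map a raw measure value to a [0, 1] utility, where 1.0 is best."""
    if worst == best:
        return 1.0
    u = (value - worst) / (best - worst)
    return max(0.0, min(1.0, u))

def weighted_sum(children):
    """Aggregate (weight, score) pairs into a single score."""
    total = sum(w for w, _ in children)
    return sum(w * s for w, s in children) / total

# Measures operationalising product factors (tool- or inspection-based).
clone_coverage = linear_utility(value=0.12, worst=0.5, best=0.0)   # lower is better
comment_ratio  = linear_utility(value=0.25, worst=0.0, best=0.4)   # higher is better
nesting_depth  = linear_utility(value=3.0,  worst=10.0, best=1.0)  # lower is better

# Product factors bridge concrete measurements and abstract quality aspects.
duplication    = weighted_sum([(1.0, clone_coverage)])
documentation  = weighted_sum([(1.0, comment_ratio)])
structuredness = weighted_sum([(1.0, nesting_depth)])

# ISO 25010-style quality aspect as a weighted aggregation of product factors.
maintainability = weighted_sum([
    (0.4, duplication),
    (0.3, documentation),
    (0.3, structuredness),
])
print(f"maintainability score: {maintainability:.2f}")
```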

    rust-code-analysis: A Rust library to analyze and extract maintainability information from source codes

    The literature proposes many software metrics for evaluating non-functional properties of source code, such as its complexity and maintainability. It also proposes several tools to compute those properties for code written in many different programming languages. However, the emergence of the Rust language has not been matched by a comparable community effort to develop parsers and tools able to compute metrics for Rust source code. In addition, metrics tools often fall short in providing immediate means of comparing maintainability metrics between different algorithms or programming languages. We therefore introduce rust-code-analysis, a Rust library that allows the extraction of a set of eleven maintainability metrics for ten different languages, including Rust. Through the abstract syntax tree (AST) of a source file, rust-code-analysis allows inspecting the code structure, analyzing source code metrics at different levels of granularity, and finding syntax errors before compile time. The tool also offers a command-line interface that allows exporting the results in different formats. The ability to analyze source code written in different programming languages enables simple and systematic comparisons between the metrics produced from different empirical and large-scale analysis sources.
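    The sketch below is a conceptual illustration of what AST-based metric extraction means, written against Python's standard ast module rather than the rust-code-analysis API (whose actual types and functions are not shown here): parse source code into a syntax tree and derive simple maintainability-related metrics from its structure.

```python
# Conceptual illustration of AST-based metric extraction (not the
# rust-code-analysis API): parse source code into a syntax tree, then
# derive simple maintainability-related metrics from its structure.
import ast

SOURCE = '''
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x % 2 == 0:
            return "even"
    return "odd"
'''

def cyclomatic_complexity(tree):
    """Count decision points plus one, a rough McCabe-style approximation."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

tree = ast.parse(SOURCE)
print("cyclomatic complexity:", cyclomatic_complexity(tree))
print("function count:", sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree)))
```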

    Are comprehensive quality models necessary for evaluating software quality?

    The concept of software quality is very complex and has many facets. Reflecting all of these facets while measuring everything related to them results in comprehensive but large quality models and extensive measurements. In contrast, there are also many smaller, focused quality models claiming to evaluate quality with only a few measures. We investigate if, and to what extent, it is possible to build a focused quality model with evaluation results similar to those of a comprehensive quality model but with far fewer measures to collect and, hence, reduced effort. We perform quality evaluations with the comprehensive Quamoco base quality model and build focused quality models based on the same set of measures and data from over 2,000 open source systems. We analyse the ability of the focused models to predict the results of the Quamoco model by comparing them with a random predictor as a baseline, and we calculate the standardised accuracy measure SA and effect sizes. We found that for the Quamoco model and its 378 automatically collected measures, we can build a focused model with only 10 measures that achieves an accuracy of 61% and a medium to high effect size. We conclude that we can build focused quality models that give an impression of a system's quality similar to that of comprehensive models. However, when including manually collected measures, the accuracy of the models stayed below 50%. Hence, manual measures seem to have a high impact and should not be ignored in a focused model.
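    As a concrete illustration of the evaluation, the sketch below computes the standardised accuracy SA of a hypothetical focused model against the commonly used random-guess baseline, SA = 1 - MAR / MAR_p0, where MAR is the mean absolute residual; all data and the exact baseline procedure are assumptions and may differ from the paper's setup.

```python
# Sketch of the standardised accuracy (SA) evaluation against a random-guess
# baseline, using the common definition SA = 1 - MAR / MAR_p0 (the paper may
# apply a variant); all quality grades below are made up for illustration.
import random

def mar(actual, predicted):
    """Mean absolute residual between actual and predicted quality grades."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mar_random_baseline(actual, runs=1000, seed=42):
    """Average MAR of a predictor that guesses randomly drawn actual values."""
    rng = random.Random(seed)
    return sum(
        mar(actual, [rng.choice(actual) for _ in actual]) for _ in range(runs)
    ) / runs

# Hypothetical grades: the comprehensive model is treated as the ground truth
# that a focused model (built from only a handful of measures) tries to match.
comprehensive = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 1.5, 3.8]
focused       = [2.4, 3.1, 2.0, 3.6, 3.0, 2.8, 1.9, 3.5]

sa = 1.0 - mar(comprehensive, focused) / mar_random_baseline(comprehensive)
print(f"SA = {sa:.0%}")
```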

    Maintainability of classes in terms of bug prediction

    Measuring software product maintainability is a central issue in software engineering, and it has led to a number of different practical quality models. Besides system-level assessments, it is also desirable that these models provide technical quality information at the level of source code elements (e.g. classes, methods) to aid the improvement of the software. Although many existing models give an ordered list of source code elements that should be improved, it is unclear how these elements are related to other important quality indicators of the system, e.g. bug density. In this paper we empirically investigate the bug prediction capabilities of the class-level maintainability measures of our ColumbusQM probabilistic quality model using the open-access PROMISE bug dataset. We show that, in terms of correctness and completeness, ColumbusQM competes with statistical and machine learning prediction models specifically trained on the bug data using product metrics as predictors. This is a notable achievement given that our model needs no training and that its purpose (e.g. estimating testability or development costs) differs from that of the bug prediction models.
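    The correctness and completeness mentioned above are commonly defined as the fraction of flagged classes that actually contain bugs and the fraction of all bugs that fall into flagged classes, respectively; the sketch below computes both on made-up data and may differ in detail from the paper's exact formulas.

```python
# Sketch of correctness and completeness for class-level bug prediction,
# using one common definition (the paper's exact formulas may differ):
#   correctness  = flagged classes that really contain bugs / all flagged classes
#   completeness = bugs located in flagged classes / all bugs in the system

# Hypothetical data: (class name, number of known bugs, flagged as low quality?)
classes = [
    ("Parser",    3, True),
    ("Scheduler", 0, True),
    ("Cache",     2, True),
    ("Logger",    1, False),
    ("Config",    0, False),
]

flagged = [(name, bugs) for name, bugs, is_flagged in classes if is_flagged]
correctness = sum(1 for _, bugs in flagged if bugs > 0) / len(flagged)
completeness = sum(bugs for _, bugs in flagged) / sum(bugs for _, bugs, _ in classes)

print(f"correctness:  {correctness:.2f}")   # 2 of 3 flagged classes contain bugs
print(f"completeness: {completeness:.2f}")  # 5 of 6 known bugs are in flagged classes
```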

    Quality Properties of Execution Tracing, an Empirical Study

    The quality of execution tracing greatly impacts the time needed to locate errors in software components; moreover, in the majority of cases, execution tracing is the most suitable tool for post-mortem analysis of failures in the field. Nevertheless, software product quality models do not adequately consider execution tracing quality at present, nor do they define the quality properties of this important entity in an acceptable manner. Defining these quality properties would be the first step towards creating a quality model for execution tracing. The current research fills this gap by identifying and defining the variables, i.e. the quality properties, on the basis of which the quality of execution tracing can be judged. The study analyses the experiences of software professionals in focus groups at multinational companies and also scrutinises the literature to elicit the mentioned quality properties. Moreover, it contributes to knowledge with a combination of methods for computing the saturation point that determines the necessary number of focus groups. Furthermore, to pay special attention to validity, in addition to the indicators of qualitative research (credibility, transferability, dependability, and confirmability), the authors also considered content, construct, internal, and external validity.
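    As a rough illustration of saturation-point reasoning (one common approach only; the study combines several methods and may compute it differently), the sketch below tracks how many previously unseen codes each additional focus group contributes and declares saturation when a group adds none. All codes and group contents are invented.

```python
# Rough illustration of a saturation check across focus groups (one common
# approach; the study combines several methods): count how many previously
# unseen codes each additional group contributes and stop when none appear.

# Hypothetical coding results: quality properties mentioned per focus group.
focus_groups = [
    {"accuracy", "completeness", "consistency", "readability"},
    {"accuracy", "readability", "performance overhead"},
    {"consistency", "security", "performance overhead"},
    {"accuracy", "security", "readability"},  # contributes nothing new
]

seen = set()
for i, codes in enumerate(focus_groups, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"group {i}: {len(new_codes)} new code(s) {sorted(new_codes)}")
    if not new_codes:
        print(f"saturation point reached after {i} focus groups")
        break
```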

    Software Product Quality Models, Developments, Trends and Evaluation

    Software product quality models have evolved in their ability to capture and describe the abstract notion of software quality since the 1970s. Many of the models constructed deal with only a specific part of software quality, which makes them unsuitable for assessing the quality of software products as a whole. Former publications failed to thoroughly examine and list all the available models that attempt to describe every known property of software product quality. This paper identifies such complete software product quality models published since 2000; moreover, it endeavours to measure the relevance of each model quantitatively by introducing indicators with regard to the scientific and industrial communities. The 23 identified software product quality model classes differ significantly in terms of publication intensity, publication range, quality score average, relevance score, and the 12-month average of the Google Relative Search Index. The results offer a foundation for selecting the appropriate software product quality model for use, or for extension if newly identified quality properties need to be connected to a general context. Furthermore, the experience accumulated in the field of software product quality modelling has motivated researchers to successfully transfer the concepts to other areas where abstract entities need to be compared or assessed, including the quality of higher education teaching and business processes, which is also briefly highlighted in the paper.