New ideas and emerging research: evaluating prediction system accuracy
BACKGROUND: Prediction, e.g. of project cost, is an important concern in software engineering. PROBLEM: Although many empirical validations of software engineering prediction systems have been published, no one approach dominates, and making sense of conflicting empirical results is proving challenging. METHOD: We propose a new approach to evaluating competing prediction systems based upon an unbiased statistic (Standardised Accuracy), analysis of results relative to the baseline technique of guessing, and calculation of effect sizes. RESULTS: Two empirical studies are revisited and the published results are shown to be misleading when re-analysed using our new approach. CONCLUSION: Biased statistics such as MMRE are deprecated. By contrast, our approach leads to valid results. Such steps will greatly assist in performing future meta-analyses.
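The Standardised Accuracy (SA) statistic referenced above is commonly defined relative to random guessing, as SA = (1 - MAR/MAR_p0) x 100, where MAR is the mean absolute residual of the prediction system and MAR_p0 is the mean MAR obtained by repeatedly predicting each case with another case's actual value. A minimal Python sketch, using illustrative effort values rather than data from the revisited studies:

```python
import random

def mmre(actual, predicted):
    # Mean Magnitude of Relative Error: known to be biased toward
    # models that systematically under-estimate
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def mar(actual, predicted):
    # Mean Absolute Residual: an unbiased accuracy measure
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def standardised_accuracy(actual, predicted, runs=1000, seed=0):
    # SA = (1 - MAR / MAR_p0) * 100, where MAR_p0 estimates the MAR of
    # random guessing: predict each case with another case's actual value
    rng = random.Random(seed)
    guesses = []
    for _ in range(runs):
        preds = [rng.choice([x for j, x in enumerate(actual) if j != i])
                 for i in range(len(actual))]
        guesses.append(mar(actual, preds))
    mar_p0 = sum(guesses) / len(guesses)
    return (1 - mar(actual, predicted) / mar_p0) * 100

actual = [100, 250, 400, 800, 1200]
predicted = [120, 230, 380, 900, 1100]
print(round(mmre(actual, predicted), 3))
print(round(standardised_accuracy(actual, predicted), 1))
```

A positive SA means the predictor beats random guessing; MMRE, by contrast, can reward systematic under-estimation.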
Analysis of the influence to productivity of software corrective maintenance using an economic model
© 2014 IEEE. This paper presents an economic model for the productivity of software corrective maintenance. Productivity is modeled using the economic value of the maintenance process as the output, and the pre-committed fixed cost and variable cost as inputs. The relationship between the economic value and these cost components is modeled using the analytical theory of investment. The values of the corrective maintenance process are analyzed. A simulation approach is demonstrated to analyze the influences on productivity in corrective maintenance. This approach provides a tool to identify and analyze the optimal parameters for productivity using the economic model and simulation.
Analyzing the Non-Functional Requirements in the Desharnais Dataset for Software Effort Estimation
Studying the quality requirements (aka Non-Functional Requirements (NFR)) of a system is crucial in Requirements Engineering. Many software projects fail because the NFR are neglected or not incorporated during the software development life cycle. This paper focuses on analyzing the importance of the quality requirements attributes in software effort estimation models based on the Desharnais dataset. The Desharnais dataset is a collection of eighty-one software projects with twelve attributes, developed by a Canadian software house. The analysis includes studying the influence of each of the quality requirements attributes, as well as the influence of all quality requirements attributes combined, when calculating software effort using regression and Artificial Neural Network (ANN) models. The evaluation criteria used in this investigation include the Mean of the Magnitude of Relative Error (MMRE), the Prediction Level (PRED), Root Mean Squared Error (RMSE), Mean Error and the Coefficient of Determination (R2). Results show that the quality attribute "Language" is the most statistically significant when calculating software effort. Moreover, if all quality requirements attributes are eliminated in the training stage and software effort is predicted based on software size only, the value of the error (MMRE) is doubled.
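The evaluation criteria named in this abstract have standard textbook definitions; the sketch below computes them for illustrative actual/predicted effort values (not the Desharnais data):

```python
def evaluation_metrics(actual, predicted, pred_level=0.25):
    # Standard accuracy statistics for effort estimation models
    n = len(actual)
    mre = [abs(a - p) / a for a, p in zip(actual, predicted)]
    mmre = sum(mre) / n                                  # Mean Magnitude of Relative Error
    pred = sum(1 for e in mre if e <= pred_level) / n    # PRED(25): share of MRE <= 0.25
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    me = sum(a - p for a, p in zip(actual, predicted)) / n   # Mean Error
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot                             # Coefficient of determination
    return {"MMRE": mmre, "PRED": pred, "RMSE": rmse, "ME": me, "R2": r2}

metrics = evaluation_metrics([100, 200, 300, 400], [110, 180, 330, 390])
print(metrics)  # MMRE = 0.08125, PRED(25) = 1.0
```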
Estimating Productivity of Software Development Using the Total Factor Productivity Approach
The design, control and optimization of software engineering processes generally require the determination of performance measures such as efficiency or productivity. However, the definition and measurement of productivity is often inaccurate and differs from one method to another. On the other hand, economic theory offers a well-grounded tool of productivity measurement. In this article, we propose a model of process productivity measurement based on the total factor productivity (TFP) approach commonly used in economics. In the first part of the article, we define productivity and its measurement. We also discuss the major data issues which have to be taken into consideration. Consequently, we apply the TFP approach in the domain of software engineering and we propose a TFP model of productivity assessment.
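The abstract does not reproduce the model itself; one common TFP formulation in economics is the Cobb-Douglas residual, sketched here with illustrative inputs (a capital measure such as tooling cost and a labour measure such as person-months) and an assumed output elasticity alpha:

```python
def total_factor_productivity(output, capital, labour, alpha=0.3):
    # Cobb-Douglas residual: TFP = Y / (K^alpha * L^(1 - alpha)),
    # i.e. output not explained by the weighted combination of inputs.
    # alpha (capital's output elasticity) is an assumed parameter here.
    return output / (capital ** alpha * labour ** (1 - alpha))

# Illustrative release: 500 function points delivered using
# 100 units of capital (tooling) and 10 person-months of labour.
print(round(total_factor_productivity(500, 100, 10), 2))
```

Under constant returns to scale, doubling both inputs while holding output fixed halves the TFP residual, which is what makes it usable as a productivity comparison across projects.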
Reliability and validity in comparative studies of software prediction models
Empirical studies on software prediction models do not converge with respect to the question "which prediction model is best?" The reason for this lack of convergence is poorly understood. In this simulation study, we have examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross validation. Typically, these empirical studies compare a machine learning model with a regression model. In our study, we use simulation and compare a machine learning and a regression model. The results suggest that it is the research procedure itself that is unreliable. This lack of reliability may strongly contribute to the lack of convergence. Our findings thus cast some doubt on the conclusions of any study of competing software prediction models that used this research procedure as a basis of model comparison. Thus, we need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.
Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy
Soft computing-based estimation by analogy is an attractive research domain for the software engineering research community, and a considerable number of models have been proposed in this area. Researchers are therefore interested in comparing the models to identify the best one for software development effort estimation. This research showed that most of the studies used mean magnitude of relative error (MMRE) and percentage of prediction (PRED) to compare their estimation models. Yet it was also found in this study that accuracy statistics like MMRE and PRED have attracted considerable criticism from renowned authors: MMRE is an unbalanced, biased, and inappropriate performance measure for identifying the best among competing estimation models. The accuracy statistics MMRE and PRED are nonetheless still adopted in evaluation criteria by domain researchers, on the grounds of being "widely used," which is not a valid reason. This study identified that researchers keep adopting these measures because no practical replacement for MMRE and PRED has been provided so far. The approach of partitioning a large dataset into subsamples was tried in this paper using an estimation by analogy (EBA) model. One small and one large dataset were considered: Desharnais and ISBSG release 11, the latter being large relative to Desharnais. The ISBSG dataset was partitioned into subsamples. The results suggested that when the large dataset is partitioned, the MMRE produces the same or nearly the same results as it produces for the small dataset. It is observed that MMRE can be trusted as a performance metric if large datasets are partitioned into subsamples.
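The partitioning idea described above can be sketched as follows: shuffle the paired (actual, predicted) observations, split them into fixed-size subsamples, and compare the per-subsample MMRE values with the whole-dataset MMRE. Names and sizes here are illustrative, not the paper's protocol:

```python
import random

def mmre(actual, predicted):
    # Mean Magnitude of Relative Error over paired observations
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def subsample_mmre(actual, predicted, subsample_size, seed=0):
    # Shuffle the paired observations, partition them into fixed-size
    # subsamples, and report MMRE per subsample plus the overall MMRE.
    rng = random.Random(seed)
    pairs = list(zip(actual, predicted))
    rng.shuffle(pairs)
    chunks = [pairs[i:i + subsample_size]
              for i in range(0, len(pairs), subsample_size)]
    per_chunk = [mmre([a for a, _ in c], [p for _, p in c]) for c in chunks]
    return mmre(actual, predicted), per_chunk
```

With equal-size subsamples, the mean of the per-subsample MMREs equals the whole-dataset MMRE, so stable per-subsample values indicate the statistic is behaving consistently across partitions.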
Analytical model of information system development productivity in adaptive and perfective maintenance phases
Copyright © 2018 Inderscience Enterprises Ltd. This paper presents an analytical model for information system (IS) maintenance productivity in the adaptive and perfective phases. The modelling approach is from an economic viewpoint. The productivity model considers the economic value of the maintenance phase, the pre-committed fixed cost, and the variable cost consumed in adaptive/perfective maintenance. Factors influencing productivity are analysed using simulation. The simulation provides a tool for IS project managers to tune project parameters to obtain optimal productivity in the adaptive/perfective maintenance phases.
Identifying Effort Estimation Factors for Corrective Maintenance in Object-Oriented Systems
This research explores the decision-making process of expert estimators of corrective maintenance projects by using qualitative methods to identify the factors that they use in deriving estimates. We implement a technique called causal mapping, which allows us to identify the cognitive links between the information that estimators use and the estimates that they produce based on that information. Results suggest that a total of 17 factors may be relevant for corrective maintenance effort estimation, covering constructs related to developers, code, defects, and environment. This line of research aims at addressing the limitations of existing maintenance estimation models that do not incorporate a number of soft factors and thus achieve less accurate estimates than human experts.
CSM-424- Evolutionary Complexity: Investigations into Software Flexibility
Flexibility has been hailed as a desirable quality since the earliest days of software engineering. Classic and modern literature suggest that particular programming paradigms, architectural styles and design patterns are more "flexible" than others but stop short of suggesting objective criteria for measuring such claims.
We suggest that flexibility can be measured by applying notions of measurement from computational complexity to the software evolution process. We define evolution complexity (EC) metrics, and demonstrate that:
(a) EC can be used to establish informal claims on software flexibility;
(b) EC can be constant or linear in the size of the change;
(c) EC can be used to choose the most flexible software design policy.
We describe a small-scale experiment designed to test these claims.
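The claim that EC can be constant or linear in the size of a change can be illustrated with a deliberately simplified toy: count the modules a design forces us to touch to accommodate one change. This is an assumption-laden illustration, not the report's formal EC definition; the module and concern names are invented:

```python
def evolution_complexity(design, change):
    # Toy measure: number of modules whose concerns include the change,
    # i.e. modules that must be modified to accommodate it.
    return sum(1 for module, concerns in design.items() if change in concerns)

# Design A localises the "output format" concern in a single module;
# Design B duplicates that concern across every module.
design_a = {"core": {"logic"}, "io": {"output format"}, "ui": {"rendering"}}
design_b = {m: {"logic", "output format"} for m in ("core", "io", "ui")}

print(evolution_complexity(design_a, "output format"))  # constant: 1 module
print(evolution_complexity(design_b, "output format"))  # grows with design size: 3
```

Under this toy measure, Design A keeps the cost of the change constant regardless of how many modules the system grows to, while Design B's cost scales linearly with the number of modules, which is the flavour of distinction the EC metrics aim to formalise.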