
    Analysis of the influence to productivity of software corrective maintenance using an economic model

    © 2014 IEEE. This paper presents an economic model of productivity in software corrective maintenance. Productivity is modeled with the economic value of the maintenance process as the output, and the pre-committed fixed cost and variable cost as the inputs. The relationships between the economic value and these cost components are modeled using the analytical theory of investment. The values of the corrective maintenance process are analyzed, and a simulation approach is demonstrated for analyzing the influences on productivity in corrective maintenance. This approach provides a tool to identify and analyze the optimal productivity parameters using the economic model and simulation.
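    As a reading aid, a minimal sketch of the kind of relation such a model rests on (the symbols V for the economic value of the maintenance process, C_f for the pre-committed fixed cost and C_v for the variable cost are our labels, not necessarily the paper's notation):

        P = \frac{V}{C_f + C_v}

    Under this reading, the simulation varies C_f and C_v (and the drivers of V) to locate the parameter settings that maximize the productivity P.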

    Analyzing the Non-Functional Requirements in the Desharnais Dataset for Software Effort Estimation

    Studying the quality requirements (also known as Non-Functional Requirements (NFR)) of a system is crucial in Requirements Engineering. Many software projects fail because they neglect or fail to incorporate NFR during the software development life cycle. This paper focuses on analyzing the importance of the quality requirements attributes in software effort estimation models based on the Desharnais dataset. The Desharnais dataset is a collection of eighty-one software projects, described by twelve attributes, developed by a Canadian software house. The analysis studies the influence of each quality requirements attribute, as well as the influence of all quality requirements attributes combined, when calculating software effort using regression and Artificial Neural Network (ANN) models. The evaluation criteria used in this investigation include the Mean of the Magnitude of Relative Error (MMRE), the Prediction Level (PRED), the Root Mean Squared Error (RMSE), the Mean Error and the coefficient of determination (R²). Results show that the quality attribute “Language” is the most statistically significant when calculating software effort. Moreover, if all quality requirements attributes are eliminated in the training stage and software effort is predicted from software size only, the error (MMRE) doubles.
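    For reference, the two headline accuracy measures named above have standard definitions; a small, self-contained Python sketch (the variable names and example figures are ours, not the paper's):

        import numpy as np

        def mmre(actual, predicted):
            # Mean Magnitude of Relative Error: mean of |actual - predicted| / actual
            actual = np.asarray(actual, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            return float(np.mean(np.abs(actual - predicted) / actual))

        def pred(actual, predicted, level=0.25):
            # PRED(l): fraction of projects whose relative error is at most `level`
            actual = np.asarray(actual, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            mre = np.abs(actual - predicted) / actual
            return float(np.mean(mre <= level))

        # Example: actual vs. estimated effort for three projects
        print(mmre([1000, 2500, 400], [900, 3000, 550]))   # -> 0.225
        print(pred([1000, 2500, 400], [900, 3000, 550]))   # -> 0.667 (2 of 3 within 25%)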

    Estimating Productivity of Software Development Using the Total Factor Productivity Approach

    The design, control and optimization of software engineering processes generally require the determination of performance measures such as efficiency or productivity. However, the definition and measurement of productivity are often imprecise and differ from one method to another. Economic theory, on the other hand, offers a well-grounded tool for productivity measurement. In this article, we propose a model of process productivity measurement based on the total factor productivity (TFP) approach commonly used in economics. In the first part of the article, we define productivity and its measurement and discuss the major data issues that have to be taken into consideration. We then apply the TFP approach to the domain of software engineering and propose a TFP model of productivity assessment.
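    For orientation, the textbook TFP formulation is the Cobb-Douglas residual; mapping L to labor (person-months of development effort) and K to capital (tooling and infrastructure) is our illustrative reading, not necessarily the article's exact model:

        TFP = \frac{Y}{K^{\alpha} L^{1-\alpha}}

    Here Y is the output of the process (e.g., delivered functionality) and \alpha is the output elasticity of capital; TFP then captures the part of output growth not explained by the measured inputs alone.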

    Reliability and validity in comparative studies of software prediction models

    Empirical studies on software prediction models do not converge on the question "which prediction model is best?", and the reason for this lack of convergence is poorly understood. In this simulation study, we examine a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross validation. Such empirical studies typically compare a machine learning model with a regression model; we make the same comparison, but by simulation. The results suggest that it is the research procedure itself that is unreliable, and this unreliability may contribute strongly to the lack of convergence. Our findings thus cast doubt on the conclusions of any study of competing software prediction models that used this research procedure as the basis of model comparison. We therefore need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.
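    A minimal sketch of the research procedure under scrutiny: one data sample, one accuracy indicator (MMRE here), and cross validation used to compare a machine-learning model against a regression model. The synthetic data and the two scikit-learn models are placeholders of ours, not the simulation design used in the study:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(200, 2000, size=(80, 1))       # stand-in predictor, e.g. project size
        y = 5 * X[:, 0] + rng.normal(0, 100, size=80)  # stand-in outcome, e.g. effort

        def mmre(actual, predicted):
            return float(np.mean(np.abs(actual - predicted) / actual))

        scores = {"regression": [], "machine learning": []}
        for train, test in KFold(n_splits=10, shuffle=True, random_state=1).split(X):
            for name, model in (("regression", LinearRegression()),
                                ("machine learning", KNeighborsRegressor(n_neighbors=3))):
                model.fit(X[train], y[train])
                scores[name].append(mmre(y[test], model.predict(X[test])))

        # A different sample, indicator, or fold split can reverse which model
        # "wins" -- the instability the study attributes to the procedure itself.
        print({name: round(np.mean(vals), 3) for name, vals in scores.items()})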

    Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy

    Soft computing-based estimation by analogy is an active research domain for the software engineering research community, and a considerable number of models have been proposed in this area. Researchers are therefore interested in comparing these models to identify the best one for software development effort estimation. This research shows that most studies use the mean magnitude of relative error (MMRE) and percentage of prediction (PRED) to compare their estimation models, even though renowned authors have raised substantial criticisms of accuracy statistics such as MMRE and PRED. In particular, MMRE has been found to be an unbalanced, biased, and inappropriate performance measure for identifying the best among competing estimation models. Domain researchers nonetheless continue to adopt MMRE and PRED in their evaluation criteria, justifying the choice as "widely used", which is not a valid reason. This study identifies the underlying cause: no practical replacement for MMRE and PRED has been provided so far, so researchers keep adopting these measures. In this paper, the approach of partitioning a large dataset into subsamples was tried using an estimation by analogy (EBA) model. One small and one large dataset were considered, Desharnais and ISBSG release 11, where the ISBSG dataset is large relative to Desharnais. The ISBSG dataset was partitioned into subsamples. The results suggest that when a large dataset is partitioned, MMRE produces the same or nearly the same results as it produces for the small dataset. This indicates that MMRE can be trusted as a performance metric if large datasets are partitioned into subsamples.
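    A sketch of the subsampling idea described above: split a large dataset into partitions of roughly the size of the small dataset and compute MMRE per partition. The array sizes and the synthetic predictions are illustrative assumptions of ours; the paper works with actual Desharnais and ISBSG release 11 data:

        import numpy as np

        def mmre(actual, predicted):
            return float(np.mean(np.abs(actual - predicted) / actual))

        rng = np.random.default_rng(42)
        actual = rng.uniform(500, 10000, size=800)            # stand-in for a large dataset
        predicted = actual * rng.uniform(0.7, 1.3, size=800)  # some estimator's predictions

        # Split into subsamples of roughly Desharnais size (~80 projects) and
        # compare per-partition MMRE with the full-dataset figure.
        parts = zip(np.array_split(actual, 10), np.array_split(predicted, 10))
        for i, (a, p) in enumerate(parts):
            print(f"partition {i}: MMRE = {mmre(a, p):.3f}")
        print(f"full dataset: MMRE = {mmre(actual, predicted):.3f}")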

    Analytical model of information system development productivity in adaptive and perfective maintenance phases

    Copyright © 2018 Inderscience Enterprises Ltd. This paper presents an analytical model of information system (IS) maintenance productivity in the adaptive and perfective phases. The modelling approach takes an economic viewpoint: the productivity model considers the economic value of the maintenance phase together with the pre-committed fixed cost and the variable cost consumed in adaptive/perfective maintenance. Factors influencing productivity are analysed using simulation, which provides a tool for IS project managers to tune project parameters to obtain optimal productivity in the adaptive/perfective maintenance phases.

    Identifying Effort Estimation Factors for Corrective Maintenance in Object-Oriented Systems

    This research explores the decision-making process of expert estimators of corrective maintenance projects by using qualitative methods to identify the factors that they use in deriving estimates. We implement a technique called causal mapping, which allows us to identify the cognitive links between the information that estimators use and the estimates that they produce based on that information. Results suggest that a total of 17 factors may be relevant for corrective maintenance effort estimation, covering constructs related to developers, code, defects, and environment. This line of research aims at addressing the limitations of existing maintenance estimation models that do not incorporate a number of soft factors, thus achieving less accurate estimates than human experts.

    CSM-424- Evolutionary Complexity: Investigations into Software Flexibility

    Flexibility has been hailed as a desirable quality since the earliest days of software engineering. Classic and modern literature suggest that particular programming paradigms, architectural styles and design patterns are more “flexible” than others but stop short of suggesting objective criteria for measuring such claims. We suggest that flexibility can be measured by applying notions of measurement from computational complexity to the software evolution process. We define evolution complexity (EC) metrics and demonstrate that: (a) EC can be used to establish informal claims on software flexibility; (b) EC can be constant or linear in the size of the change; (c) EC can be used to choose the most flexible software design policy. We describe a small-scale experiment designed to test these claims.
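    As a toy illustration of claim (b) (ours, not the report's experiment): if accommodating a change \Delta forces edits in k(\Delta) of a system's n modules, the two regimes look like

        EC(\Delta) = O(1)  when the change is confined to one well-encapsulated module,
        EC(\Delta) = O(n)  when the change ripples through every module,

    so under this reading a more flexible design policy is one that keeps EC constant rather than linear in the extent of the change.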