5 research outputs found

    A New Calibration for Function Point Complexity Weights

    Function Point (FP) is a useful software metric that was first proposed twenty-five years ago; since then, it has steadily evolved into a functional size metric consolidated in the well-accepted International Function Point Users Group (IFPUG) Counting Practices Manual, version 4.2. While the software development industry has grown rapidly, the weight values assigned to counting standard FP have remained the same, which raises critical questions about their validity. In this paper, we discuss the concept of calibrating Function Points, with the aims of estimating a more accurate software size that fits specific software applications, reflecting software industry trends, and improving the cost estimation of software projects. We propose an FP calibration model called the Neuro-Fuzzy Function Point Calibration Model (NFFPCM), which integrates the learning ability of neural networks with the ability of fuzzy logic to capture human knowledge. Empirical validation using release 8 of the International Software Benchmarking Standards Group (ISBSG) data repository shows a 22% improvement in mean magnitude of relative error (MRE) for software effort estimation after calibration.
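
    A minimal sketch of the two mechanics this abstract relies on: how complexity weights enter an unadjusted FP count, and how mean MRE is computed to compare estimates before and after calibration. The weight matrix follows the standard IFPUG values, but the helper names and sample data are illustrative assumptions, not the paper's NFFPCM.

    # Illustrative sketch, not the paper's NFFPCM. Shows how complexity
    # weights enter an unadjusted FP count and how mean MRE is computed.

    # IFPUG-style weights per component type: (low, average, high) complexity.
    WEIGHTS = {
        "EI":  (3, 4, 6),    # external inputs
        "EO":  (4, 5, 7),    # external outputs
        "EQ":  (3, 4, 6),    # external inquiries
        "ILF": (7, 10, 15),  # internal logical files
        "EIF": (5, 7, 10),   # external interface files
    }

    def unadjusted_fp(counts, weights=WEIGHTS):
        """counts: {component: (n_low, n_avg, n_high)} -> weighted FP total."""
        return sum(n * w
                   for comp, ns in counts.items()
                   for n, w in zip(ns, weights[comp]))

    def mean_mre(actuals, estimates):
        """Mean magnitude of relative error: mean(|actual - est| / actual)."""
        return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

    # Hypothetical before/after comparison: calibration would replace WEIGHTS
    # with values learned from historical data (e.g. the ISBSG repository).
    counts = {"EI": (5, 2, 1), "EO": (3, 1, 0), "EQ": (2, 2, 0),
              "ILF": (1, 1, 0), "EIF": (0, 1, 0)}
    print(unadjusted_fp(counts))             # size under the standard weights
    print(mean_mre([100, 250], [120, 230]))  # 0.14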

    Predicting software Size and Development Effort: Models Based on Stepwise Refinement

    This study designed a Software Size Model and an Effort Prediction Model, then performed an empirical analysis of both. Each model design began by identifying its objectives, which led to describing the concept to be measured and the meta-model. Numerical assignment rules were then developed, providing a basis for size measurement and effort prediction across software engineering projects. The Software Size Model was designed to test the hypothesis that a software size measure represents the amount of knowledge acquired and stored in software artifacts, and the amount of time it took to acquire and store this knowledge. The Effort Prediction Model is based on the estimation-by-analogy approach and was designed to test the hypothesis that it produces reasonably close predictions when it uses historical data conforming to the Software Size Model. The empirical study implemented each model, collected and recorded software size data from software engineering project deliverables, simulated effort prediction using the jackknife approach, and computed the absolute relative error and magnitude of relative error (MRE) statistics. In this study, 35.3% of the predictions had an MRE at or below 25%, satisfying the criterion established for the study of at least 31% of predictions having an MRE of 25% or less. This study is significant for three reasons. First, no subjective factors were used to estimate effort; eliminating subjective factors removes a source of error in the predictions and makes the study easier to replicate. Second, both models were described using metrology and measurement theory principles, allowing others to consistently implement the models and to modify them while maintaining the integrity of their objectives. Third, the study's hypotheses were validated even though the software artifacts used to collect the software size data varied significantly in both content and quality. Recommendations for further study include applying the Software Size Model to other data-driven estimation models, collecting and using software size data from industry projects, examining alternatives for how text-based software knowledge is identified and counted, and studying the impact of project cycles and project roles on predicting effort.
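
    As a rough illustration of the prediction procedure this abstract describes, the sketch below pairs a nearest-neighbour estimation-by-analogy step with jackknife (leave-one-out) validation and the MRE and Pred(25) statistics. The single size feature, the distance measure, and the sample projects are assumptions for demonstration, not the study's actual Effort Prediction Model.

    # Illustrative sketch: estimation by analogy validated with the jackknife
    # (leave-one-out) approach; details here are assumed, not the study's model.

    def estimate_by_analogy(target_size, history):
        """Predict effort from the historical project closest in size."""
        nearest = min(history, key=lambda p: abs(p["size"] - target_size))
        return nearest["effort"]

    def jackknife_mre(projects):
        """Leave each project out in turn and predict it from the rest."""
        mres = []
        for i, target in enumerate(projects):
            history = projects[:i] + projects[i + 1:]
            est = estimate_by_analogy(target["size"], history)
            mres.append(abs(target["effort"] - est) / target["effort"])
        return mres

    def pred(mres, threshold=0.25):
        """Fraction of predictions whose MRE is at or below the threshold."""
        return sum(m <= threshold for m in mres) / len(mres)

    # Hypothetical history; the study's acceptance criterion corresponds to
    # pred(mres) >= 0.31.
    projects = [{"size": 120, "effort": 300}, {"size": 200, "effort": 520},
                {"size": 90, "effort": 210}, {"size": 310, "effort": 800}]
    mres = jackknife_mre(projects)
    print(f"MMRE = {sum(mres) / len(mres):.2f}, Pred(25) = {pred(mres):.0%}")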