
    Application of mutual information-based sequential feature selection to ISBSG mixed data

    There is still little research in the Software Development Effort Estimation (SDEE) literature on feature selection (FS) techniques that handle both categorical and continuous features. This paper addresses the problem of selecting the most relevant features from the ISBSG (International Software Benchmarking Standards Group) dataset for use in SDEE. The aim is to show the usefulness of splitting the ranked list of features provided by a mutual information-based sequential FS approach into two lists, one of categorical and one of continuous features; these lists are later recombined according to the accuracy of a case-based reasoning model. To this end, four FS algorithms are compared using a complete dataset of 621 projects and 12 features from ISBSG. On the one hand, two of the algorithms consider only relevance, while the remaining two maximize relevance while also minimizing redundancy between each candidate feature and the already selected features. On the other hand, the algorithms that do not discriminate between continuous and categorical features use a single list, whereas those that differentiate them use two lists that are later combined. The algorithms that use two lists perform better than those that use one, so it is meaningful to consider two separate lists of features so that categorical features may be selected more frequently. We also suggest preferring the Application Group, Project Elapsed Time, and First Data Base System features over the more frequently used Development Type, Language Type, and Development Platform.
    Fernández-Diego, M., & González-Ladrón-de-Guevara, F. (2018). Application of mutual information-based sequential feature selection to ISBSG mixed data. Software Quality Journal, 26(4), 1299–1325. https://doi.org/10.1007/s11219-017-9391-5
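
    The relevance-only and relevance-redundancy criteria compared in the paper can be sketched as a greedy forward ranking driven by mutual information. The snippet below is a minimal illustration of an mRMR-style variant, assuming all features have already been discretized; the MI estimator is scikit-learn's mutual_info_score, and the data layout (a dict mapping feature names to label columns) is an assumption for illustration, not the paper's implementation (which used R).

        from sklearn.metrics import mutual_info_score

        def mrmr_rank(X, y):
            """Greedy forward ranking: at each step pick the feature whose
            relevance I(f; y) minus its mean redundancy with the already
            selected features is highest. X maps feature name -> column
            of discrete values; y is the (discretized) target."""
            selected, remaining = [], list(X)
            while remaining:
                def score(f):
                    relevance = mutual_info_score(X[f], y)
                    if not selected:
                        return relevance
                    redundancy = sum(mutual_info_score(X[f], X[s]) for s in selected)
                    return relevance - redundancy / len(selected)
                best = max(remaining, key=score)
                selected.append(best)
                remaining.remove(best)
            return selected

    The two-list variants studied in the paper would apply such a ranking separately to the categorical and the (discretized) continuous features and then recombine the two lists according to the accuracy of the case-based reasoning model.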

    An empirical investigation into software effort estimation by analogy

    Most practitioners recognise the important part accurate estimates of development effort play in the successful management of major software projects. However, it is widely recognised that current estimation techniques are often very inaccurate, while studies (Heemstra 1992; Lederer and Prasad 1993) have shown that effort estimation research is not being effectively transferred from the research domain into practical application. Traditionally, research has focused almost exclusively on the advancement of algorithmic models (e.g. COCOMO (Boehm 1981) and SLIM (Putnam 1978)), where effort is commonly expressed as a function of system size. In recent years, however, there has been a discernible movement away from algorithmic models, with non-algorithmic systems (often encompassing machine learning facets) being actively researched. This is potentially a very exciting and important time in this field, with new approaches regularly being proposed. One such technique, estimation by analogy, is the focus of this thesis. The principle behind estimation by analogy is that past experience can often provide insights and solutions to present problems. Software projects are characterised in terms of collectable features (such as the number of screens or the size of the functional requirements) and stored in a historical case base as they are completed. Once a case base of sufficient size has been cultivated, new projects can be estimated by finding similar historical projects and reusing the recorded effort. To make estimation by analogy feasible, it was necessary to construct a software tool, dubbed ANGEL, which allowed the collection of historical project data and the generation of estimates for new software projects. A substantial empirical validation of the approach was made, encompassing approximately 250 real historical software projects across eight industrial data sets, using stepwise regression as a benchmark. Significance tests on the results accepted the hypothesis (at the 1% significance level) that estimation by analogy is a superior prediction system to stepwise regression in terms of accuracy. A study was also made of the sensitivity of the analogy approach: by growing project data sets in a pseudo time-series fashion, it was possible to answer pertinent questions about the approach, such as what the effects of outlying projects are and what the minimum data set size is. The main conclusions of this work are that estimation by analogy is a viable estimation technique that would seem to offer some advantages over algorithmic approaches, including improved accuracy, easier use of categorical features, and an ability to operate even where no statistical relationships can be found.
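
    The retrieval step described above — characterise completed projects as feature vectors, find the most similar historical projects, and reuse their recorded effort — can be sketched in a few lines. This is a minimal illustration of the analogy principle, not the ANGEL tool itself; min-max scaling, Euclidean distance, and the mean over k = 3 analogies are assumptions made for the sketch.

        import numpy as np

        def estimate_by_analogy(history, efforts, query, k=3):
            """Predict effort for `query` as the mean recorded effort of its
            k nearest historical projects (Euclidean distance on min-max
            scaled features)."""
            history = np.asarray(history, dtype=float)
            lo, hi = history.min(axis=0), history.max(axis=0)
            span = np.where(hi > lo, hi - lo, 1.0)  # guard constant features
            scaled = (history - lo) / span
            q = (np.asarray(query, dtype=float) - lo) / span
            dist = np.linalg.norm(scaled - q, axis=1)
            nearest = np.argsort(dist)[:k]
            return float(np.mean(np.asarray(efforts, dtype=float)[nearest]))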

    On the application of artificial intelligence and human computation to the automation of agile software task effort estimation

    Software effort estimation (SEE), as part of the wider project planning and product road mapping process, occurs throughout a software development life cycle. A variety of effort estimation methods have been proposed in the literature, including algorithmic methods, expert-based methods, and, more recently, methods based on techniques drawn from machine learning and natural language processing. In general, the consensus in the literature is that expert-based methods such as Planning Poker are more reliable than automated effort estimation. However, these methods are labour intensive and difficult to scale to large projects. To address this limitation, this thesis investigates the feasibility of using human computation techniques to coordinate crowds of inexpert workers to predict expert-comparable effort estimates for a given software development task. The research followed an empirical methodology and used four different methods: literature review, replication, a series of laboratory experiments, and ethnography. The literature review uncovered a lack of suitable datasets that include the attributes of descriptive text (corpus), actual cost, and expert estimate for a given software development task; thus, a new dataset was developed to meet the necessary requirements. Next, effort estimation based on recent natural language processing advancements was evaluated and compared with expert estimates. The results suggest that there was no significant improvement, and the automated approach was still outperformed by expert estimates. Therefore, the feasibility of scaling the Planning Poker effort estimation method by using human computation in a micro-task crowdsourcing environment was explored. A series of pilot experiments were conducted to find a suitable design for adapting Planning Poker to a crowd environment, resulting in a new estimation method called Crowd Planning Poker (CPP). The pilot experiments revealed that a significant proportion of the crowd submitted poor-quality assignments; therefore, an approach to actively managing the quality of SEE work was proposed and evaluated before being integrated into the CPP method. A substantial overall evaluation was then conducted. The results demonstrated that crowd workers were able to discriminate between tasks of varying complexity and produce estimates that were comparable with those of experts, at a substantially reduced cost compared with small teams of domain experts. It was further noted in the experiments that crowd workers provide useful insights as to the resolution of the task. Therefore, as a final step, fine-grained details about crowd workers' behaviour, including actions taken and artifacts reviewed, were used in an ethnographic study to understand how effort estimation takes place in a crowd. Four persona archetypes were developed to describe the crowd behaviours, and the results of the behaviour analysis were confirmed by surveying the crowd workers.
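
    The abstract does not say how individual crowd votes are combined, so the following is a purely hypothetical sketch of one plausible aggregation rule for a Crowd Planning Poker round: take the median of the submitted story-point votes and snap it to the nearest card in the deck. The deck values and the median rule are assumptions, not details of the CPP method.

        import statistics

        DECK = [1, 2, 3, 5, 8, 13, 21]  # a common Planning Poker deck (assumption)

        def aggregate_round(votes):
            """Combine independent crowd story-point votes into one estimate
            by snapping their median to the nearest card value."""
            median = statistics.median(votes)
            return min(DECK, key=lambda card: abs(card - median))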

    Effort Estimation Methods in Software Development Using Machine Learning Algorithms

    Estimation of effort for proposed software is one of the most essential activities in project management. Proper effort estimation is desirable in order to avoid failures in a project, and it is a practice adopted by developers at the very beginning of the software development life cycle. Estimating effort and schedule with high accuracy is a challenge that attracts the attention of researchers as well as practitioners. Predicting the effort required to develop software to a certain level of accuracy is a difficult assignment for a manager or system analyst when the requirements are not clearly identified. Effort estimation helps project managers determine the time and effort required for the successful completion of a project, and appropriate software effort estimation is a primary requirement for an organization to develop quality products within a planned time frame. For measuring the cost and effort of software development, traditional estimation techniques such as the Constructive Cost Model (COCOMO) and Function Point Analysis (FPA) have not proved very satisfactory, because of uncertainties associated with their respective parameters, Lines Of Code (LOC) and Function Points (FP), which were conceived for procedural programming. Procedure-oriented design separates data and procedures, whereas the accepted practice of the present day, object-oriented design, combines both. Since the class and the use case are the basic logical units of an object-oriented system, using the Class Point (CP) and Use Case Point (UCP) approaches to estimate project effort helps obtain more accurate results. For projects in the area of Web engineering, effort estimation is identified as a critical issue; considering these facts, there is a strong need for formal estimation of web-based projects, which can be accomplished with the help of the International Software Benchmarking Standards Group (ISBSG) dataset. Similarly, for agile projects, the Story Point Approach (SPA) is used to measure the effort required to implement a user story; by adding up the estimates of the user stories finished during an iteration (story points per iteration), the project velocity is obtained. The datasets related to CP, UCP, and SPA are collected from previous projects reported in research articles or from industry in order to assess the results. Machine learning (ML) techniques help produce more accurate estimates when there are complex relationships between inputs and outputs and when the inputs are distorted by high noise levels. A number of past research studies indicate that no single technique turns out to be the best for all cases, because performance depends on the type of function being predicted, variations in the properties of the collected data, the number of tests, the noise ratio, and so on. Hence, using ML techniques to cope with issues arising in real-life situations is considered worthwhile. The research work carried out here presents the use of various ML techniques for software effort estimation using the CP, UCP, Web-based, and SPA approaches; the ML techniques are applied to the related datasets to predict the required effort.
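
    As a concrete point of reference for the Use Case Point approach mentioned above, the standard UCP computation weights actors and use cases and then adjusts by technical and environmental factors; the productivity factor of 20 person-hours per UCP used below is a commonly cited default, not a value taken from this work.

        def use_case_points(uaw, uucw, tcf, ecf, hours_per_ucp=20):
            """UCP = (UAW + UUCW) * TCF * ECF, where UAW and UUCW are the
            weighted actor and use-case counts and TCF and ECF are the
            technical and environmental complexity factors. Returns the
            UCP count and the effort estimate in person-hours."""
            ucp = (uaw + uucw) * tcf * ecf
            return ucp, ucp * hours_per_ucp

        # e.g. use_case_points(uaw=12, uucw=90, tcf=1.02, ecf=0.85)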

    Software project economics: A roadmap

    The objective of this paper is to consider research progress in the field of software project economics with a view to identifying important challenges and promising research directions. I argue that this is an important sub-discipline since it underpins any cost-benefit analysis used to justify the resourcing, or otherwise, of a software project. To accomplish this, I conducted a bibliometric analysis of peer-reviewed research articles to identify major areas of activity. My results indicate that the primary goal of more accurate cost prediction systems remains largely unachieved. However, there are a number of new and promising avenues of research, including how to combine results from primary studies, how to integrate multiple predictions, and how to place greater emphasis upon the human aspects of prediction tasks. I conclude that the field is likely to remain very challenging due to the people-centric nature of software engineering, since it is in essence a design task. Nevertheless, the need for good economic models will grow rather than diminish as software becomes increasingly ubiquitous.

    Can k-NN imputation improve the performance of C4.5 with small software project data sets? A comparative evaluation

    Missing data is a widespread problem that can affect the ability to use data to construct effective prediction systems. We investigate a common machine learning technique that can tolerate missing values, namely C4.5, to predict cost using six real-world software project databases. We analyze the predictive performance after applying the k-NN missing data imputation technique, to see whether it is better to tolerate missing data or to impute missing values and then apply the C4.5 algorithm. For the investigation, we simulated three missingness mechanisms, three missing data patterns, and five missing data percentages. We found that k-NN imputation can improve the prediction accuracy of C4.5. At the same time, both C4.5 and k-NN are little affected by the missingness mechanism, but the missing data pattern and the missing data percentage have a strong negative impact upon prediction (or imputation) accuracy, particularly if the missing data percentage exceeds 40%.
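
    For illustration, the impute-then-learn pipeline evaluated here can be reproduced with scikit-learn's KNNImputer; the array below is toy data, and C4.5 would be approximated by whatever decision-tree learner is at hand.

        import numpy as np
        from sklearn.impute import KNNImputer

        X = np.array([[2.0, 120.0,    5.0],
                      [3.0, np.nan,   7.0],
                      [2.5, 150.0, np.nan],
                      [4.0, 300.0,   11.0]])

        # Each NaN is replaced by the mean of that column over the 2 nearest
        # rows (distance computed on the features both rows share); the
        # completed data can then be fed to a C4.5-style decision tree.
        X_complete = KNNImputer(n_neighbors=2).fit_transform(X)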

    Feature weighting techniques for CBR in software effort estimation studies: A review and empirical evaluation

    Context: Software effort estimation is one of the most important activities in the software development process. Unfortunately, estimates are often substantially wrong. Numerous estimation methods have been proposed, including Case-based Reasoning (CBR). In order to improve CBR estimation accuracy, many researchers have proposed feature weighting techniques (FWT). Objective: Our purpose is to systematically review the empirical evidence to determine whether FWT leads to improved predictions. In addition, we evaluate these techniques from the perspectives of (i) approach, (ii) strengths and weaknesses, (iii) performance, and (iv) experimental evaluation approach, including the data sets used. Method: We conducted a systematic literature review of published, refereed primary studies on FWT (2000-2014). Results: We identified 19 relevant primary studies. These reported a range of different techniques. 17 out of 19 make benchmark comparisons with standard CBR, and 16 out of 17 studies report improved accuracy. Using a one-sample sign test, this positive impact is significant (p = 0.0003). Conclusion: The actionable conclusion from this study is that our review of all relevant empirical evidence supports the use of FWTs, and we recommend that researchers and practitioners give serious consideration to their adoption.
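
    The common thread of the reviewed FWTs is that each feature's contribution to case similarity is scaled by a learned weight. A minimal sketch of the weighted Euclidean distance that such techniques typically plug into CBR retrieval (the example weight values are illustrative):

        import numpy as np

        def weighted_distance(case_a, case_b, weights):
            """Weighted Euclidean distance: weights[i] scales the influence
            of feature i when retrieving similar past projects."""
            a, b, w = (np.asarray(v, dtype=float) for v in (case_a, case_b, weights))
            return float(np.sqrt(np.sum(w * (a - b) ** 2)))

        # e.g. weighted_distance([0.2, 0.9, 0.5], [0.3, 0.7, 0.5], [1.0, 2.5, 0.1])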