
    Analyzing the Non-Functional Requirements in the Desharnais Dataset for Software Effort Estimation

    Studying the quality requirements (also known as Non-Functional Requirements, NFR) of a system is crucial in Requirements Engineering. Many software projects fail because they neglect or fail to incorporate NFR during the software development life cycle. This paper analyzes the importance of quality requirements attributes in software effort estimation models based on the Desharnais dataset. The Desharnais dataset is a collection of eighty-one software projects, each described by twelve attributes, developed by a Canadian software house. The analysis studies the influence of each quality requirements attribute, as well as the influence of all quality requirements attributes combined, when estimating software effort using regression and Artificial Neural Network (ANN) models. The evaluation criteria used in this investigation include the Mean Magnitude of Relative Error (MMRE), the Prediction Level (PRED), the Root Mean Squared Error (RMSE), the Mean Error, and the coefficient of determination (R²). Results show that the quality attribute “Language” is the most statistically significant when estimating software effort. Moreover, if all quality requirements attributes are eliminated in the training stage and software effort is predicted based on software size only, the error (MMRE) doubles.
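    The accuracy criteria named above are straightforward to compute. Below is a minimal Python sketch of MMRE and PRED(l); the effort values are hypothetical placeholders, not figures from the Desharnais dataset.

    import numpy as np

    def mmre(actual, predicted):
        # Mean Magnitude of Relative Error: mean of |actual - predicted| / actual.
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        return np.mean(np.abs(actual - predicted) / actual)

    def pred(actual, predicted, level=0.25):
        # PRED(l): fraction of projects whose relative error is at most `level`.
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        mre = np.abs(actual - predicted) / actual
        return np.mean(mre <= level)

    # Hypothetical effort values (person-hours), not taken from the Desharnais dataset.
    actual = [3500, 5200, 1200, 8400]
    predicted = [3900, 4700, 1500, 7600]
    print(f"MMRE = {mmre(actual, predicted):.3f}, PRED(25) = {pred(actual, predicted):.2f}")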

    A Review: Effort Estimation Model for Scrum Projects using Supervised Learning

    Effort estimation is a critical component of Agile practice that helps cross-functional teams plan and prioritize their work. Agile approaches have emerged in recent years as a more adaptable way of running software projects because they consistently produce a workable product that is developed incrementally, preventing projects from failing entirely. Agile software development enables teams to collaborate directly with clients and adjust swiftly to changing requirements, producing results that are distinct, incremental, and targeted. It has been noted that the present Scrum estimation approach relies heavily on historical data from previous projects and expert opinion, while existing agile estimation methods such as analogy and planning poker become unreliable in the absence of historical data and experts. User Stories are used to estimate effort in the Agile approach, which has been adopted by 60–70% of software businesses. The goal of this study is to review a variety of strategies and techniques used to gauge and forecast effort. Additionally, this paper reviews the supervised machine learning methods most suited for predictive analysis.

    Explanatory and Causality Analysis in Software Engineering

    Software fault proneness and software development effort are two key areas of software engineering. Improving them will significantly reduce cost and promote good planning and practice in developing and managing software projects. Traditionally, studies of software fault proneness and software development effort focused on analysis and prediction, which can help answer questions like “when” and “where”. The focus of this dissertation is on explanatory and causality studies that address questions like “why” and “how”.
    First, we applied a case-control study to explain software fault proneness. We found that Bugfixes (pre-release bugs), Developers, Code Churn, and the Age of a file are the main contributors to post-release bugs in some open-source projects. In terms of interactions, we found that Bugfixes and Developers reduced the risk of post-release software faults. The explanatory models were tested for prediction, and their performance was comparable to or better than the top-performing classifiers used in related studies. Our results indicate that software project practitioners should pay more attention to the pre-release bug fixing process and the number of Developers assigned, as well as their interaction. They also need to pay more attention to new files (less than one year old), which contributed significantly more to post-release bugs than old files.
    Second, we built a model that explains and predicts multiple levels of software development effort and measured the effects of several metrics and their interactions using categorical regression models. The final models for the three data sets used were statistically fit, and their performance was comparable to related studies. We found that project size, duration, the existence of any type of faults, the use of first- or second-generation programming languages, and team size significantly increased the software development effort. On the other hand, the interactions between duration and defective projects, and between duration and team size, reduced the software development effort. These results suggest that software practitioners should pay extra attention to project duration and the team size assigned to every task because, when these increased from a low to a higher level, they significantly increased the software development effort.
    Third, a structural equation modeling method was applied for causality analysis of software fault proneness. The method combined statistical and regression analysis to find the direct and indirect causes of software faults using the partial least squares path modeling method. We found direct and indirect paths from the measurement models that lead to post-release bugs. Specifically, the highest direct effect came from change requests, while changing the code had a minor impact on software faults. The highest impact on code change resulted from change requests (either for bug fixing or refactoring). Interestingly, the indirect impact of code characteristics on software fault proneness was higher than the direct impact. We found a similar level of direct and indirect impact from code characteristics on code change.
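    To make the second study's approach concrete, the sketch below fits an ordinary least squares regression with interaction terms between duration, team size, and a defectiveness indicator. The column names and data are hypothetical placeholders rather than the dissertation's datasets, and the model form is only an assumption in the spirit of the categorical regression described above.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical project records (effort in person-hours, duration in months).
    df = pd.DataFrame({
        "effort":    [1200, 3400, 800, 5600, 2100, 4300, 950, 6100],
        "duration":  [6, 12, 4, 18, 8, 14, 5, 20],
        "team_size": [3, 7, 2, 10, 4, 8, 3, 12],
        "defective": ["no", "yes", "no", "yes", "no", "yes", "no", "yes"],
    })

    # Main effects plus the duration:team_size and duration:defective interactions.
    model = smf.ols("effort ~ duration * team_size + duration * C(defective)", data=df).fit()
    print(model.summary())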

    Review of Existing Datasets Used for Software Effort Estimation

    Software Effort Estimation (SEE) produces an estimate of the amount of work that will be necessary to finish a project successfully. Managers usually want to know ahead of time how demanding a new project will be so they can allocate their limited resources fairly. In practice, it is common to use effort datasets to train a model that predicts how much work a project will take. Training a good estimator requires enough data, but most data owners are reluctant to share effort data from their closed-source projects because of privacy concerns, so only a small amount of effort data is available. The purpose of this research was to evaluate the quality of 15 datasets that have been widely utilized in studies of software project estimation. The analysis shows that most of the selected studies use artificial neural networks (ANN) as the machine learning model, NASA datasets as the data source, and the mean magnitude of relative error (MMRE) as the accuracy measure. In most cases, ANN and support vector machines (SVM) have performed better than other ML techniques.

    Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review

    Context: The International Software Benchmarking Standards Group (ISBSG) maintains a software development repository with over 6000 software projects. This dataset makes it possible to estimate a project's size, effort, duration, and cost. Objective: The aim of this study was to determine how, and to what extent, ISBSG has been used by researchers from 2000, when the first papers were published, until June of 2012. Method: A systematic mapping review was used as the research method, applied to over 129 papers obtained after the filtering process. Results: The papers were published in 19 journals and 40 conferences. Thirty-five percent of the papers published between 2000 and 2011 have received at least one citation in journals, and only five papers have received six or more citations. The effort variable is the focus of 70.5% of the papers, 22.5% center their research on a variable other than effort, and 7% do not consider any target variable. Additionally, in as many as 70.5% of papers, effort estimation is the research topic, followed by dataset properties (36.4%). The most frequent methods are regression (61.2%), machine learning (35.7%), and estimation by analogy (22.5%). ISBSG is used as the only support in 55% of the papers, while the remaining papers use complementary datasets. ISBSG release 10 is used most frequently, with 32 references. Finally, some benefits and drawbacks of the usage of ISBSG are highlighted. Conclusion: This work presents a snapshot of the existing usage of ISBSG in software development research. ISBSG offers a wealth of information regarding practices from a wide range of organizations, applications, and development types, which constitutes its main potential. However, a data preparation process is required before any analysis. Lastly, the potential of ISBSG to support new research is also outlined.
    Fernández Diego, M.; González-Ladrón-De-Guevara, F. (2014). Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review. Information and Software Technology, 56(6), 527-544. doi:10.1016/j.infsof.2014.01.003

    Towards making functional size measurement easily usable in practice

    Functional Size Measurement methods (like the IFPUG Function Point Analysis and COSMIC methods) are widely used to quantify the size of applications. However, the measurement process is often too long or too expensive, or it requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified measurement methods have been proposed. This research explores easily usable functional size measurement methods, aiming to improve efficiency, reduce difficulty and cost, and make functional size measurement widely adopted in practice. The first stage of the research involved the study of functional size measurement methods (in particular Function Point Analysis and COSMIC), simplified methods, and measurement based on measurement-oriented models. Then, we modeled a set of applications in a measurement-oriented way and obtained UML models suitable for functional size measurement. From these UML models we derived both functional size measures and object-oriented measures. Using these measures it was possible to: 1) evaluate existing simplified functional size measurement methods and derive our own simplified model; 2) explore whether simplified methods can be used in various stages of modeling and evaluate their accuracy; 3) analyze the relationship between functional size measures and object-oriented measures. In addition, the conversion between FPA and COSMIC was studied as an alternative simplified functional size measurement process.
    Our research revealed that: 1) In general it is possible to size software via simplified measurement processes with acceptable accuracy. In particular, simplifying the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the details of both data and operations. The models obtained from our dataset yielded results similar to those reported in the literature. All simplified measurement methods that use predefined weights for all the transaction and data types identified in Function Point Analysis provided similar results, characterized by acceptable accuracy. On the contrary, methods that rely on just one of the elements that contribute to functional size tend to be quite inaccurate. In general, different methods showed different accuracy for Real-Time and non-Real-Time applications. 2) It is possible to write progressively more detailed and complete UML models of user requirements that provide the data required by the simplified COSMIC methods. These models yield progressively more accurate measures of the modeled software. Initial measures are based on simple models and are obtained quickly and with little effort. As the models grow in completeness and detail, the measures increase in accuracy. Developers that use UML for requirements modeling can obtain early estimates of an application's size at the beginning of the development process, when only very simple UML models have been built, and can obtain increasingly accurate size estimates as knowledge of the product increases and the UML models are refined accordingly. 3) Both Function Point Analysis and COSMIC functional size measures appear correlated with object-oriented measures. In particular, associations with basic object-oriented measures were found: Function Points appear associated with the number of classes, the number of attributes, and the number of methods; CFP appear associated with the number of attributes. This result suggests that even a very basic UML model, like a class diagram, can support size measures that appear equivalent to functional size measures (which are much harder to obtain). Moreover, object-oriented measures can be obtained automatically from models, thus dramatically decreasing the measurement effort in comparison with functional size measurement. In addition, we proposed a conversion method between Function Points and COSMIC based on analytical criteria.
    Our research has expanded the knowledge on how to simplify the methods for measuring the functional size of software, i.e., the measure of functional user requirements. Besides providing information immediately usable by developers, the research also presents examples of analyses that can be replicated by other researchers, to increase the reliability and generality of the results.
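    As a rough illustration of the correlation reported in point 3, the sketch below fits a linear model relating basic object-oriented measures (numbers of classes, attributes, and methods) to Function Point counts. The numbers are made up for illustration, not measurements from the thesis dataset, and the linear form is only an assumption.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Each row: [number of classes, number of attributes, number of methods]
    oo_measures = np.array([
        [12,  85, 140],
        [25, 160, 300],
        [ 8,  50,  90],
        [40, 260, 520],
        [18, 120, 210],
    ])
    function_points = np.array([210, 420, 150, 700, 330])  # hypothetical FP counts

    model = LinearRegression().fit(oo_measures, function_points)
    print("coefficients:", model.coef_, "intercept:", model.intercept_)
    # Predict the FP count of a new application from its class-diagram counts.
    print("predicted FP:", model.predict([[20, 130, 240]])[0])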

    Multifunctional optimized group method data handling for software effort estimation

    Accurate software effort estimation is in high demand. Stakeholders need effective and efficient software development processes whose estimation accuracy suits all data types. Nevertheless, finding an effort estimation model with good accuracy for this purpose is hard. Group Method of Data Handling (GMDH) algorithms have been widely used for modelling and identifying complex systems and can potentially be applied to software effort estimation. However, few studies determine the best architecture and optimal weight coefficients of the transfer function for the GMDH model. This study proposes a hybrid multifunctional GMDH with Artificial Bee Colony (GMDH-ABC) based on a combination of four individual GMDH models, namely GMDH-Polynomial, GMDH-Sigmoid, GMDH-Radial Basis Function, and GMDH-Tangent. The best GMDH architecture is determined based on an L9 Taguchi orthogonal array. Five datasets (COCOMO, Desharnais, Albrecht, Kemerer, and ISBSG) were used to validate the proposed models. Missing values in the datasets are imputed with the developed MissForest Multiple Imputation (MFMI) method. The Mean Absolute Percentage Error (MAPE) was used as the performance measure. The results showed that the GMDH-ABC model achieved more than 50% improvement over the standard conventional GMDH models and the benchmark ANN model on all datasets. Accuracy on the COCOMO dataset improved by 49% compared to the conventional GMDH-LSM, and improvements of 71%, 63%, 67%, and 82% were obtained for the Desharnais, Albrecht, Kemerer, and ISBSG datasets, respectively. These results indicate that the proposed GMDH-ABC model can achieve higher accuracy in software effort estimation.
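    The basic GMDH building block is a low-order polynomial fitted by least squares, and MAPE is the error measure mentioned above. The sketch below fits the two-input quadratic (Ivakhnenko) polynomial by least squares on synthetic data and scores it with MAPE; the data and single-neuron setup are illustrative assumptions, not the paper's GMDH-ABC procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    x1, x2 = rng.uniform(1, 10, 30), rng.uniform(1, 10, 30)
    y = 5 + 2 * x1 + 0.5 * x2 + 0.3 * x1 * x2 + rng.normal(0, 1, 30)  # synthetic "effort"

    # Design matrix for the quadratic partial description:
    # y = a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coeffs

    # Mean Absolute Percentage Error of the fitted polynomial.
    mape = np.mean(np.abs((y - pred) / y)) * 100
    print("coefficients:", np.round(coeffs, 3))
    print(f"MAPE = {mape:.2f}%")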

    Evaluating an automated procedure of machine learning parameter tuning for software effort estimation

    Software effort estimation requires accurate prediction models. Machine learning algorithms have been used to create more accurate estimation models. However, these algorithms are sensitive to factors such as the choice of hyper-parameters. To reduce this sensitivity, automated approaches for hyper-parameter tuning have recently been investigated, and further research is needed on their effectiveness in the context of software effort estimation. Such evaluations could help us understand which hyper-parameter settings can be adjusted to improve model accuracy, and in which specific contexts tuning benefits model performance. The goal of this work is to develop an automated procedure for machine learning hyper-parameter tuning in the context of software effort estimation. The automated procedure builds and evaluates software effort estimation models to determine the most accurate evaluation schemes. The methodology followed in this work consists of first performing a systematic mapping study to characterize existing hyper-parameter tuning approaches in software effort estimation, then developing the procedure to automate the evaluation of hyper-parameter tuning, and finally conducting controlled quasi-experiments to evaluate the automated procedure. From the systematic mapping study we found that the effort estimation literature has favored the use of grid search. The results obtained in our quasi-experiments demonstrate that fast, less exhaustive tuners are viable in place of grid search: randomly evaluating 60 hyper-parameter settings can be as good as grid search, and multiple state-of-the-art tuners were more effective than this random search in only 6% of the evaluated dataset-model combinations. We endorse random search, genetic algorithms, flash, differential evolution, and tabu and harmony search as effective tuners.
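    A minimal sketch of the "randomly evaluate 60 hyper-parameter settings" idea, using scikit-learn's RandomizedSearchCV on a synthetic regression problem. The estimator, parameter ranges, and data are illustrative assumptions, not the experimental setup of this study.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import RandomizedSearchCV

    # Synthetic stand-in for an effort dataset.
    X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=0)

    param_distributions = {
        "n_estimators": np.arange(50, 501, 50),
        "max_depth": [None, 4, 8, 16],
        "min_samples_leaf": [1, 2, 4, 8],
    }

    search = RandomizedSearchCV(
        RandomForestRegressor(random_state=0),
        param_distributions=param_distributions,
        n_iter=60,  # 60 random configurations, as discussed above
        scoring="neg_mean_absolute_error",
        cv=5,
        random_state=0,
    )
    search.fit(X, y)
    print("best parameters:", search.best_params_)
    print("best CV MAE:", -search.best_score_)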