
    Multimedia courseware: Never mind the quality how much will it cost to develop?

    This paper evaluates multimedia courseware costing techniques, including the US Air Force Interactive Courseware Method (Golas, 1993), CBT Analyst (Kearsley, 1985), CEAC (Schooley, 1988) and MEEM (Marshall, Samson, Dugard, & Scott, 1994), against data from ten multimedia courseware developments. The Relative Error and Mean Absolute Relative Error (MARE) are calculated for each method to allow comparison.
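    As an illustration of the accuracy measures used in the comparison, the sketch below computes Relative Error and MARE. The cost figures are invented for demonstration; they are not the paper's data.

```python
# Illustrative computation of Relative Error and Mean Absolute Relative
# Error (MARE) for a set of courseware cost estimates. The cost figures
# are invented for demonstration only.

def relative_error(actual, estimate):
    """Relative Error of a single estimate: (actual - estimate) / actual."""
    return (actual - estimate) / actual

def mare(actuals, estimates):
    """Mean Absolute Relative Error over paired actual/estimated costs."""
    errors = [abs(relative_error(a, e)) for a, e in zip(actuals, estimates)]
    return sum(errors) / len(errors)

# Hypothetical actual vs. estimated development costs (person-hours).
actual_costs = [1200.0, 850.0, 2300.0]
estimated_costs = [1000.0, 900.0, 2600.0]  # e.g. one method's estimates

print(mare(actual_costs, estimated_costs))  # ~0.119
```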

    Can k-NN imputation improve the performance of C4.5 with small software project data sets? A comparative evaluation

    Missing data is a widespread problem that can affect the ability to use data to construct effective prediction systems. We investigate a common machine learning technique that can tolerate missing values, namely C4.5, to predict cost using six real-world software project databases. We analyze the predictive performance after using the k-NN missing data imputation technique, to see whether it is better to tolerate missing data or to impute missing values first and then apply the C4.5 algorithm. For the investigation, we simulated three missingness mechanisms, three missing data patterns, and five missing data percentages. We found that k-NN imputation can improve the prediction accuracy of C4.5. At the same time, both C4.5 and k-NN are little affected by the missingness mechanism, but the missing data pattern and the missing data percentage have a strong negative impact upon prediction (or imputation) accuracy, particularly if the missing data percentage exceeds 40%.
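    The sketch below illustrates the compared pipelines using scikit-learn as a stand-in: KNNImputer for the k-NN imputation step and DecisionTreeRegressor (CART rather than C4.5, which scikit-learn does not provide) as the tree learner. The data and the MCAR missingness simulation are invented for demonstration, not taken from the paper's datasets.

```python
# Sketch of the compared pipelines using scikit-learn as a stand-in:
# KNNImputer for k-NN imputation, DecisionTreeRegressor (CART, not C4.5)
# as the tree learner. Data and the MCAR simulation are invented.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1, 100, size=(60, 4))   # hypothetical project features
y = X @ np.array([3.0, 1.5, 0.5, 2.0]) + rng.normal(0, 5, 60)  # hypothetical effort

# Simulate an MCAR (missing completely at random) mechanism:
# delete 20% of the feature values at random positions.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.20] = np.nan

# Impute with k-NN, then train the tree on the completed data.
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)
model = DecisionTreeRegressor(random_state=0).fit(X_imputed, y)
print(model.predict(X_imputed[:3]))
```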

    New product development resource forecasting

    Forecasting resource requirements for new product development (NPD) projects is essential for both strategic and tactical planning. Sophisticated, elegant planning tools to present data and inform decision-making do exist. However, in NPD, such tools run on unreliable, estimation-based resource information derived through undefined processes. This paper establishes that existing methods do not provide transparent, consistent, timely or accurate resource planning information, highlighting the need for a new approach to resource forecasting specifically in the field of NPD. The gap between the practical issues and the available methods points to the possibility of developing a novel design-of-experiments approach to creating resource forecasting models.

    The usage of ISBSG data fields in software effort estimation: A systematic mapping study

    The International Software Benchmarking Standards Group (ISBSG) maintains a repository of data about completed software projects. A common use of the ISBSG dataset is to investigate models to estimate a software project's size, effort, duration, and cost. The aim of this paper is to determine which variables in the ISBSG dataset have been used in software engineering to build effort estimation models, and to what extent. For that purpose, a systematic mapping study was applied to 107 research papers, obtained after a filtering process, that were published from 2000 until the end of 2013 and listed the independent variables used in their effort estimation models. The usage of ISBSG variables for filtering, as dependent variables, and as independent variables is described. The 20 variables (out of 71) most used as independent variables for effort estimation are identified and analysed in detail, with reference to the papers and types of estimation methods that used them. We propose guidelines that can help researchers make informed decisions about which ISBSG variables to select for their effort estimation models.
    González-Ladrón-De-Guevara, F.; Fernández-Diego, M.; Lokan, C. (2016). The usage of ISBSG data fields in software effort estimation: A systematic mapping study. Journal of Systems and Software, 113, 188-215. doi:10.1016/j.jss.2015.11.040
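    As a concrete illustration of the usage pattern the study maps (filtering projects on quality fields, then regressing effort on a few frequently used independent variables), here is a minimal sketch. The field names only approximate ISBSG fields and all values are invented; this is not the ISBSG dataset itself.

```python
# Minimal sketch of a common ISBSG usage pattern: filter on data-quality
# fields, then regress effort on frequently used independent variables.
# Column names approximate ISBSG fields; all values are invented.
import pandas as pd
import statsmodels.api as sm

# Tiny synthetic stand-in for an ISBSG extract.
df = pd.DataFrame({
    "DataQualityRating": ["A", "B", "A", "C", "B", "A"],
    "CountApproach": ["IFPUG"] * 6,
    "FunctionalSize": [120, 450, 300, 90, 610, 220],          # function points
    "MaxTeamSize": [4, 9, 6, 3, 12, 5],
    "SummaryWorkEffort": [950, 4100, 2600, 700, 5900, 1800],  # person-hours
})

# Typical filtering step: keep only high-quality-rated projects.
df = df[df["DataQualityRating"].isin(["A", "B"])]

# Effort regressed on commonly used independent variables (size, team).
X = sm.add_constant(df[["FunctionalSize", "MaxTeamSize"]].astype(float))
y = df["SummaryWorkEffort"].astype(float)
print(sm.OLS(y, X).fit().params)
```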

    Object-oriented software development effort prediction using design patterns from object interaction analysis

    Software project management is arguably the most important activity in modern software development projects. In the absence of realistic and objective management, the software development process cannot be managed in an effective way. Software development effort estimation is one of the most challenging and researched problems in project management. With the advent of object-oriented development, there have been studies to transpose some of the existing effort estimation methodologies to the new development paradigm. However, no holistic approach to estimation exists that allows an initial estimate produced in the requirements gathering phase to be refined through to the design phase. A SysML point methodology is proposed, based on a common, structured and comprehensive modeling language (OMG SysML), that factors the models corresponding to the primary phases of object-oriented development into producing an effort estimate. This dissertation presents a Function Point-like approach, named Pattern Point, conceived to estimate the size of object-oriented products using the design patterns found in object interaction modeling from the late OO analysis phase. In particular, two measures are proposed (PP1 and PP2) that are theoretically validated, showing that they satisfy well-known properties necessary for size measures. An initial empirical validation is performed to assess the usefulness and effectiveness of the proposed measures in predicting the development effort of object-oriented systems. Moreover, a comparative analysis is carried out, taking into account several other size measures. The experimental results show that the Pattern Point measure can be effectively used during the OOA phase to predict effort values with a high degree of confidence. The PP2 metric yielded the best results, with an aggregate PRED(0.25) = 0.874.
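    PRED(0.25), the accuracy measure reported above, is conventionally defined as the proportion of projects whose Magnitude of Relative Error (MRE) is at most 0.25. A minimal sketch, with invented effort figures:

```python
# PRED(0.25): the proportion of projects whose Magnitude of Relative
# Error (MRE = |actual - estimate| / actual) is at most 0.25.
# Effort figures below are invented for illustration.

def pred(actuals, estimates, level=0.25):
    """PRED(level): share of estimates with MRE <= level."""
    mres = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    return sum(m <= level for m in mres) / len(mres)

actual_effort = [400.0, 950.0, 620.0, 1300.0]
estimated_effort = [430.0, 700.0, 600.0, 1250.0]
print(pred(actual_effort, estimated_effort))  # 0.75: 3 of 4 within 25%
```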

    An Empirical investigation into software effort estimation by analogy

    Most practitioners recognise the important part accurate estimates of development effort play in the successful management of major software projects. However, it is widely recognised that current estimation techniques are often very inaccurate, while studies (Heemstra 1992; Lederer and Prasad 1993) have shown that effort estimation research is not being effectively transferred from the research domain into practical application. Traditionally, research has been almost exclusively focused on the advancement of algorithmic models (e.g. COCOMO (Boehm 1981) and SLIM (Putnam 1978)), where effort is commonly expressed as a function of system size. However, in recent years there has been a discernible movement away from algorithmic models, with non-algorithmic systems (often encompassing machine learning facets) being actively researched. This is potentially a very exciting and important time in this field, with new approaches regularly being proposed. One such technique, estimation by analogy, is the focus of this thesis. The principle behind estimation by analogy is that past experience can often provide insights and solutions to present problems. Software projects are characterised in terms of collectable features (such as the number of screens or the size of the functional requirements) and stored in a historical case base as they are completed. Once a case base of sufficient size has been cultivated, new projects can be estimated by finding similar historical projects and re-using the recorded effort. To make estimation by analogy feasible it became necessary to construct a software tool, dubbed ANGEL, which allowed the collection of historical project data and the generation of estimates for new software projects. A substantial empirical validation of the approach was made, encompassing approximately 250 real historical software projects across eight industrial data sets, using stepwise regression as a benchmark. Significance tests on the results accepted the hypothesis (at the 1% significance level) that estimation by analogy is a superior prediction system to stepwise regression in terms of accuracy. A study was also made of the sensitivity of the analogy approach. By growing project data sets in a pseudo time-series fashion it was possible to answer pertinent questions about the approach, such as: what are the effects of outlying projects, and what is the minimum data set size? The main conclusions of this work are that estimation by analogy is a viable estimation technique that would seem to offer some advantages over algorithmic approaches, including improved accuracy, easier use of categorical features, and an ability to operate even where no statistical relationships can be found.
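    The core analogy step can be sketched as follows. This is a generic illustration in the spirit of the approach, not the ANGEL tool itself: projects are characterised by features, the features are normalised, and the recorded effort of the k nearest historical projects is reused. All project data is invented.

```python
# Generic estimation-by-analogy sketch (not the ANGEL tool): normalise
# project features, find the k nearest completed projects by Euclidean
# distance, and reuse their recorded effort. All data is invented.
import numpy as np

def estimate_by_analogy(case_base, efforts, new_project, k=2):
    """Return the mean effort of the k most similar historical projects."""
    lo, hi = case_base.min(axis=0), case_base.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)   # avoid divide-by-zero
    cb = (case_base - lo) / scale             # normalise features to [0, 1]
    p = (new_project - lo) / scale
    dists = np.linalg.norm(cb - p, axis=1)
    nearest = np.argsort(dists)[:k]
    return efforts[nearest].mean()

# Hypothetical case base: [number of screens, functional size in FP].
case_base = np.array([[12, 150], [30, 420], [8, 90], [22, 310]], float)
efforts = np.array([1100.0, 3800.0, 650.0, 2700.0])  # person-hours

print(estimate_by_analogy(case_base, efforts, np.array([20.0, 280.0])))
```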