
    Feature weighting techniques for CBR in software effort estimation studies: A review and empirical evaluation

    Context: Software effort estimation is one of the most important activities in the software development process. Unfortunately, estimates are often substantially wrong. Numerous estimation methods have been proposed, including Case-based Reasoning (CBR). In order to improve CBR estimation accuracy, many researchers have proposed feature weighting techniques (FWT). Objective: Our purpose is to systematically review the empirical evidence to determine whether FWT leads to improved predictions. In addition, we evaluate these techniques from the perspectives of (i) approach, (ii) strengths and weaknesses, (iii) performance, and (iv) experimental evaluation approach, including the data sets used. Method: We conducted a systematic literature review of published, refereed primary studies on FWT (2000-2014). Results: We identified 19 relevant primary studies. These reported a range of different techniques. 17 of the 19 studies make benchmark comparisons with standard CBR, and 16 of those 17 report improved accuracy. Using a one-sample sign test, this positive impact is significant (p = 0.0003). Conclusion: The actionable conclusion from this study is that our review of all relevant empirical evidence supports the use of FWTs, and we recommend that researchers and practitioners give serious consideration to their adoption.
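    For readers unfamiliar with the mechanics, the sketch below illustrates what a feature weighting technique changes in CBR-based estimation: a weighted rather than uniform distance decides which historical projects count as analogues. The data and the `weighted_cbr_estimate` helper are hypothetical; a real FWT would learn the weights (e.g. via search-based optimization) rather than receive them.

```python
import numpy as np

def weighted_cbr_estimate(query, cases, efforts, weights, k=3):
    """Estimate effort for `query` as the mean effort of its k nearest
    analogues under a feature-weighted Euclidean distance.

    query   : (n_features,) array of the new project's features
    cases   : (n_cases, n_features) array of historical projects
    efforts : (n_cases,) array of known efforts
    weights : (n_features,) non-negative feature weights (the FWT output)
    """
    # Weighted Euclidean distance from the query to every historical case.
    diffs = cases - query
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))
    # Average the efforts of the k closest analogues.
    nearest = np.argsort(dists)[:k]
    return efforts[nearest].mean()

# Toy usage with synthetic data: uniform weights reduce this to the
# standard CBR that the reviewed studies use as their benchmark.
rng = np.random.default_rng(0)
cases = rng.random((20, 4))
efforts = rng.random(20) * 100
query = rng.random(4)
print(weighted_cbr_estimate(query, cases, efforts, weights=np.ones(4)))
```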

    Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX)

    The modeling and analysis generic interface for external numerical codes (MAGIX) is a model optimizer developed under the framework of the coherent set of astrophysical tools for spectroscopy (CATS) project. The MAGIX package provides an easy interface between existing codes and an iterating engine that attempts to minimize deviations of the model results from available observational data, constraining the values of the model parameters and providing corresponding error estimates. Many models (and, in principle, not only astrophysical models) can be plugged into MAGIX to explore their parameter space and find the set of parameter values that best fits observational/experimental data. MAGIX complies with the data structures and reduction tools of ALMA (Atacama Large Millimeter Array), but can be used with other astronomical and with non-astronomical data.
    Comment: 12 pages, 15 figures, 2 tables; paper is also available at http://www.aanda.org/articles/aa/pdf/forth/aa20063-12.pd
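    The following is not MAGIX's own interface, only a minimal illustration of the workflow the abstract describes, using SciPy's generic least-squares fitter as a stand-in iterating engine: wrap an external model as a callable, fit its parameters to observations, and read error estimates off the covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in for an "external code": any callable mapping x and model
# parameters to predicted values can be plugged into the fitting engine.
def external_model(x, amplitude, width):
    return amplitude * np.exp(-(x / width) ** 2)

# Synthetic "observations" with noise (purely illustrative data).
x_obs = np.linspace(-3.0, 3.0, 60)
y_obs = external_model(x_obs, 2.0, 1.2) + np.random.normal(0.0, 0.05, x_obs.size)

# Iterate the model against the data: best-fit parameter values plus a
# covariance matrix, whose diagonal gives 1-sigma error estimates.
popt, pcov = curve_fit(external_model, x_obs, y_obs, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))
print("best-fit parameters:", popt)
print("1-sigma errors:", perr)
```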

    Insights on Research Techniques towards Cost Estimation in Software Design

    Software cost estimation is one of the most challenging tasks in project management, necessary to ensure smooth development operation and target achievement. Various standard tools and techniques for cost estimation have evolved and are practiced in the industry at present. However, the overall effectiveness of these techniques has never been investigated to date. This paper begins its contribution by presenting taxonomies of conventional cost-estimation techniques and then investigates the research trends towards the problems they most frequently address. The paper also reviews the existing techniques in a well-structured manner in order to highlight the problems addressed, the techniques used, the associated advantages, and the limitations explored in the literature. Finally, we briefly describe the open research issues identified, as an added contribution of this manuscript.

    Optimizing complexity weight parameter of use case points estimation using particle swarm optimization

    Among algorithmic frameworks for software development effort estimation, Use Case Points is one of the most widely used. Use Case Points is a well-known estimation framework designed mainly for object-oriented projects. Use Case Points uses the use case complexity weight as its essential parameter. The parameter is calculated from the number of actors and transactions of the use case. Nevertheless, the use case complexity weight is discontinuous, which can sometimes result in inaccurate measurements and abrupt classification of the use case. The objective of this work is to investigate the potential of integrating particle swarm optimization (PSO) with the Use Case Points framework. The optimizer algorithm is utilized to optimize the modified use case complexity weight parameter. We designed and conducted an experiment based on a real-life data set from three software houses. The proposed model's accuracy and performance are compared with other published results using the following evaluation metrics: standardized accuracy, effect size, mean balanced residual error, mean inverted balanced residual error, and mean absolute error. The benchmark models are polynomial regression, multiple linear regression, weighted case-based reasoning with PSO, fuzzy use case points, and standard Use Case Points. Experimental results show that the proposed model achieves the best standardized accuracy, 99.27%, and an effect size of 1.15 over the benchmark models. The results of our study are promising for researchers and practitioners because the proposed model is actually estimating, not guessing, and generates meaningful estimates that are statistically and practically significant.
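    As a rough illustration of the approach (not the paper's actual implementation), the sketch below computes Use Case Points effort from the three standard complexity classes and lets a minimal particle swarm search for the complexity weights that minimize mean absolute error on historical projects. All project data, bounds, and PSO coefficients are hypothetical.

```python
import numpy as np

# Standard Use Case Points: use cases are classed by transaction count as
# simple/average/complex with weights (traditionally 5, 10, 15); effort is
# UCP times a productivity factor (20 person-hours/UCP is a common default).
def ucp_effort(weights, counts, uaw, tcf, ecf, pf=20.0):
    uucw = np.dot(counts, weights)   # unadjusted use case weight
    ucp = (uaw + uucw) * tcf * ecf   # adjusted use case points
    return ucp * pf                  # effort in person-hours

# Hypothetical historical projects: per-class use case counts, actor weight
# totals, technical/environmental factors, and actual efforts.
counts = np.array([[10, 8, 3], [5, 12, 6], [20, 4, 1]], dtype=float)
uaw = np.array([9.0, 12.0, 7.0])
tcf = np.array([0.95, 1.02, 0.88])
ecf = np.array([0.92, 1.05, 0.97])
actual = np.array([5200.0, 7400.0, 4100.0])

def mae(weights):
    pred = np.array([ucp_effort(weights, c, a, t, e)
                     for c, a, t, e in zip(counts, uaw, tcf, ecf)])
    return np.abs(pred - actual).mean()

# Minimal particle swarm over the three complexity weights.
rng = np.random.default_rng(1)
n, dim = 30, 3
pos = rng.uniform(1.0, 20.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([mae(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1.0, 20.0)
    vals = np.array([mae(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
print("optimised complexity weights:", gbest, "MAE:", mae(gbest))
```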

    Optimizing Effort and Time Parameters of COCOMO II Estimation using Fuzzy Multi-objective PSO

    The estimation of software effort is an essential and crucial activity in the software development life cycle. Software effort estimation is a challenge that often arises in software projects, and a poor estimate results in poor project management. Various software cost estimation models have been introduced to address this problem. The Constructive Cost Model II (COCOMO II) is to a large extent the most considered and most broadly used model for cost estimation. To estimate the effort and the development time of a software project, the COCOMO II model uses cost drivers, scale factors, and lines of code. However, the model is still lacking in terms of accuracy in both effort and development time estimation. In this study, we investigate the influence of components and attributes in order to achieve a new, better accuracy improvement for the COCOMO II model, and we introduce the use of Gaussian Membership Function (GMF) fuzzy logic and the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm to calibrate and optimize the COCOMO II model parameters. The proposed method is applied to the Nasa93 dataset. The experimental results show that the proposed method is able to reduce error down to 11.891% and 8.082% from the perspective of the COCOMO II model. The method achieves better results than those of previous research, deals proficiently with imprecise data input, and further improves the reliability of the estimation method.
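    The sketch below illustrates the fuzzification idea with hypothetical numbers: a Gaussian membership function blends neighboring rating levels of a cost driver into a smooth effort multiplier, which then enters the standard COCOMO II effort equation. The multiplier values and ratings are illustrative, not the paper's calibrated ones; only the constants A = 2.94 and B = 0.91 are the published COCOMO II.2000 defaults.

```python
import numpy as np

# COCOMO II post-architecture effort equation:
#   Effort = A * Size^E * prod(EM_i),  with  E = B + 0.01 * sum(SF_j)
A, B = 2.94, 0.91  # standard COCOMO II.2000 calibration constants

def gaussian_mf(x, center, sigma):
    """Gaussian membership: the degree to which rating x belongs to a level."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def fuzzy_effort_multiplier(x, centers, values, sigma=0.5):
    """Replace the crisp rating-to-multiplier lookup with a GMF blend:
    each rating level contributes its multiplier weighted by membership,
    so a rating between two levels interpolates smoothly."""
    mu = gaussian_mf(x, np.asarray(centers), sigma)
    return (mu * np.asarray(values)).sum() / mu.sum()  # centroid defuzzification

def cocomo_ii_effort(size_ksloc, scale_factors, effort_multipliers):
    e = B + 0.01 * sum(scale_factors)
    return A * size_ksloc ** e * np.prod(effort_multipliers)

# Hypothetical cost driver: levels coded 0..4 (very low .. very high) with
# illustrative multipliers; a rating of 2.4 sits between nominal and high
# and receives a smoothly interpolated multiplier.
levels = [0, 1, 2, 3, 4]
multipliers = [1.4, 1.2, 1.0, 0.85, 0.7]
em = fuzzy_effort_multiplier(2.4, levels, multipliers)
print("fuzzified effort multiplier:", em)
print("effort (person-months):",
      cocomo_ii_effort(50.0, scale_factors=[3.1, 4.0, 2.5, 3.0, 1.8],
                       effort_multipliers=[em, 1.1, 0.95]))
```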

    Optimizing Effort Parameter of COCOMO II Using Particle Swarm Optimization Method

    Estimating the effort and cost of software is an important activity for software project managers. A poor estimate (an overestimate or underestimate) will result in poor software project management. To handle this problem, many researchers have proposed various models for estimating software cost. The Constructive Cost Model II (COCOMO II) is one of the best-known and most widely used models for estimating software cost. To estimate the cost of a software project, the COCOMO II model uses software size, cost drivers, and scale factors as inputs. However, this model is still lacking in terms of accuracy. To improve the accuracy of the COCOMO II model, this study examines the effect of the cost drivers and scale factors on the accuracy of effort estimation. In this study, we applied Particle Swarm Optimization (PSO) to optimize the parameters of the COCOMO II model. The proposed method is evaluated on the Turkish Software Industry dataset, which has 12 data items. The method can handle improper and uncertain inputs efficiently and improves the reliability of software effort estimates. The experimental results give an MMRE of 34.1939%, indicating high accuracy and a significant reduction from error rates of 698.9461% and 104.876%.
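    In the same spirit as the swarm sketch above, the following minimal example shows PSO calibrating a simplified effort model (effort = a * size^b) against MMRE, the metric reported in this abstract. The dataset values, parameter bounds, and swarm settings are invented for illustration, and the full COCOMO II form would also multiply in the effort multipliers and scale factors.

```python
import numpy as np

# MMRE = mean(|actual - predicted| / actual), the reported evaluation metric.
def mmre(actual, predicted):
    return np.mean(np.abs(actual - predicted) / actual)

# Simplified COCOMO-style model over two tunable parameters (a, b).
def predict(params, size):
    a, b = params
    return a * size ** b

# Hypothetical historical data (size in KSLOC, effort in person-months).
size = np.array([10.0, 23.0, 5.5, 46.0, 12.0])
effort = np.array([38.0, 110.0, 16.0, 260.0, 50.0])

# Minimal PSO over (a, b) with MMRE as the fitness function.
rng = np.random.default_rng(2)
n = 25
low, high = np.array([0.5, 0.8]), np.array([5.0, 1.3])
pos = rng.uniform(low, high, (n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([mmre(effort, predict(p, size)) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    fit = np.array([mmre(effort, predict(p, size)) for p in pos])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmin()].copy()
print("calibrated (a, b):", gbest, "MMRE:", mmre(effort, predict(gbest, size)))
```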