
    Resampling: an improvement of Importance Sampling in varying population size models

    Sequential importance sampling algorithms have been defined to estimate likelihoods in models of ancestral population processes. However, these algorithms are based on features of models with constant population size and become inefficient when the population size varies in time, making likelihood-based inference difficult in many demographic situations. In this work, we modify a previous sequential importance sampling algorithm to improve the efficiency of likelihood estimation. Our procedure is still based on features of the constant-size model, but uses a resampling technique with a new resampling probability distribution that depends on the pairwise composite likelihood. We tested our algorithm, called sequential importance sampling with resampling (SISR), on simulated data sets under different demographic scenarios. In most cases, SISR halved the computational cost for the same inferential accuracy, and in some cases reduced it a hundredfold. This study provides the first assessment of the impact of such resampling techniques on parameter inference using sequential importance sampling, and extends the range of situations where likelihood inference can easily be performed.
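    The general mechanism the abstract describes can be sketched generically: particles accumulate importance weights over steps, are resampled according to some probability distribution, and the weights are corrected so the likelihood estimator stays unbiased. This is a minimal illustration, not the paper's genealogical model; `step_weight` and `resample_prob` are hypothetical callables (in the paper, the resampling distribution is built from a pairwise composite likelihood).

```python
import random

def sis_with_resampling(n_particles, n_steps, step_weight, resample_prob, seed=0):
    """Generic sequential importance sampling with resampling.

    step_weight(state, t, rng) returns (new_state, incremental_weight).
    resample_prob(weights) returns normalized resampling probabilities.
    """
    rng = random.Random(seed)
    states = [0.0] * n_particles
    weights = [1.0] * n_particles
    for t in range(n_steps):
        for i in range(n_particles):
            states[i], w = step_weight(states[i], t, rng)
            weights[i] *= w
        # Resample particle indices from the chosen distribution, then
        # correct the weights so the likelihood estimate stays unbiased:
        # a particle drawn with probability p_i gets weight w_i / (N * p_i).
        probs = resample_prob(weights)
        idx = rng.choices(range(n_particles), weights=probs, k=n_particles)
        states = [states[i] for i in idx]
        weights = [weights[i] / (n_particles * probs[i]) for i in idx]
    return sum(weights) / n_particles  # likelihood estimate
```

    With uniform resampling probabilities this reduces to standard multinomial resampling; the paper's contribution is precisely the non-uniform choice of `resample_prob`.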

    Firefly Algorithm: Recent Advances and Applications

    Nature-inspired metaheuristic algorithms, especially those based on swarm intelligence, have attracted much attention over the last ten years. The firefly algorithm appeared about five years ago, and its literature has since expanded dramatically with diverse applications. In this paper, we briefly review the fundamentals of the firefly algorithm together with a selection of recent publications. We then discuss the optimality associated with balancing exploration and exploitation, which is essential for all metaheuristic algorithms. By comparison with an intermittent search strategy, we conclude that metaheuristics such as the firefly algorithm are better than the optimal intermittent search strategy. We also analyse the algorithms and their implications for higher-dimensional optimization problems.
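    The core update of the firefly algorithm moves each firefly toward every brighter (better) one, with attractiveness decaying exponentially in the squared distance, plus a small random step. The following is a minimal sketch of that standard update for continuous minimization; the bounds, parameter values, and decay schedule are illustrative assumptions, not taken from the paper.

```python
import math
import random

def firefly_minimize(f, dim, n=20, iters=100, alpha=0.2, beta0=1.0,
                     gamma=1.0, seed=0):
    """Minimal firefly algorithm minimizing f over [-5, 5]^dim (a sketch)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vals = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:  # firefly j is "brighter" (better)
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    vals[i] = f(X[i])
        alpha *= 0.97  # shrink the random step: exploration -> exploitation
    best = min(range(n), key=lambda i: vals[i])
    return X[best], vals[best]
```

    The decaying `alpha` is one concrete way to shift the exploration/exploitation balance the abstract discusses: early iterations search broadly, later ones refine around the brighter fireflies.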

    Parameter tuning of software effort estimation models using antlion optimization

    In this work, the antlion optimization (ALO) algorithm is employed, owing to its efficiency and wide applicability, to estimate the parameters of four modified variants of the basic constructive cost model (COCOMO). Three tests are carried out to show the effectiveness of ALO. First, ALO is applied to the Bailey and Basili dataset for the basic COCOMO model and Sheta's Models 1 and 2, and is compared with the firefly algorithm (FA), genetic algorithms (GA), and particle swarm optimization (PSO). Second, the parameters of Sheta's Models 1 and 2 and Uysal's Models 1 and 2 are optimized using the Bailey and Basili dataset; the results are compared with the directed artificial bee colony algorithm (DABCA), GA, and simulated annealing (SA). Third, ALO is applied to the basic COCOMO model on four large datasets, and the results are compared with the hybrid bat-inspired gravitational search algorithm (hBATGSA), improved BAT (IBAT), and BAT algorithms. In Tests 1 and 2, ALO outperformed the other algorithms. In Test 3, ALO was better than BAT and IBAT in terms of mean absolute error (MAE) and the number of best estimations, and proved to achieve better results than hBATGSA on two (datasets 2 and 4) of the four datasets explored, by the same measures.
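    The optimization problem underlying this abstract is small: the basic COCOMO model predicts effort as a * KLOC^b, and a metaheuristic searches for (a, b) minimizing an error measure such as MAE over a project dataset. The sketch below states that objective and uses plain random search as a placeholder optimizer; ALO itself, the dataset values, and the search bounds are not reproduced here.

```python
import random

def cocomo_effort(kloc, a, b):
    """Basic COCOMO: effort (person-months) = a * KLOC^b."""
    return a * kloc ** b

def mae(params, projects):
    """Mean absolute error of the model over (kloc, actual_effort) pairs."""
    a, b = params
    return sum(abs(cocomo_effort(k, a, b) - e) for k, e in projects) / len(projects)

def tune_random_search(projects, iters=2000, seed=0):
    """Placeholder optimizer: random search over (a, b). The paper uses
    antlion optimization (ALO); any metaheuristic can minimize this MAE."""
    rng = random.Random(seed)
    best = (2.4, 1.05)  # standard basic-COCOMO organic-mode coefficients
    best_err = mae(best, projects)
    for _ in range(iters):
        cand = (rng.uniform(0.1, 10.0), rng.uniform(0.5, 1.5))
        err = mae(cand, projects)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

    Swapping `tune_random_search` for ALO, FA, GA, or PSO changes only how candidate (a, b) pairs are proposed; the fitness function stays the same, which is why the paper can compare so many optimizers on one footing.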

    The usage of ISBSG data fields in software effort estimation: A systematic mapping study

    The International Software Benchmarking Standards Group (ISBSG) maintains a repository of data about completed software projects. A common use of the ISBSG dataset is to investigate models to estimate a software project's size, effort, duration, and cost. The aim of this paper is to determine which variables in the ISBSG dataset have been used to build effort estimation models in software engineering, and to what extent. For that purpose, a systematic mapping study was applied to 107 research papers, obtained after a filtering process, that were published from 2000 until the end of 2013 and that listed the independent variables used in their effort estimation models. The usage of ISBSG variables for filtering, as dependent variables, and as independent variables is described. The 20 variables (out of 71) most used as independent variables for effort estimation are identified and analysed in detail, with reference to the papers and types of estimation methods that used them. We propose guidelines that can help researchers make informed decisions about which ISBSG variables to select for their effort estimation models.
    González-Ladrón-De-Guevara, F.; Fernández-Diego, M.; Lokan, C. (2016). The usage of ISBSG data fields in software effort estimation: A systematic mapping study. Journal of Systems and Software, 113:188-215. doi:10.1016/j.jss.2015.11.040

    Software Fault Prediction using Bio-Inspired Algorithms to Select the Features to be employed: An Empirical Study

    In the recent past, bio-inspired algorithms have received significant attention in software fault prediction, where they can be used to select the most relevant features of a dataset so as to increase the prediction accuracy of estimation techniques. The earliest and most widely investigated such algorithms are the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). More recently, researchers have analysed other algorithms inspired by nature. In this paper, we take GA and PSO as baseline/benchmark algorithms and evaluate their performance against seven recently employed bio-inspired algorithms and metaheuristics, namely Ant Colony Optimization, Bat Search, Bee Search, Cuckoo Search, Harmony Search, Multi-Objective Evolutionary Algorithm, and Tabu Search, for feature selection in software fault prediction. We present experiments with seven open-source datasets and three estimation techniques: Random Forest, Support Vector Regression, and Linear Regression. We find that the recently introduced algorithms do not always outperform the earlier ones.
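    Feature selection with a metaheuristic like GA typically encodes a feature subset as a binary mask and evolves masks against a fitness function such as cross-validated model accuracy. This is a minimal GA sketch of that setup; the selection scheme, operators, and the `score` callable are illustrative assumptions, not the papers' exact experimental configuration.

```python
import random

def ga_feature_select(score, n_features, pop=30, gens=40, p_mut=0.05, seed=0):
    """Minimal genetic algorithm for feature selection. A chromosome is a
    binary mask over features; score(mask) returns a fitness to maximize
    (e.g. cross-validated accuracy of a fault-prediction model)."""
    rng = random.Random(seed)
    def rand_mask():
        return [rng.randint(0, 1) for _ in range(n_features)]
    popl = [rand_mask() for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(popl, key=score, reverse=True)
        elite = ranked[: pop // 2]                      # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g
                     for g in child]                    # bit-flip mutation
            children.append(child)
        popl = elite + children
    return max(popl, key=score)
```

    PSO, Cuckoo Search, and the other algorithms compared in the paper plug into the same interface: each proposes candidate masks, and only the way new candidates are generated differs.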