
    A method for parameter calibration and relevance estimation in evolutionary algorithms


    Disease Outbreaks: Tuning Predictive Machine Learning

    Climate change is expected to exacerbate diarrhoea outbreaks in developing nations, where diarrhoea is a leading cause of morbidity and mortality. Predictive models able to capture the complex relationships between climate factors and diarrhoea may be effective for diarrhoea outbreak control. Various supervised Machine Learning (ML) algorithms and Deep Learning (DL) methods have been used to develop predictive models for various diseases. Despite their advances across a range of healthcare applications, overall task performance still depends largely on the available training data and parameter settings, which is a significant challenge for most predictive machine learning methods. This study investigates the impact of Relevance Estimation and Value Calibration (REVAC), an evolutionary parameter optimization method, on the predictive performance of various ML and DL methods applied to a range of real-world and synthetic data-sets (diarrhoea and climate based) for daily diarrhoea outbreak prediction in a regional case study (South African provinces). Preliminary results indicate that REVAC is better suited to the DL models regardless of the data-set used for making predictions.
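    REVAC tunes parameters by maintaining a population of parameter vectors, repeatedly replacing the worst vector with a child built from the best vectors by multi-parent crossover and interval mutation, so that each parameter's final marginal distribution indicates both a good value and that parameter's relevance. The sketch below is a minimal, simplified illustration in that spirit, not the authors' implementation; the toy utility function, bounds, and all names are hypothetical stand-ins for a real "run the learner and score it" evaluation.

    ```python
    import random

    def revac_tune(utility, bounds, pop_size=50, n_best=25, generations=300, rng=None):
        """REVAC-style sketch: refine a population of parameter vectors by
        replacing the worst vector with a child built from the best ones."""
        rng = rng or random.Random()
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        fitness = [utility(v) for v in pop]
        for _ in range(generations):
            order = sorted(range(pop_size), key=lambda i: fitness[i], reverse=True)
            best = [pop[i] for i in order[:n_best]]
            child = []
            for d, (lo, hi) in enumerate(bounds):
                # Multi-parent crossover: pick this parameter's value
                # from a random vector among the best
                vals = sorted(v[d] for v in best)
                pick = rng.randrange(n_best)
                # Interval mutation: resample uniformly between the
                # chosen value's neighbours (bounds at the edges)
                low = vals[pick - 1] if pick > 0 else lo
                high = vals[pick + 1] if pick < n_best - 1 else hi
                child.append(rng.uniform(low, high))
            worst = order[-1]
            pop[worst], fitness[worst] = child, utility(child)
        # The spread of each parameter in the final population reflects
        # its relevance: irrelevant parameters stay widely distributed
        return pop, fitness

    # Hypothetical utility with a peak at mutation_rate=0.1, tournament_size=5
    util = lambda v: -((v[0] - 0.1) ** 2 + 0.01 * (v[1] - 5) ** 2)
    pop, fit = revac_tune(util, bounds=[(0.0, 1.0), (1.0, 20.0)], rng=random.Random(1))
    best_vec = max(zip(fit, pop))[1]
    ```

    In practice the utility call is the expensive part (each evaluation is one or more full training runs of the ML or DL model), which is why the number of evaluations REVAC needs matters as much as the quality of the values it finds.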

    Efficient learning in ABC algorithms

    Approximate Bayesian Computation (ABC) has been successfully used in population genetics to bypass the calculation of the likelihood. These methods provide accurate estimates of the posterior distribution by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, computation times for ensuring a suitable approximation quality of the posterior distribution are still high. To alleviate the computational burden, we propose an adaptive, sequential algorithm that runs faster than other ABC algorithms but maintains the accuracy of the approximation. This proposal relies on the sequential Monte Carlo sampler of Del Moral et al. (2012) but is calibrated to reduce the number of simulations from the model. The paper concludes with numerical experiments on a toy example and on a population genetic study of Apis mellifera, where our algorithm was shown to be faster than traditional ABC schemes.
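    The paper's adaptive sequential Monte Carlo sampler builds on the basic ABC rejection scheme, which the sketch below illustrates on a toy Gaussian model: draw a parameter from the prior, simulate a dataset, and accept the parameter when a summary-statistic distance to the observed data falls below a tolerance. This is a hedged baseline sketch only (the model, summary statistic, and tolerance are hypothetical), not the authors' SMC algorithm.

    ```python
    import random
    import statistics

    def simulate(theta, n=100, rng=random):
        # Toy model: normal observations with unknown mean theta, unit variance
        return [rng.gauss(theta, 1.0) for _ in range(n)]

    def distance(data_a, data_b):
        # Compare datasets through a summary statistic (here, the sample mean)
        return abs(statistics.mean(data_a) - statistics.mean(data_b))

    def abc_rejection(observed, prior_sample, eps, n_accept, rng):
        """Basic ABC: keep prior draws whose simulated data lie within eps
        of the observed data, so the likelihood is never evaluated."""
        accepted = []
        while len(accepted) < n_accept:
            theta = prior_sample(rng)
            simulated = simulate(theta, n=len(observed), rng=rng)
            if distance(simulated, observed) < eps:
                accepted.append(theta)
        return accepted

    rng = random.Random(0)
    observed = simulate(2.0, rng=rng)  # "true" mean is 2.0
    posterior = abc_rejection(observed, lambda r: r.uniform(-5, 5),
                              eps=0.2, n_accept=200, rng=rng)
    est = statistics.mean(posterior)
    ```

    The inefficiency this exposes — most prior draws are simulated and then discarded — is exactly what sequential schemes address, by propagating a population of accepted particles through a decreasing sequence of tolerances instead of sampling the prior from scratch.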

    Comparing parameter tuning methods for evolutionary algorithms

    Abstract — Tuning the parameters of an evolutionary algorithm (EA) to a given problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a non-trivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are widely used in evolutionary computing. This paper is meant to be a stepping stone towards a better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper is concluded by suggestions for future research, hopefully inspiring fellow researchers to further work. Index Terms — evolutionary algorithms, parameter tuning

    Costs and benefits of tuning parameters of evolutionary algorithms

    Abstract. We present an empirical study on the impact of different design choices on the performance of an evolutionary algorithm (EA). Four EA components are considered—parent selection, survivor selection, recombination and mutation—and for each component we study the impact of choosing the right operator and of tuning its free parameter(s). We tune 120 different combinations of EA operators to 4 different classes of fitness landscapes and measure the cost of tuning. We find that components differ greatly in importance. Typically the choice of operator for parent selection has the greatest impact, and mutation needs the most tuning. Regarding individual EAs, however, the impact of design choices for one component depends on the choices for other components, as well as on the available amount of resources for tuning.
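    As a concrete illustration of tuning a single free parameter, and of the cost it incurs, the hedged sketch below tunes the mutation rate of a toy steady-state EA on OneMax by brute force: every candidate rate must be paid for in several full EA runs. The problem, candidate rates, and budget are illustrative choices, not taken from the paper.

    ```python
    import random

    def onemax_ea(mutation_rate, n_bits=50, pop_size=20, generations=300, rng=None):
        """Steady-state EA on OneMax (maximize the number of 1-bits);
        returns the best fitness reached."""
        rng = rng or random.Random()
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        fit = [sum(ind) for ind in pop]
        for _ in range(generations):
            # Parent selection: binary tournament
            a, b = rng.randrange(pop_size), rng.randrange(pop_size)
            parent = pop[a] if fit[a] >= fit[b] else pop[b]
            # Variation: bit-flip mutation at the tunable rate
            child = [bit ^ (rng.random() < mutation_rate) for bit in parent]
            # Survivor selection: child replaces the worst if no worse
            worst = min(range(pop_size), key=fit.__getitem__)
            if sum(child) >= fit[worst]:
                pop[worst], fit[worst] = child, sum(child)
        return max(fit)

    # Tuning cost: 4 candidate rates x 5 runs each = 20 full EA runs
    rates = [0.005, 0.02, 0.1, 0.4]
    seed_rng = random.Random(2)
    scores = {r: sum(onemax_ea(r, rng=random.Random(seed_rng.randrange(10**6)))
                     for _ in range(5)) / 5
              for r in rates}
    best_rate = max(scores, key=scores.get)
    ```

    Even this tiny example shows the paper's trade-off in miniature: the benefit (a rate near 1/n_bits clearly beats an extreme one) is only available after paying the cost of many extra EA runs, and the best setting for one operator can shift when the other components change.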