
    Niching an estimation-of-distribution algorithm by hierarchical Gaussian mixture learning

    Estimation-of-Distribution Algorithms (EDAs) have been applied with considerable success to real-valued optimization problems, especially in the case of Black-Box Optimization (BBO). Generally, the performance of an EDA depends on the match between its driving probability distribution and the landscape of the problem being solved. Because most well-known EDAs, including CMA-ES, NES, and AMaLGaM, use a uni-modal search distribution, they run a high risk of getting trapped in local optima when a problem is multi-modal with a (moderate) number of relatively comparable modes. This risk could potentially be mitigated using niching methods that define multiple regions of interest in which separate search distributions govern sub-populations. However, a key question is how to determine a suitable number of niches, especially in BBO. In this paper, we present a novel, adaptive niching approach that determines the niches through hierarchical clustering based on the correlation between the probability densities and fitness values of solutions. We test the performance of a combination of this niching approach with AMaLGaM on both new and well-known niching benchmark problems and find that the new approach properly identifies multiple landscape modes, leading to much better performance on multi-modal problems than with a non-niched, uni-modal EDA.
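    The density-fitness-correlation idea can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a population array pop of shape (n, d) with a matching fitness vector, fits one Gaussian per candidate niche, and keeps the cluster count whose per-niche log-densities correlate best with fitness; the paper's actual hierarchical scheme is more refined.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import multivariate_normal, pearsonr

def choose_niches(pop, fitness, max_niches=8):
    """Pick a clustering whose per-niche Gaussian densities correlate with fitness."""
    tree = linkage(pop, method="ward")          # hierarchical clustering of solutions
    best_k, best_corr = 1, -np.inf
    best_labels = np.ones(len(pop), dtype=int)
    for k in range(1, max_niches + 1):
        labels = fcluster(tree, t=k, criterion="maxclust")
        corrs = []
        for c in np.unique(labels):
            members = pop[labels == c]
            if len(members) <= pop.shape[1] + 1:  # too few points to fit a Gaussian
                continue
            mvn = multivariate_normal(members.mean(0), np.cov(members.T),
                                      allow_singular=True)
            r, _ = pearsonr(mvn.logpdf(members), fitness[labels == c])
            corrs.append(r)
        if corrs and np.mean(corrs) > best_corr:
            best_k, best_corr, best_labels = k, np.mean(corrs), labels
    return best_k, best_labels
```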

    Discovering the Elite Hypervolume by Leveraging Interspecies Correlation

    Evolution has produced an astonishing diversity of species, each filling a different niche. Algorithms like MAP-Elites mimic this divergent evolutionary process to find a set of behaviorally diverse but high-performing solutions, called the elites. Our key insight is that species in nature often share a surprisingly large part of their genome, in spite of occupying very different niches; similarly, the elites are likely to be concentrated in a specific "elite hypervolume" whose shape is defined by their common features. In this paper, we first introduce the elite hypervolume concept and propose two metrics to characterize it: the genotypic spread and the genotypic similarity. We then introduce a new variation operator, called "directional variation", that exploits interspecies (or inter-elite) correlations to accelerate the MAP-Elites algorithm. We demonstrate the effectiveness of this operator on three problems (a toy function, a redundant robotic arm, and a hexapod robot).
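    The directional variation idea can be sketched as follows; the step sizes are illustrative defaults, not values from the paper. Two elites x_i and x_j are combined with isotropic Gaussian noise plus noise along the line joining them, so variation is stretched along directions in which elites are correlated.

```python
import numpy as np

rng = np.random.default_rng()

def directional_variation(x_i, x_j, sigma_iso=0.01, sigma_line=0.2):
    """Perturb elite x_i isotropically, plus a random step along the
    direction towards another elite x_j (exploiting inter-elite correlation)."""
    iso = sigma_iso * rng.standard_normal(x_i.shape)          # isotropic noise
    line = sigma_line * rng.standard_normal() * (x_j - x_i)   # directional component
    return x_i + iso + line
```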

    Learning to Generate Genotypes with Neural Networks

    Neural networks and evolutionary computation have a rich, intertwined history. They most commonly appear together when an evolutionary algorithm optimises the parameters and topology of a neural network for reinforcement learning problems, or when a neural network is applied as a surrogate fitness function to aid the evolutionary optimisation of expensive fitness functions. In this paper we take a different approach, asking whether a neural network can be used to provide a mutation distribution for an evolutionary algorithm, and what advantages this approach may offer. Two modern neural network models are investigated: a Denoising Autoencoder modified to produce stochastic outputs, and the Neural Autoregressive Distribution Estimator. Results show that the neural network approach to learning genotypes is able to solve many difficult discrete problems, such as MaxSat and HIFF, and regularly outperforms other evolutionary techniques.
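    As a rough illustration of the denoising-autoencoder idea (a toy single-hidden-layer model, not the paper's architecture): the network is trained to reconstruct selected genotypes from corrupted copies, and offspring are produced by corrupting parents and sampling bits from the reconstruction probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

class TinyDAE:
    def __init__(self, n_bits, n_hidden=32, lr=0.1, corruption=0.1):
        self.W1 = rng.normal(0, 0.1, (n_bits, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_bits))
        self.b2 = np.zeros(n_bits)
        self.lr, self.corruption = lr, corruption

    def _corrupt(self, X):
        flip = rng.random(X.shape) < self.corruption
        return np.where(flip, 1 - X, X)            # random bit flips

    def fit(self, X, epochs=50):
        for _ in range(epochs):
            Xc = self._corrupt(X)
            H = sigmoid(Xc @ self.W1 + self.b1)
            P = sigmoid(H @ self.W2 + self.b2)     # reconstruction probabilities
            dZ2 = (P - X) / len(X)                 # cross-entropy gradient
            dZ1 = (dZ2 @ self.W2.T) * H * (1 - H)
            self.W2 -= self.lr * H.T @ dZ2;  self.b2 -= self.lr * dZ2.sum(0)
            self.W1 -= self.lr * Xc.T @ dZ1; self.b1 -= self.lr * dZ1.sum(0)

    def mutate(self, X):
        """Offspring: corrupt parents, denoise, then sample bits stochastically."""
        H = sigmoid(self._corrupt(X) @ self.W1 + self.b1)
        P = sigmoid(H @ self.W2 + self.b2)
        return (rng.random(P.shape) < P).astype(int)
```

    In an evolutionary loop, fit would be called on the selected individuals each generation and mutate would generate the next population.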

    MATEDA: A suite of EDA programs in Matlab

    This paper describes MATEDA-2.0, a suite of Matlab programs for estimation-of-distribution algorithms. The package allows the optimization of single- and multi-objective problems with estimation-of-distribution algorithms (EDAs) based on undirected graphical models and Bayesian networks. The implementation is designed to let the user incorporate different combinations of selection, learning, sampling, and local-search procedures. Further methods allow the analysis of the structures learned by the probabilistic models, the visualization of particular features of these structures, and the use of the probabilistic models as fitness-modeling tools.
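    MATEDA itself is written in Matlab; the modular loop it builds on can be sketched in a few lines of Python, with a simple univariate marginal model (PBIL/UMDA-style) standing in for the package's Markov-network and Bayesian-network learners.

```python
import numpy as np

def eda(fitness, n_bits, pop_size=100, n_sel=50, generations=100, rng=None):
    rng = rng or np.random.default_rng()
    probs = np.full(n_bits, 0.5)                    # univariate marginal model
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < probs).astype(int)  # sampling
        scores = np.apply_along_axis(fitness, 1, pop)
        elite = pop[np.argsort(scores)[-n_sel:]]    # truncation selection
        probs = elite.mean(axis=0).clip(0.05, 0.95) # model learning
    return pop[np.argmax(scores)]

best = eda(fitness=np.sum, n_bits=50)               # OneMax as a toy objective
```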

    Model-based evolutionary algorithms


    Stochastic and deterministic algorithms for continuous black-box optimization

    Continuous optimization is never easy: an exact solution is a luxury, and the underlying theory is not always analytical and elegant. In practice, continuous optimization is essentially about efficiency: how can a solution of the same quality be obtained using as few resources (e.g., CPU time or memory) as possible? In this thesis, the number of function evaluations is considered the most important resource to save. To this end, various approaches have been implemented and applied successfully. One research stream focuses on the so-called stochastic variation (mutation) operator, which conducts a (local) exploration of the search space. The efficiency of such operators has been investigated closely, showing that a good stochastic variation operator should generate good coverage of the local neighbourhood around the current search solution. This thesis contributes to this issue by formulating a novel stochastic variation operator that yields good space coverage.
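    The thesis's operator is not reproduced here; as a generic example of a coverage-improving stochastic variation, mirrored sampling reuses each Gaussian draw with both signs, so offspring spread symmetrically around the parent instead of clumping on one side.

```python
import numpy as np

rng = np.random.default_rng()

def mirrored_mutations(x, sigma, n_pairs=5):
    """Each draw z is used twice, as x + z and x - z, improving the
    symmetry of local neighbourhood coverage around x."""
    Z = sigma * rng.standard_normal((n_pairs, x.size))
    return np.vstack([x + Z, x - Z])
```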

    Uncertainty evaluation of reservoir simulation models using particle swarms and hierarchical clustering

    History matching production data in finite-difference reservoir simulation models has been, and always will be, a challenge for the industry. The principal hurdles are finding a match in the first place and, more importantly, a set of matches that captures the uncertainty range of the simulation model, and doing so in as short a time as possible, since the bottleneck in this process is the time taken to run the model. This study looks at the implementation of Particle Swarm Optimisation (PSO) in history matching finite-difference simulation models. Particle Swarms are a class of evolutionary algorithms that have shown much promise over the last decade. The method draws parallels from the social interaction of swarms of bees, flocks of birds, and shoals of fish. Essentially, a swarm of agents is allowed to search the solution hyperspace, with each individual keeping in memory its historical best position, and the optimisation improves iteratively through the emergent interaction of the swarm. An intrinsic feature of PSO is its local search capability. A sequential-niching variation of PSO, viz. Flexi-PSO, has been developed that enhances the exploration and exploitation of the hyperspace and is capable of finding multiple minima. This new variation has been applied to history matching synthetic reservoir simulation models to find multiple distinct history matches, in an attempt to capture the uncertainty range. Hierarchical clustering is then used to post-process the history-match runs and reduce the size of the ensemble carried forward for prediction. The success of the uncertainty-modelling exercise is then assessed by checking whether the production-profile forecasts generated by the ensemble cover the truth case.
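    For reference, a baseline (global-best) PSO update step looks as follows; Flexi-PSO's sequential niching is layered on top of a scheme like this, and the inertia and acceleration constants here are common textbook defaults, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng()

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """x, v: (n_particles, dim); p_best: personal bests; g_best: swarm best."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v, v
```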