
    Denoising Autoencoders for fast Combinatorial Black Box Optimization

    Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AE) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and of RBM-EDA, another EDA based on a generative neural network that has proven competitive with BOA. For the considered problem instances, DAE-EDA is considerably faster than BOA and RBM-EDA, sometimes by orders of magnitude. The number of fitness evaluations is higher than for BOA, but competitive with RBM-EDA. These results show that DAEs can be useful tools for problems with low but non-negligible fitness evaluation costs.
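
    A minimal sketch of the DAE-EDA loop described above, assuming a binary encoding, a single-hidden-layer denoising autoencoder, and OneMax as a stand-in fitness function (the paper uses harder combinatorial benchmarks, and its network architecture and sampling details are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class DAE:
        # Single-hidden-layer denoising autoencoder for binary strings (illustrative only).
        def __init__(self, n_vis, n_hid, lr=0.1):
            self.W1 = rng.normal(0.0, 0.01, (n_vis, n_hid))
            self.W2 = rng.normal(0.0, 0.01, (n_hid, n_vis))
            self.b1 = np.zeros(n_hid)
            self.b2 = np.zeros(n_vis)
            self.lr = lr

        def reconstruct(self, X):
            h = sigmoid(X @ self.W1 + self.b1)
            return sigmoid(h @ self.W2 + self.b2)

        def train(self, X, epochs=50, noise=0.1):
            for _ in range(epochs):
                X_tilde = np.where(rng.random(X.shape) < noise, 1 - X, X)  # corrupt: flip bits
                h = sigmoid(X_tilde @ self.W1 + self.b1)
                r = sigmoid(h @ self.W2 + self.b2)
                d_out = r - X                                              # cross-entropy gradient
                d_hid = (d_out @ self.W2.T) * h * (1 - h)
                self.W2 -= self.lr * h.T @ d_out / len(X)
                self.b2 -= self.lr * d_out.mean(axis=0)
                self.W1 -= self.lr * X_tilde.T @ d_hid / len(X)
                self.b1 -= self.lr * d_hid.mean(axis=0)

    def dae_eda(fitness, n_vars, pop_size=200, n_gen=50, noise=0.1):
        pop = rng.integers(0, 2, (pop_size, n_vars))
        for _ in range(n_gen):
            f = fitness(pop)
            sel = pop[np.argsort(f)[-pop_size // 2:]]          # truncation selection
            dae = DAE(n_vars, n_vars)
            dae.train(sel, noise=noise)                        # model the selected individuals
            parents = sel[rng.integers(0, len(sel), pop_size)]
            corrupted = np.where(rng.random(parents.shape) < noise, 1 - parents, parents)
            probs = dae.reconstruct(corrupted)                 # sample by corrupting and denoising
            pop = (rng.random(probs.shape) < probs).astype(int)
        f = fitness(pop)
        return pop[np.argmax(f)], f.max()

    best, best_f = dae_eda(lambda x: x.sum(axis=1), n_vars=30)  # OneMax stand-in fitness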

    Analyzing limits of effectiveness in different implementations of estimation of distribution algorithms

    Conducting research in order to determine the range of problems in which a search algorithm is effective is fundamental to understanding the algorithm and to continuing the development of new techniques. In this work, by progressively increasing the degree of interaction in the problem, we study to what extent different EDA implementations are able to reach the optimal solutions. Specifically, we deal with additively decomposable functions whose complexity essentially depends on the number of sub-functions added. With the aim of analyzing the limits of this type of algorithm, we take into account three common EDA implementations that only differ in the complexity of the probabilistic model. The results show that the ability of EDAs to solve problems quickly vanishes after a certain degree of interaction, with a phase-transition effect. This collapse of performance is closely related to the computational restrictions that this type of algorithm has to impose in the learning step in order to be applied efficiently. Moreover, we show how the use of unrestricted Bayesian networks to solve the problems rapidly becomes inefficient as the number of sub-functions increases. The results suggest that this type of model might not be the most appropriate tool for the development of new techniques that solve problems with an increasing degree of interaction. In general, the experiments proposed in the present work allow us to identify patterns of behavior in EDAs and provide new ideas for the analysis and development of this type of algorithm.
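
    As a concrete illustration of the additively decomposable functions studied above, the following sketch sums deceptive trap sub-functions defined over overlapping subsets of binary variables; the concrete sub-functions, subset structure, and degrees of interaction used in the paper may differ, so this is only a minimal example:

    import numpy as np

    def trap5(block):
        # Order-5 deceptive trap, a common example of a hard sub-function.
        u = sum(block)
        return 5 if u == 5 else 4 - u

    def adf(x, subsets, subfunctions):
        # Additively decomposable function: a sum of sub-functions, each defined
        # over a (possibly overlapping) subset of the decision variables.
        return sum(f(x[list(s)]) for s, f in zip(subsets, subfunctions))

    # Example: 15 binary variables, sub-functions over overlapping windows of 5 variables
    # (stride 3, so consecutive sub-functions share 2 variables).
    n = 15
    subsets = [tuple(range(i, i + 5)) for i in range(0, n - 4, 3)]
    subfunctions = [trap5] * len(subsets)

    x = np.random.default_rng(1).integers(0, 2, n)
    print(adf(x, subsets, subfunctions))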

    A quantitative analysis of estimation of distribution algorithms based on Bayesian networks

    The successful application of estimation of distribution algorithms (EDAs) to solve different kinds of problems has reinforced their candidature as promising black-box optimization tools. However, their internal behavior is still not completely understood, and it is therefore necessary to work in this direction in order to advance their development. This paper presents a new methodology of analysis which provides new information about the behavior of EDAs by quantitatively analyzing the probabilistic models learned during the search. We particularly focus on calculating the probabilities of the optimal solutions, the most probable solution given by the model, and the best individual of the population at each step of the algorithm. We carry out the analysis by optimizing functions of a different nature, such as Trap5, two variants of the Ising spin glass, and Max-SAT. By using different structures in the probabilistic models, we also analyze the influence of structural model accuracy on the quantitative behavior of EDAs. In addition, the objective function values of our analyzed key solutions are contrasted with their probability values in order to study the connection between the fitness function and the probabilistic models. The results provide information not only about the behavior of EDAs, but also about the quality of the optimization process and the parameter setup, the relationship between the probabilistic model and the fitness function, and even about the problem itself. Furthermore, the results allow us to discover common patterns of behavior in EDAs and propose new ideas for the development of this type of algorithm.
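
    A minimal sketch of the central quantity in this kind of analysis, the probability that the learned model assigns to a key solution (the optimum, the model's most probable solution, or the best individual of the population); here a univariate factorization stands in for the Bayesian networks used in the paper, for which the same quantity would instead be a product of conditional probabilities P(x_i | pa_i):

    import numpy as np

    def learn_univariate_model(selected):
        # Univariate marginal model: one Bernoulli parameter per variable,
        # estimated from the selected individuals (a stand-in for a Bayesian network).
        p = selected.mean(axis=0)
        return np.clip(p, 1e-6, 1 - 1e-6)            # avoid zero probabilities

    def log_prob(x, p):
        # log P(x) = sum_i [ x_i * log p_i + (1 - x_i) * log(1 - p_i) ]
        return float(np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)))

    # Example: track the probability of a hypothetical optimum (all ones) at one step.
    rng = np.random.default_rng(0)
    selected = rng.integers(0, 2, (100, 20))         # stand-in for the selected population
    p = learn_univariate_model(selected)
    optimum = np.ones(20, dtype=int)
    print("log P(optimum) =", log_prob(optimum, p))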

    Sub-structural Niching in Estimation of Distribution Algorithms

    We propose a sub-structural niching method that fully exploits the problem decomposition capability of linkage-learning methods such as estimation of distribution algorithms and concentrates on maintaining diversity at the sub-structural level. The proposed method consists of three key components: (1) problem decomposition and sub-structure identification, (2) sub-structure fitness estimation, and (3) sub-structural niche preservation. The sub-structural niching method is compared to restricted tournament selection (RTS), a niching method used in the hierarchical Bayesian optimization algorithm, with special emphasis on the sustained preservation of multiple global solutions of a class of boundedly difficult, additively separable multimodal problems. The results show that sub-structural niching successfully maintains multiple global optima over a large number of generations and does so with significantly smaller populations than RTS. Additionally, the market share of each niche is much closer to the expected level in sub-structural niching than in RTS.
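
    For reference, a minimal sketch of restricted tournament selection (RTS), the baseline niching method mentioned above: each offspring competes only against the most similar individual (by Hamming distance) within a randomly drawn window of the population; the window size and fitness function below are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)

    def restricted_tournament_selection(pop, fit, offspring, off_fit, window_size=20):
        # Each offspring replaces its nearest neighbour (Hamming distance) among
        # `window_size` randomly chosen individuals, but only if it is fitter.
        pop, fit = pop.copy(), fit.copy()
        for child, cf in zip(offspring, off_fit):
            window = rng.choice(len(pop), size=window_size, replace=False)
            dists = np.sum(pop[window] != child, axis=1)   # Hamming distances
            nearest = window[np.argmin(dists)]
            if cf > fit[nearest]:
                pop[nearest] = child
                fit[nearest] = cf
        return pop, fit

    # Example with random binary individuals and a OneMax-style stand-in fitness.
    pop = rng.integers(0, 2, (100, 30))
    fit = pop.sum(axis=1).astype(float)
    off = rng.integers(0, 2, (50, 30))
    pop, fit = restricted_tournament_selection(pop, fit, off, off.sum(axis=1).astype(float))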