    A hybrid swarm-based algorithm for single-objective optimization problems involving high-cost analyses

    Full text link
    In many technical fields, single-objective optimization in continuous domains involves expensive numerical simulations. In this context, an improvement of the Artificial Bee Colony (ABC) algorithm, called the Artificial super-Bee enhanced Colony (AsBeC), is presented. AsBeC is designed to provide fast convergence, high solution accuracy and robust performance over a wide range of problems. It implements enhancements of the ABC structure and hybridizations with interpolation strategies, the latter inspired by the quadratic trust-region approach for local investigation and by an efficient global optimizer for separable problems. Each modification and their combined effects are studied with appropriate metrics on a numerical benchmark, which is also used to compare AsBeC with some effective ABC variants and other derivative-free algorithms. In addition, the presented algorithm is validated on two recent benchmarks adopted for competitions at international conferences. Results show remarkable competitiveness and robustness for AsBeC. (Comment: 19 pages, 4 figures, Springer Swarm Intelligence)
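
    To make the baseline concrete, here is a minimal sketch of the plain ABC loop that AsBeC builds on (employed, onlooker and scout phases only). All names and parameter values are illustrative; the paper's super-bee enhancements and interpolation hybridizations are not reproduced here.

        import random

        def abc_minimize(f, bounds, n_sources=20, limit=50, max_iter=1000):
            """Plain ABC skeleton (illustrative, not the AsBeC implementation).
            Assumes f >= 0 so the simple 1/(1+cost) fitness transform is valid."""
            dim = len(bounds)

            def rand_point():
                return [random.uniform(lo, hi) for lo, hi in bounds]

            sources = [rand_point() for _ in range(n_sources)]
            costs = [f(x) for x in sources]
            trials = [0] * n_sources

            def try_neighbour(i):
                # Perturb one coordinate of source i against a random partner.
                k = random.choice([s for s in range(n_sources) if s != i])
                j = random.randrange(dim)
                cand = sources[i][:]
                cand[j] += random.uniform(-1, 1) * (sources[i][j] - sources[k][j])
                lo, hi = bounds[j]
                cand[j] = min(max(cand[j], lo), hi)
                c = f(cand)
                if c < costs[i]:                  # greedy one-to-one replacement
                    sources[i], costs[i], trials[i] = cand, c, 0
                else:
                    trials[i] += 1

            for _ in range(max_iter):
                for i in range(n_sources):        # employed bee phase
                    try_neighbour(i)
                fit = [1.0 / (1.0 + c) for c in costs]
                for _ in range(n_sources):        # onlookers: fitness-proportional
                    try_neighbour(random.choices(range(n_sources), weights=fit)[0])
                for i in range(n_sources):        # scouts reset exhausted sources
                    if trials[i] > limit:
                        sources[i] = rand_point()
                        costs[i] = f(sources[i])
                        trials[i] = 0

            best = min(range(n_sources), key=costs.__getitem__)
            return sources[best], costs[best]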

    Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations

    Full text link
    Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer optimization framework, on non-separable functions. First, we reveal empirical reasons why decomposition-based methods are or are not preferred in practice on some non-separable large-scale problems, which have not been clearly pointed out in many previous CC papers. Then, we formalize CC as a continuous game model via simplification, without losing its essential property. Unlike previous evolutionary game theory for CC, our new model provides a much simpler but useful viewpoint from which to analyze its convergence, since only the pure Nash equilibrium concept is needed and more general fitness landscapes can be explicitly considered. Based on the convergence analyses, we propose a hierarchical decomposition strategy for better generalization, since for any decomposition there is a risk of getting trapped in a suboptimal Nash equilibrium. Finally, we use distributed computing to accelerate it under the multi-level learning framework, which combines the fine-tuning ability of decomposition with the invariance property of CMA-ES. Experiments on a set of high-dimensional functions validate both its search performance and its scalability (w.r.t. CPU cores) on a clustered computing platform with 400 CPU cores.
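
    As a reading aid, the following is a minimal sketch of the round-robin CC cycle the paper analyzes: the variables are split into blocks, and each block is optimized in turn against a shared context vector while the others stay fixed. The subsolver is left abstract and every name is illustrative.

        def cooperative_coevolution(f, x0, blocks, subsolve, cycles=100):
            """Round-robin cooperative coevolution (illustrative sketch).

            f        : objective on the complete solution vector
            x0       : initial complete solution (the shared context vector)
            blocks   : decomposition as index groups, e.g. [[0, 1], [2, 3]]
            subsolve : subsolve(g, xb) -> improved block values, where g
                       scores a block with the rest of the context fixed
            """
            context = list(x0)
            for _ in range(cycles):
                for block in blocks:
                    def g(xb, block=block):
                        trial = list(context)
                        for idx, val in zip(block, xb):
                            trial[idx] = val
                        return f(trial)
                    new_vals = subsolve(g, [context[i] for i in block])
                    for idx, val in zip(block, new_vals):
                        context[idx] = val
            return context

    On a non-separable f, each block's best response depends on the other blocks, so this cycle converges at best to a pure Nash equilibrium of the induced block game rather than the global optimum, which is exactly the failure mode the paper's game model makes explicit.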

    Estimation of Distribution Algorithms and Minimum Relative Entropy

    Get PDF
    In the field of optimization using probabilistic models of the search space, this thesis identifies and elaborates several advances in which the principles of maximum entropy and minimum relative entropy from information theory are used to estimate a probability distribution. The probability distribution within the search space is represented by a graphical model (factorization, Bayesian network or junction tree). An estimation of distribution algorithm (EDA) is an evolutionary optimization algorithm which uses a graphical model to sample a population within the search space and then estimates a new graphical model from the selected individuals of the population.

    So far, the Factorized Distribution Algorithm (FDA) builds a factorization or Bayesian network from a given additive structure of the objective function using a greedy algorithm which considers only a subset of the variable dependencies; important connections can be lost this way. This thesis presents a heuristic subfunction merge algorithm which is able to consider all dependencies between the variables (as long as the marginal distributions of the model do not become too large). On a 2-D grid structure, this algorithm builds a pentavariate factorization which makes it possible to solve the deceptive grid benchmark problem with a much smaller population size than the conventional factorization. Especially for small population sizes, calculating large marginal distributions from smaller ones using maximum entropy and iterative proportional fitting leads to a further improvement.

    The second topic is the generalization of graphical models to loopy structures. Using the Bethe-Kikuchi approximation, the loopy graphical model (region graph) can learn the Boltzmann distribution of an objective function by a generalized belief propagation (GBP) algorithm. It minimizes the free energy, a notion adopted from statistical physics which is equivalent to the relative entropy to the Boltzmann distribution. Previous attempts to combine the Kikuchi approximation with EDAs have relied on an expensive Gibbs sampling procedure for generating a population from this loopy probabilistic model. In this thesis a combination with a factorization is presented which allows more efficient sampling. The free energy is generalized to incorporate the inverse temperature β. The factorization-building algorithm mentioned above can be employed here, too. The dynamics of GBP are investigated, and the method is applied to Ising spin glass ground state search. Small instances (7 x 7) are solved without difficulty. Larger instances (10 x 10 and 15 x 15) do not converge to the true optimum at large β, but sampling from the factorization can find the optimum within about 1000-10000 sampling attempts, depending on the instance. If GBP does not converge, it can be replaced by a concave-convex procedure which guarantees convergence.

    Third, if no probabilistic structure is given for the objective function, a Bayesian network can be learned to capture the dependencies in the population. The relative entropy between the population-induced distribution and the Bayesian network distribution is equivalent to the log-likelihood of the model. The log-likelihood has been generalized to the BIC/MDL score, which reduces overfitting by penalizing complicated Bayesian network structures. A previous information-theoretic analysis of BIC/MDL in the context of EDAs is continued, and empirical evidence is given that the method is able to learn the correct structure of an objective function, given a sufficiently large population.

    Finally, a way to reduce the search space of EDAs is presented by combining them with a local search heuristic. The Kernighan-Lin hillclimber, known originally from the traveling salesman problem and graph bipartitioning, is generalized to arbitrary binary problems. It can be applied in a stand-alone manner, as an iterative 1+1 search algorithm, or combined with an EDA. On the MAXSAT problem it performs on a similar scale to the specialized SAT solver Walksat. An analysis of the Kernighan-Lin local optima indicates that the combination with an EDA is favorable.

    The thesis shows how evolutionary optimization can be improved using interdisciplinary results from information theory, statistics, probability calculus and statistical physics. The principles of information theory for estimating probability distributions are applicable in many areas. EDAs are a good application because an improved estimation directly affects optimization success.
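
    For orientation, here is a minimal univariate EDA (UMDA) for binary strings: the simplest member of the family that the thesis extends toward factorizations, junction trees and Bayesian networks. All names and parameter values are illustrative.

        import random

        def umda(f, n_bits, pop_size=200, n_selected=50, generations=100):
            """UMDA sketch: sample from independent bit marginals, select,
            re-estimate the marginals. Richer EDAs replace this product
            model with a factorization or Bayesian network."""
            p = [0.5] * n_bits                       # independent bit marginals
            pop = []
            for _ in range(generations):
                pop = [[int(random.random() < p[j]) for j in range(n_bits)]
                       for _ in range(pop_size)]
                pop.sort(key=f, reverse=True)        # maximize f
                elite = pop[:n_selected]
                for j in range(n_bits):
                    freq = sum(ind[j] for ind in elite) / n_selected
                    # Clamp away from 0/1 to retain some exploration.
                    p[j] = min(max(freq, 1.0 / n_bits), 1.0 - 1.0 / n_bits)
            return max(pop, key=f)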

    A review on probabilistic graphical models in evolutionary computation

    Get PDF
    Thanks to their inherent properties, probabilistic graphical models are among the prime candidates for machine learning and decision-making tasks, especially in uncertain domains. Their capabilities (representation, inference and learning), if used effectively, can greatly help in building intelligent systems that act appropriately in different problem domains. Evolutionary computation is one such discipline that has employed probabilistic graphical models to improve the search for optimal solutions to complex problems. This paper shows how probabilistic graphical models have been used in evolutionary algorithms to improve their performance on complex problems. Specifically, we survey probabilistic model-building evolutionary algorithms, called estimation of distribution algorithms, and compare different methods for probabilistic modeling within them.

    A Two-Tier Estimation of Distribution Algorithm for Composite Laminate Optimization

    Get PDF
    http://www.emse.fr/~leriche/mao_paper_2004.pdf

    The paper proposes a new evolutionary algorithm for composite laminate optimization, named the Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distribution algorithms (EDAs), which build a statistical model of promising regions of the design space from sets of good points and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is presented. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The paper demonstrates the efficiency of DDOA on a laminate strength maximization problem where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA, the univariate marginal distribution algorithm. The paper specifically investigates how the exploration/exploitation compromise can be adjusted, and demonstrates that DDOA maintains a high level of exploration without sacrificing exploitation.
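
    One plausible reading of the double-distribution mechanism is sketched below: candidates drawn from the variable marginals are filtered by how well their physics-derived auxiliary variables match draws from the auxiliary distribution, which injects dependencies without an explicit joint model. All function names are hypothetical, and the paper's actual selection rule may differ.

        def ddoa_sampling_step(sample_marginals, sample_aux, to_aux,
                               n_candidates=100, n_keep=20):
            """Hypothetical DDOA-style sampling step (illustrative only).

            sample_marginals() -> design vector drawn from the variable
                                  marginals (e.g. fiber angles)
            sample_aux()       -> target auxiliary vector drawn from the
                                  auxiliary model (e.g. lamination parameters)
            to_aux(x)          -> auxiliary variables implied by design x
                                  (the physics of the problem)
            """
            candidates = [sample_marginals() for _ in range(n_candidates)]
            kept = []
            for _ in range(n_keep):
                target = sample_aux()
                # Keep the candidate whose auxiliary variables best match
                # this draw from the auxiliary distribution.
                best = min(candidates,
                           key=lambda x: sum((a - t) ** 2
                                             for a, t in zip(to_aux(x), target)))
                kept.append(best)
                candidates.remove(best)
            return kept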

    Q-Learning Induced Artificial Bee Colony for Noisy Optimization

    Get PDF
    The paper proposes a novel approach to adaptively selecting the sample size for a trial solution of an evolutionary algorithm when noise of unknown distribution contaminates the objective surface. The sample size of a solution is adapted based on the noisy fitness profile in the local surrounding of the given solution. The fitness estimate and the fitness variance of a sub-population surrounding the given solution are jointly used to signify the degree of noise contamination in its local neighborhood (LN). The adaptation of sample size based on the characteristics of the fitness landscape in the LN of a solution is realized here with temporal difference Q-learning (TDQL). The merit of the present work lies in utilizing the reward-penalty-based reinforcement learning mechanism of TDQL for sample size adaptation, which sidesteps the need to prescribe any specific functional form for the relationship between a solution's sample size requirement and the noisy fitness profile in its LN. Experiments undertaken reveal that the proposed algorithms, realized with the artificial bee colony, significantly outperform existing counterparts and state-of-the-art algorithms.
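
    A minimal sketch of the idea, assuming a discretized noise state and a three-action adjustment scheme; the paper's actual state, action and reward design are not reproduced, and every name here is hypothetical.

        import random

        ACTIONS = {"shrink": -1, "keep": 0, "grow": +1}

        # Q-table over discretized local-noise buckets x actions.
        Q = {s: {a: 0.0 for a in ACTIONS} for s in range(5)}

        def choose_sample_size(state, current_n, n_min=1, n_max=50, eps=0.1):
            """Epsilon-greedy pick of a sample-size adjustment for a solution
            whose LN fitness variance falls into bucket `state`."""
            if random.random() < eps:
                action = random.choice(list(ACTIONS))
            else:
                action = max(Q[state], key=Q[state].get)
            n = min(max(current_n + ACTIONS[action], n_min), n_max)
            return action, n

        def td_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
            """Standard temporal-difference Q-learning update; the reward
            would score whether re-sampling improved the fitness estimate."""
            best_next = max(Q[next_state].values())
            Q[state][action] += alpha * (reward + gamma * best_next
                                         - Q[state][action])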

    From Understanding Genetic Drift to a Smart-Restart Mechanism for Estimation-of-Distribution Algorithms

    Full text link
    Estimation-of-distribution algorithms (EDAs) are optimization algorithms that learn a distribution on the search space from which good solutions can be sampled easily. A key parameter of most EDAs is the sample size (population size). If the population size is too small, the update of the probabilistic model builds on few samples, leading to the undesired effect of genetic drift. Too large population sizes avoid genetic drift, but slow down the process. Building on a recent quantitative analysis of how the population size leads to genetic drift, we design a smart-restart mechanism for EDAs. By stopping runs when the risk of genetic drift is high, it automatically runs the EDA in good parameter regimes. Via a mathematical runtime analysis, we prove a general performance guarantee for this smart-restart scheme. In particular, this shows that in many situations where the optimal (problem-specific) parameter values are known, the restart scheme automatically finds them, leading to asymptotically optimal performance. We also conduct an extensive experimental analysis. On four classic benchmark problems, we clearly observe the critical influence of the population size on the performance, and we find that the smart-restart scheme leads to a performance close to the one obtainable with optimal parameter values. Our results also show that previous theory-based suggestions for the optimal population size can be far from the true optima, leading to a performance clearly inferior to the one obtained via the smart-restart scheme. We also conduct experiments with PBIL (the cross-entropy algorithm) on two combinatorial optimization problems from the literature, the max-cut problem and the bipartition problem. Again, we observe that the smart-restart mechanism finds much better values for the population size than those suggested in the literature, leading to much better performance. (Comment: Accepted for publication in the Journal of Machine Learning Research. Extended version of our GECCO 2020 paper. This article supersedes arXiv:2004.0714)
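
    A minimal sketch of the restart wrapper described here, assuming a generic run_eda(mu, budget) interface; the budget_of rule below is a placeholder for the paper's drift-based stopping criterion, which is not reproduced.

        def smart_restart(run_eda, mu0=10, growth=2, max_restarts=20,
                          budget_of=lambda mu: 100 * mu * mu):
            """Run the EDA with increasing population sizes (illustrative).

            run_eda(mu, budget) -> (best, solved): runs the EDA with
            population size mu for at most `budget` evaluations, stopping
            the run rather than letting genetic drift dominate.
            """
            mu, best = mu0, None
            for _ in range(max_restarts):
                best, solved = run_eda(mu, budget_of(mu))
                if solved:
                    return best, mu
                mu *= growth        # restart in a safer (larger) regime
            return best, mu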