17 research outputs found

    Using Covariance Matrix Adaptation Evolutionary Strategy to boost the search accuracy in hierarchic memetic computations

    Many global optimization problems arising naturally in science and engineering exhibit some form of intrinsic ill-posedness, such as multimodality and insensitivity. Severe ill-posedness precludes the use of standard regularization techniques and necessitates more specialized approaches, usually comprising two separate stages: a global phase, which determines the problem's modality and provides rough approximations of the solutions, and a local phase, which refines these approximations. In this work, we attempt to improve one of the most efficient currently known approaches, the Hierarchic Memetic Strategy (HMS), by incorporating the Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) into its local phase. CMA-ES is a stochastic optimization algorithm that in some sense mimics the behavior of population-based evolutionary algorithms without explicitly evolving the population. This way, it avoids, to an extent, the associated cost of multiple evaluations of the objective function. We compare the performance of the HMS in two configurations of the local phase, CMA-ES and the standard SEA (Simple Evolutionary Algorithm), on relatively simple multimodal benchmark problems and on an engineering problem. The results demonstrate that the HMS with CMA-ES in the local phase requires fewer objective function evaluations to provide the same accuracy, making this approach more efficient than the standard SEA.
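
    Purely as illustration, here is a minimal sketch of the two-stage global/local pattern the abstract describes, with the local phase delegated to CMA-ES via the third-party `cma` package. The benchmark function, sample sizes, and option values are assumptions of this sketch, not the paper's HMS implementation.

```python
import numpy as np
import cma  # third-party package: pip install cma

def rastrigin(x):
    """A standard multimodal benchmark (illustrative stand-in)."""
    x = np.asarray(x)
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

rng = np.random.default_rng(0)
dim, lo, hi = 5, -5.12, 5.12

# Global phase: cheap uniform sampling; keep the few best points as seeds.
samples = rng.uniform(lo, hi, size=(200, dim))
seeds = samples[np.argsort([rastrigin(s) for s in samples])[:3]]

# Local phase: CMA-ES refinement started from each seed.
local_optima = []
for seed in seeds:
    es = cma.CMAEvolutionStrategy(seed, 0.3, {'verbose': -9, 'bounds': [lo, hi]})
    es.optimize(rastrigin)
    local_optima.append((es.result.fbest, es.result.xbest))

for f, x in sorted(local_optima, key=lambda t: t[0]):
    print(f"f = {f:.4g} at x = {np.round(x, 3)}")
```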

    Evolution strategies for robust optimization

    Real-world (black-box) optimization problems often involve various types of uncertainty and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality but fail in practice. Robust optimization is the practice of optimization that actively accounts for uncertainties and/or noise. Evolutionary Algorithms form a class of optimization algorithms that use the principle of evolution to find good solutions to optimization problems. Because uncertainty and noise are indispensable parts of nature, this class of optimization algorithms seems to be a logical choice for robust optimization scenarios. This thesis provides a clear definition of the term robust optimization, a comparison, and practical guidelines on how Evolution Strategies, a subclass of Evolutionary Algorithms for real-parameter optimization problems, should be adapted for such scenarios.
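
    A minimal sketch (not from the thesis) of one common robustness measure in this line of work: the effective fitness of a design x under input uncertainty, estimated by averaging the objective over sampled perturbations of x. Function names and noise parameters here are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Nominal objective (illustrative)."""
    return float(np.sum(np.asarray(x) ** 2))

def effective_fitness(f, x, noise_scale=0.1, n_samples=50, rng=None):
    """Monte-Carlo estimate of E[f(x + delta)] under Gaussian input noise."""
    rng = rng or np.random.default_rng()
    deltas = rng.normal(0.0, noise_scale, size=(n_samples, len(x)))
    return float(np.mean([f(np.asarray(x) + d) for d in deltas]))

x = np.array([0.2, -0.1, 0.05])
print("nominal:", sphere(x))
print("robust :", effective_fitness(sphere, x, noise_scale=0.2,
                                    rng=np.random.default_rng(1)))
```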

    Scalarized Preferences in Multi-objective Optimization

    Multi-objective optimization problems have no solution that is optimal in every objective. The difficulty of such problems lies in finding a compromise solution that satisfies the preferences of the decision maker who implements the compromise. Scalarization, the mapping of the vector of objective values to a real number, identifies a single solution as the global preference optimum in order to solve these problems. However, scalarization methods generate no additional information about other compromise solutions that might change the decision maker's preferences regarding the global optimum. To address this problem, this dissertation provides a theoretical and algorithmic analysis of scalarized preferences. The theoretical analysis consists of the development of an ordering framework that characterizes preferences as problem transformations defining preferred subsets of the Pareto front. Scalarization is represented in this framework as a transformation of the objective set. Furthermore, axioms are proposed that capture desirable properties of scalarization functions, and it is shown under which conditions existing scalarization functions satisfy these axioms. The algorithmic analysis characterizes preferences by the result an optimization algorithm generates. Two new paradigms are identified within this analysis, and for both, algorithms are designed that use scalarized preference information: preference-biased Pareto front approximations spread points over the entire Pareto front but concentrate more points in regions with better scalarization values; multimodal preference optima are points that represent local scalarization optima in objective space. A three-stage algorithm is developed that approximates local scalarization optima, and different methods are evaluated for the individual stages. Two real-world problems are presented that illustrate the usefulness of the two algorithms. The first problem consists of finding schedules for a combined heat and power plant that maximize the electricity and heat produced while minimizing fuel consumption. Preference-biased approximations generate more energy-efficient solutions, among which the decision maker can choose a favoured solution by weighing the conflicts between the three objectives. The second problem concerns creating schedules for appliances in a residential building such that energy costs, carbon dioxide emissions, and thermal discomfort are minimized. It is shown that local scalarization optima represent schedules that offer a good balance between the three objectives. The analysis and experiments presented in this work enable decision makers to make better decisions by applying methods that generate more options consistent with their preferences.
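
    A minimal sketch of scalarization as defined above: mapping an objective vector to a single real number. The weighted sum and the Chebyshev (distance-to-ideal) function are two standard choices; the weights and values below are illustrative, not taken from the dissertation.

```python
import numpy as np

def weighted_sum(fvals, weights):
    """Linear scalarization: weighted sum of objective values."""
    return float(np.dot(weights, fvals))

def chebyshev(fvals, weights, ideal):
    """Weighted max-distance to the ideal point; smaller is better."""
    return float(np.max(weights * (np.asarray(fvals) - np.asarray(ideal))))

f = np.array([0.8, 0.3])   # objective vector of one candidate solution
w = np.array([0.5, 0.5])   # decision maker's preference weights
z = np.array([0.0, 0.0])   # ideal (utopian) point
print(weighted_sum(f, w), chebyshev(f, w, z))
```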

    A Survey of Evolutionary Continuous Dynamic Optimization Over Two Decades: Part B

    Many real-world optimization problems are dynamic. The field of dynamic optimization deals with such problems, where the search space changes over time. In this two-part paper, we present a comprehensive survey of the research in evolutionary dynamic optimization for single-objective unconstrained continuous problems over the last two decades. In Part A of this survey, we propose a new taxonomy for the components of dynamic optimization algorithms, namely convergence detection, change detection, explicit archiving, diversity control, and population division and management. In comparison to the existing taxonomies, the proposed taxonomy covers some additional important components, such as convergence detection and computational resource allocation. Moreover, we significantly expand and improve the classifications of diversity control and multi-population methods, which are under-represented in the existing taxonomies. We then provide detailed technical descriptions and analyses of the different components according to the suggested taxonomy. Part B of this survey provides an in-depth analysis of the most commonly used benchmark problems, performance analysis methods, static optimization algorithms used as the optimization components in dynamic optimization algorithms, and dynamic real-world applications. Finally, several opportunities for future work are pointed out.
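
    A minimal sketch of one taxonomy component named above, change detection: periodically re-evaluate a few fixed "sentinel" solutions and flag a change when their fitness values shift. This is a hypothetical illustration of the general idea, not a specific algorithm from the survey.

```python
import numpy as np

class ChangeDetector:
    """Detects environment changes by re-evaluating sentinel solutions."""
    def __init__(self, f, sentinels, tol=1e-9):
        self.f = f
        self.sentinels = [np.asarray(s) for s in sentinels]
        self.tol = tol
        self.baseline = [f(s) for s in self.sentinels]

    def environment_changed(self):
        current = [self.f(s) for s in self.sentinels]
        changed = any(abs(c - b) > self.tol
                      for c, b in zip(current, self.baseline))
        if changed:
            self.baseline = current  # re-anchor after a detected change
        return changed

t = 0.0  # time-dependent optimum location (illustrative)
def moving_sphere(x):
    return float(np.sum((np.asarray(x) - t) ** 2))

det = ChangeDetector(moving_sphere, sentinels=[np.zeros(3), np.ones(3)])
print(det.environment_changed())  # False: environment unchanged
t = 0.5                           # the optimum moves
print(det.environment_changed())  # True: sentinels report new fitness
```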

    Two-stage methods for multimodal optimization

    For many practical optimization problems it seems advisable to seek not only a single optimal solution, but a diverse set of good solutions. The rationale behind this view is that a decision maker may want to consider additional criteria that are not included in the optimization problem itself, for example because the expert knowledge constituting those criteria has not yet been formalized, or because their evaluation is more or less subjective. The area containing single-objective problems with the need to identify a set of solutions is currently called multimodal optimization. In this work, we apply two-stage optimization algorithms, which consist of alternating global and local searches, to these problems. These algorithms are attractive because of their simplicity and their demonstrated performance on multimodal problems. The main focus is on improving the global stages, as local search is already a thoroughly investigated topic. This is done by considering previously sampled points and already found optima in the global sampling, thus obtaining a super-uniform distribution. The approach is based on maximizing the minimal distance in a point set, while boundary effects caused by the box-constrained search space are avoided by correction methods. Experiments confirm the superiority of this algorithm over uniform random sampling and other methods in various settings of multimodal optimization.
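
    A minimal sketch of the maximin-distance idea described above: among a batch of uniform candidates, pick the one farthest from all previously evaluated points, so new samples avoid already-explored regions. The boundary-correction methods discussed in the thesis are omitted here for brevity; all names and sizes are illustrative.

```python
import numpy as np

def maximin_next_point(archive, n_candidates=500, lo=0.0, hi=1.0,
                       dim=2, rng=None):
    """Greedily pick the candidate maximizing distance to the archive."""
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(lo, hi, size=(n_candidates, dim))
    archive = np.asarray(archive)
    # Distance from each candidate to its nearest archived point.
    dists = np.linalg.norm(candidates[:, None, :] - archive[None, :, :],
                           axis=-1)
    nearest = dists.min(axis=1)
    return candidates[np.argmax(nearest)]  # farthest-from-archive candidate

rng = np.random.default_rng(2)
points = [rng.uniform(0, 1, size=2)]
for _ in range(9):
    points.append(maximin_next_point(points, rng=rng))
print(np.round(points, 3))
```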

    Improvements on the bees algorithm for continuous optimisation problems

    This work focuses on improvements to the Bees Algorithm intended to enhance the algorithm's performance, especially in terms of convergence rate. For the first enhancement, a pseudo-gradient Bees Algorithm (PG-BA) compares the fitness as well as the position of previous and current bees, so that the best bees in each patch are appropriately guided towards a better search direction after each consecutive cycle. This method eliminates the need to differentiate the objective function, unlike the typical gradient search method. The improved algorithm is subjected to several numerical benchmark test functions as well as the training of a neural network. The results from the experiments are then compared to the standard variant of the Bees Algorithm and other swarm intelligence procedures. The data analysis generally confirmed that the PG-BA is effective at speeding up convergence to the optimum. Next, an approach to avoid the formation of overlapping patches is proposed. The Patch Overlap Avoidance Bees Algorithm (POA-BA) is designed to avoid redundancy in the search area, especially if a site is deemed unprofitable. This method is similar to Tabu Search (TS), in that the POA-BA forbids the exact exploitation of previously visited solutions along with their corresponding neighbourhoods. Patches are not allowed to intersect, either in the current cycle or in the next generation. This reduces the number of patches that materialise on the same peak (maximisation) or in the same valley (minimisation), which ensures a thorough search of the problem landscape as bees are distributed around the scaled-down area. The same benchmark problems as for the PG-BA were applied to this modified strategy with reasonable success. Finally, the Bees Algorithm is revised to have the capability of locating all of the global optima as well as the substantial local peaks in a single run. These multiple solutions of comparable fitness offer alternatives for decision makers to choose from. In this so-called Extended Bees Algorithm (EBA), patches are formed only if the bees are the fittest from different peaks, as determined by a hill-valley mechanism. This permits the maintenance of diversified solutions throughout the search process, in addition to minimising the chances of getting trapped. This version proves beneficial when tested on numerous multimodal optimisation problems.
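
    A minimal sketch of the hill-valley test mentioned above (in the spirit of Ursem's hill-valley heuristic, written here for minimisation): two solutions lie in different basins if some point on the segment between them is worse than both. This is an illustrative reconstruction, not the EBA implementation; the test function is an assumption.

```python
import numpy as np

def same_basin(f, a, b, n_checks=5):
    """Hill-valley test: True if no interior point is worse than both ends."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    worst = max(f(a), f(b))
    for t in np.linspace(0, 1, n_checks + 2)[1:-1]:  # interior points only
        if f(a + t * (b - a)) > worst:
            return False  # a "hill" separates the two valleys
    return True

def two_valleys(x):
    x = np.asarray(x)
    return float(np.sum((x**2 - 1.0) ** 2))  # minima near x = -1 and x = +1

print(same_basin(two_valleys, [-1.0], [-0.8]))  # True: same valley
print(same_basin(two_valleys, [-1.0], [1.0]))   # False: hill at x = 0
```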

    An Investigation of Factors Influencing Algorithm Selection for High Dimensional Continuous Optimisation Problems

    The problem of algorithm selection is of great importance to the optimisation community, with a number of publications present in the body of knowledge. This importance stems from the consequences of the No-Free-Lunch Theorem, which states that there cannot exist a single algorithm capable of solving all possible problems. However, despite this importance, the algorithm selection problem has as yet failed to gain widespread attention. In particular, little to no work in this area has been carried out with a focus on large-scale optimisation, a field quickly gaining momentum in line with the advancement and influence of big data processing. As such, it is not yet clear which factors, if any, influence the selection of algorithms for very high-dimensional problems (over 1000 dimensions), and it is entirely possible that algorithms that do not work well in lower dimensions may in fact work well in much higher-dimensional spaces, and vice versa. This work therefore aims to begin addressing this knowledge gap by investigating some of these influencing factors for some common metaheuristic variants. To this end, typical parameters native to several metaheuristic algorithms are first tuned using the state-of-the-art automatic parameter tuner SMAC. Tuning produces separate parameter configurations of each metaheuristic for each of a set of continuous benchmark functions; specifically, for every algorithm-function pairing, configurations are found for each dimensionality of the function from a geometrically increasing scale (from 2 to 1500 dimensions). This tuning is therefore highly computationally expensive, necessitating the use of SMAC. Using these sets of parameter configurations, a vast amount of performance data relating to the large-scale optimisation of our benchmark suite by each metaheuristic was subsequently generated. From the generated data and its analysis, several behaviours exhibited by the metaheuristics as applied to large-scale optimisation have been identified and discussed. Further, this thesis provides a concise review of the relevant literature for other researchers looking to progress in this area, in addition to the large volume of data produced relevant to the large-scale optimisation of our benchmark suite by the applied set of common metaheuristics. All work presented in this thesis was funded by EPSRC grant EP/J017515/1 through the DAASE project.

    An Algorithm for Evolving Protocol Constraints

    We present an investigation into the design of an evolutionary mechanism for multi-agent protocol constraint optimisation. Starting with a review of common population-based mechanisms, we discuss the properties of the mechanisms used by these search methods. We derive a novel algorithm for the optimisation of vectors of real numbers and empirically validate the efficacy of the design by comparing against well-known results from the literature. We discuss the application of an optimiser to a novel problem and remark upon the relevance of the no-free-lunch theorem. We show that the relative performance of the optimiser is strong and publish details of a new best result for the Keane optimisation problem. We apply the final algorithm to the multi-agent protocol optimisation problem and show that the design process was successful.
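
    The Keane optimisation problem mentioned above is commonly identified with Keane's "bump" benchmark; under that assumption, a sketch of the function (maximised subject to its two standard constraints) is given below for reference. The sample point is arbitrary and illustrative.

```python
import numpy as np

def keane_bump(x):
    """Keane's bump function (to be maximised subject to constraints)."""
    x = np.asarray(x, float)
    i = np.arange(1, len(x) + 1)
    num = np.abs(np.sum(np.cos(x) ** 4) - 2.0 * np.prod(np.cos(x) ** 2))
    return float(num / np.sqrt(np.sum(i * x ** 2)))

def feasible(x):
    """Standard constraints: prod(x) > 0.75, sum(x) < 15n/2, 0 < x_i < 10."""
    x = np.asarray(x, float)
    return bool(np.prod(x) > 0.75 and np.sum(x) < 15 * len(x) / 2
                and np.all((0 < x) & (x < 10)))

x = np.array([1.6, 0.5])  # an arbitrary feasible 2-D point, for illustration
print(feasible(x), keane_bump(x))
```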