
    Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations

    During recent decades, many niching methods have been proposed and empirically verified on available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, the normalized Mahalanobis distance, and Ursem's hill-valley function, to develop a new tool for multimodal optimization that does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and from previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called the covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes that a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of robust mean peak ratio. Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
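
    To make the repelling mechanism concrete, the sketch below illustrates two ingredients named in this abstract: a taboo check based on a normalized Mahalanobis distance and Ursem's hill-valley test for deciding whether two points share a basin. It is only an illustration under assumed details; the function names, the taboo radius r, the per-point repelling strength, and the number of interior test points are not taken from the paper.

```python
# Illustrative sketch only (not the authors' implementation) of two building
# blocks mentioned in the abstract: a normalized-Mahalanobis taboo check and
# Ursem's hill-valley test. Parameter choices here are assumptions.
import numpy as np

def normalized_mahalanobis(x, center, cov, sigma):
    """Mahalanobis distance of x from a taboo/subpopulation center,
    normalized by the step size sigma."""
    diff = x - center
    return np.sqrt(diff @ np.linalg.solve(cov, diff)) / sigma

def violates_taboo(x, taboo_points, r=1.0):
    """Reject an offspring lying inside the repelling region of any taboo
    point (center of a fitter subpopulation or an archived basin)."""
    return any(
        normalized_mahalanobis(x, t["center"], t["cov"], t["sigma"]) < r * t["strength"]
        for t in taboo_points
    )

def same_basin(f, x, y, n_interior=5):
    """Ursem's hill-valley test (minimization): x and y are taken to share a
    basin if no point on the segment between them is worse than both ends."""
    worst = max(f(x), f(y))
    for a in np.linspace(0.0, 1.0, n_interior + 2)[1:-1]:
        if f((1 - a) * x + a * y) > worst:
            return False  # a hill separates the two points
    return True
```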

    Static and Dynamic Multimodal Optimization by Improved Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations

    The covariance matrix self-adaptation evolution strategy with repelling subpopulations (RS-CMSA-ES) is one of the most successful multimodal optimization (MMO) methods currently available. However, some of its components may become inefficient in certain situations. This study introduces the second variant of this method, called RS-CMSA-ESII. It improves the adaptation schemes for the normalized taboo distances of the archived solutions and the covariance matrix of the subpopulation, the termination criteria for the subpopulations, and the way in which infeasible solutions are treated. It also improves the time complexity of RS-CMSA-ES by updating the initialization procedure of a subpopulation and developing a more accurate metric for determining critical taboo regions. The effects of these modifications are illustrated through controlled numerical simulations. RS-CMSA-ESII is then compared with the most successful and recent niching methods for MMO on a widely adopted test suite. The results reveal the superiority of RS-CMSA-ESII over these methods, including the winners of the competition on niching methods for MMO in previous years. In addition, this study extends RS-CMSA-ESII to dynamic MMO and compares it with a few recently proposed methods on the modified moving peak benchmark functions.

    Seeking multiple solutions: an updated survey on niching methods and their applications

    Multi-Modal Optimization (MMO), which aims to locate multiple optimal (or near-optimal) solutions in a single simulation run, has practical relevance to problem solving across many fields. Population-based meta-heuristics have been shown to be particularly effective in solving MMO problems if equipped with specifically designed diversity-preserving mechanisms, commonly known as niching methods. This paper provides an updated survey on niching methods. The paper first revisits the fundamental concepts of niching and its most representative schemes, then reviews the most recent developments of niching methods, including novel and hybrid methods, performance measures, and benchmarks for their assessment. Furthermore, the paper surveys previous attempts at leveraging the capabilities of niching to facilitate various optimization tasks (e.g., multi-objective and dynamic optimization) and machine learning tasks (e.g., clustering, feature selection, and learning ensembles). A list of successful applications of niching methods to real-world problems is presented to demonstrate the capabilities of niching methods in providing solutions that are difficult for other optimization methods to offer. The significant practical value of niching methods is clearly exemplified through these applications. Finally, the paper poses challenges and research questions on niching that are yet to be appropriately addressed. Providing answers to these questions is crucial before we can bring more of the fruitful benefits of niching to real-world problem solving.
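
    Among the performance measures such surveys cover, the peak ratio used in niching benchmark competitions is one of the most common: the fraction of known optima that a run recovers. The sketch below is a generic illustration; the accuracy threshold and matching radius are assumed values, not definitions taken from this survey.

```python
# Illustrative sketch of a peak-ratio style measure: the fraction of known
# optima for which the run produced a sufficiently close, sufficiently good
# solution. `epsilon` and `radius` are assumed parameters for the example.
import numpy as np

def peak_ratio(found, known_optima, f, f_opt, epsilon=1e-4, radius=0.01):
    """Return the fraction of known optima recovered by the solutions in
    `found` (rows are points), for a minimization problem with optimum f_opt."""
    found = np.asarray(found)
    recovered = 0
    for opt in np.asarray(known_optima):
        dist = np.linalg.norm(found - opt, axis=1)
        nearby = found[dist < radius]
        if any(abs(f(x) - f_opt) < epsilon for x in nearby):
            recovered += 1
    return recovered / len(known_optima)
```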

    Stochastic and deterministic algorithms for continuous black-box optimization

    Continuous optimization is never easy: an exact solution is always a luxury, and the theory behind it is not always analytical and elegant. Continuous optimization, in practice, is essentially about efficiency: how can one obtain a solution of the same quality using as few resources (e.g., CPU time or memory usage) as possible? In this thesis, the number of function evaluations is considered the most important resource to save. To achieve this goal, various efforts have been implemented and applied successfully. One research stream focuses on the so-called stochastic variation (mutation) operator, which conducts a (local) exploration of the search space. The efficiency of these operators has been investigated closely, showing that a good stochastic variation operator should generate good coverage of the local neighbourhood around the current solution. This thesis contributes to this issue by formulating a novel stochastic variation that yields good space coverage.
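
    One simple way to see what good coverage of the local neighbourhood can mean in practice is mirrored sampling, a well-known device in evolution strategies in which every random step is reused with its sign flipped. The sketch below is only an illustration of that coverage idea and is not the variation operator developed in the thesis.

```python
# Illustrative sketch: mirrored Gaussian sampling pairs each random step with
# its opposite, spreading offspring more evenly around the current solution.
# This is a generic example, not the operator proposed in the thesis.
import numpy as np

def mirrored_gaussian_offspring(parent, sigma, n_pairs, seed=None):
    """Generate 2 * n_pairs offspring around `parent`: each step z is used
    twice, as parent + sigma * z and parent - sigma * z."""
    rng = np.random.default_rng(seed)
    steps = rng.standard_normal((n_pairs, parent.size))
    return np.concatenate([parent + sigma * steps, parent - sigma * steps])

# Example: six offspring around a two-dimensional parent at the origin
offspring = mirrored_gaussian_offspring(np.zeros(2), sigma=0.3, n_pairs=3)
```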

    Fitness Proportionate Niching: Harnessing The Power Of Evolutionary Algorithms For Evolving Cooperative Populations And Dynamic Clustering

    Evolutionary algorithms work on the principle that only the fittest survive. This makes evolving a cooperative and diverse population in a competing environment via evolutionary algorithms a challenging task. Analogies to species interactions in natural ecological systems have been used to develop methods for maintaining diversity in a population. One such area that mimics species interactions in natural systems is niching. Niching methods extend the application of EAs to problems that seek to embrace multiple solutions. The conventional fitness sharing technique has limitations when the multimodal fitness landscape has unequal peaks: higher peaks are strong population attractors, and the technique suffers from the curse of population size when attempting to discover all optimum points. The need for a large population makes the technique computationally expensive, especially when there is a big jump in the fitness values of the peaks. This work introduces a novel bio-inspired niching technique, termed Fitness Proportionate Niching (FPN), based on the analogy of a finite-resource model in which individuals share the resource of a niche in proportion to their actual fitness. FPN makes the search algorithm unbiased to the variation in fitness values of the peaks and hence mitigates the drawbacks of conventional fitness sharing. FPN extends the global search ability of Genetic Algorithms (GAs) for evolving hierarchical cooperation in genetics-based machine learning and dynamic clustering. To this end, this work introduces FPN-based resource sharing, which leads to the formation of a viable default hierarchy in classifiers for the first time. It results in the co-evolution of default and exception rules, which leads to a robust and concise model description. The work also explores the feasibility and success of FPN for dynamic clustering. Unlike most other clustering techniques, FPN-based clustering does not require any a priori information on the distribution of the data.
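
    As a point of reference for the limitation discussed above, conventional fitness sharing can be sketched in a few lines: each individual's fitness is degraded by the number of neighbours crowding its niche. The sharing radius and the triangular sharing function below are standard textbook choices rather than parameters of this work; FPN, by contrast, lets the members of a niche split a finite resource in proportion to their raw fitness.

```python
# Illustrative sketch of classic fitness sharing (the baseline FPN improves
# on). `sigma_share` and `alpha` are assumed textbook-style parameters.
import numpy as np

def shared_fitness(pop, fitness, sigma_share=0.1, alpha=1.0):
    """Divide each individual's fitness by its niche count, i.e. the summed
    sharing-function values over the whole population (maximization)."""
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)  # includes self (d = 0 gives sh = 1)
    return fitness / niche_count
```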

    Two-stage methods for multimodal optimization

    For many practical optimization problems it seems advisable to seek not only a single optimal solution, but a diverse set of good solutions. The rationale behind this view is that a decision maker may want to consider additional criteria that are not included in the optimization problem itself. Reasons for not including them are, for example, that the expert knowledge constituting the additional criteria has not been formalized, or that the evaluation of the additional criteria is more or less subjective. The research area concerned with single-objective problems that require a set of solutions is currently called multimodal optimization. In this work, we apply two-stage optimization algorithms, which consist of alternating global and local searches, to these problems. These algorithms are attractive because of their simplicity and their demonstrated performance on multimodal problems. The main focus is on improving the global stages, as local search is already a thoroughly investigated topic. This is done by considering previously sampled points and found optima in the global sampling, thus obtaining a super-uniform distribution. The approach is based on maximizing the minimal distance in a point set, while boundary effects of the box-constrained search space are avoided by correction methods. Experiments confirm the superiority of this algorithm over uniform random sampling and other methods in various settings of multimodal optimization.
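
    The global stage described above, choosing new sample points so that the minimal distance within the point set is maximized while previously evaluated points and known optima are taken into account, can be approximated with a simple candidate search. The sketch below assumes a [0, 1]^d box and a plain uniform candidate pool, and it omits the boundary-correction methods mentioned in the abstract.

```python
# Illustrative maximin sampling sketch: among many uniform candidates, return
# the one whose nearest neighbour in the existing point set (previous samples
# and known optima) is farthest away. Candidate count and bounds are assumed.
import numpy as np

def maximin_next_point(existing, dim, n_candidates=1000, seed=None):
    """Pick the candidate maximizing the minimal distance to `existing`."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(0.0, 1.0, size=(n_candidates, dim))
    existing = np.asarray(existing, dtype=float)
    if existing.size == 0:
        return candidates[0]
    dist = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=-1)
    return candidates[np.argmax(dist.min(axis=1))]
```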

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning, and their widespread applications.