
    Static and Dynamic Multimodal Optimization by Improved Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations

    The covariance matrix self-adaptation evolution strategy with repelling subpopulations (RS-CMSA-ES) is one of the most successful multimodal optimization (MMO) methods currently available. However, some of its components may become inefficient in certain situations. This study introduces the second variant of this method, called RS-CMSA-ESII. It improves the adaptation schemes for the normalized taboo distances of the archived solutions and the covariance matrix of the subpopulation, the termination criteria for the subpopulations, and the way in which infeasible solutions are treated. It also improves the time complexity of RS-CMSA-ES by updating the initialization procedure of a subpopulation and developing a more accurate metric for determining critical taboo regions. The effects of these modifications are illustrated through controlled numerical simulations. RS-CMSA-ESII is then compared with the most successful and recent niching methods for MMO on a widely adopted test suite. The results reveal the superiority of RS-CMSA-ESII over these methods, including the winners of the competition on niching methods for MMO in previous years. In addition, this study extends RS-CMSA-ESII to dynamic MMO and compares it with several recently proposed methods on the modified moving peak benchmark functions.
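    As an illustration of the repelling-subpopulation idea described above, the sketch below rejects candidate solutions that fall inside the taboo region of an already archived optimum. It is a simplified, hypothetical example with made-up names: the actual RS-CMSA-ESII adapts normalized taboo distances and measures them in the metric induced by the subpopulation covariance matrix, which this Euclidean version omits.

```python
import numpy as np

def is_taboo(candidate, archive, taboo_radii):
    """True if `candidate` falls inside the taboo region of any archived solution."""
    return any(np.linalg.norm(candidate - c) < r for c, r in zip(archive, taboo_radii))

def sample_repelled(sample_fn, archive, taboo_radii, max_tries=100):
    """Draw candidates from `sample_fn` (e.g. a subpopulation's Gaussian search
    distribution) and reject draws that land in a taboo region around optima
    that were already found, so the subpopulation is repelled toward new basins."""
    for _ in range(max_tries):
        x = sample_fn()
        if not is_taboo(x, archive, taboo_radii):
            return x
    return x  # fall back to the last draw if every attempt was taboo

# Illustrative usage: two archived optima with unit taboo radii,
# and a Gaussian subpopulation centred at the origin.
archive = [np.array([2.0, 2.0]), np.array([-3.0, 1.0])]
taboo_radii = [1.0, 1.0]
rng = np.random.default_rng(0)
x = sample_repelled(lambda: rng.normal(0.0, 2.0, size=2), archive, taboo_radii)
```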

    Evolutionary Algorithms with Mixed Strategy


    OPT-GAN: Black-Box Global Optimization via Generative Adversarial Nets

    Black-box optimization (BBO) algorithms are concerned with finding the best solutions for problems with missing analytical details. Most classical methods for such problems are based on strong and fixed a priori assumptions, such as Gaussianity. However, complex real-world problems, especially when the global optimum is desired, can deviate substantially from these a priori assumptions because of their diversity, creating unexpected obstacles for such methods. In this study, we propose a generative adversarial net-based broad-spectrum global optimizer (OPT-GAN) that gradually estimates the distribution of the optimum, with strategies to balance the exploration-exploitation trade-off. It has the potential to adapt better to the regularity and structure of diversified landscapes than methods with a fixed prior, e.g., a Gaussian assumption or separability. Experiments conducted on BBO benchmarking problems and several other benchmarks with diversified landscapes show that OPT-GAN outperforms other traditional and neural net-based BBO algorithms. Comment: M. Lu and S. Ning contributed equally. Submitted to IEEE Transactions on Neural Networks and Learning Systems.
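    The core idea, an adversarial pair in which the generator gradually learns the distribution of promising solutions while random sampling preserves exploration, can be sketched as follows. This is a hypothetical toy reconstruction, not the authors' implementation; the network sizes, hyperparameters, and the sphere objective are placeholders.

```python
import torch
import torch.nn as nn

def objective(x):                      # black-box function to minimize (placeholder)
    return torch.sum(x ** 2, dim=1)    # sphere function stands in for the real problem

dim, latent, pop_size, n_elite = 5, 8, 256, 32
G = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

# The elite set plays the role of the "real" data, so the generator gradually
# learns the distribution of promising regions (exploitation), while a stream of
# uniform random points keeps exploring the rest of the search space.
pop = torch.rand(pop_size, dim) * 10 - 5
for it in range(200):
    elites = pop[objective(pop).argsort()[:n_elite]]
    fake = G(torch.randn(n_elite, latent))

    # Discriminator step: separate current elites from generated candidates.
    d_loss = (bce(D(elites), torch.ones(n_elite, 1))
              + bce(D(fake.detach()), torch.zeros(n_elite, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make generated candidates indistinguishable from elites.
    g_loss = bce(D(fake), torch.ones(n_elite, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Evaluate new candidates on the black box: mostly generated, some random.
    new = torch.cat([G(torch.randn(64, latent)).detach(),
                     torch.rand(16, dim) * 10 - 5])
    pop = torch.cat([pop, new])
    pop = pop[objective(pop).argsort()[:pop_size]]       # keep the best

print("best value found:", objective(pop).min().item())
```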

    A Survey of Evolutionary Continuous Dynamic Optimization Over Two Decades: Part B

    Many real-world optimization problems are dynamic. The field of dynamic optimization deals with such problems, where the search space changes over time. In this two-part paper, we present a comprehensive survey of the research in evolutionary dynamic optimization for single-objective unconstrained continuous problems over the last two decades. In Part A of this survey, we propose a new taxonomy for the components of dynamic optimization algorithms, namely, convergence detection, change detection, explicit archiving, diversity control, and population division and management. In comparison to existing taxonomies, the proposed taxonomy covers some additional important components, such as convergence detection and computational resource allocation. Moreover, we significantly expand and improve the classifications of diversity control and multi-population methods, which are under-represented in the existing taxonomies. We then provide detailed technical descriptions and analysis of the different components according to the suggested taxonomy. Part B of this survey provides an in-depth analysis of the most commonly used benchmark problems, performance analysis methods, static optimization algorithms used as the optimization components in dynamic optimization algorithms, and dynamic real-world applications. Finally, several opportunities for future work are pointed out.
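    Of the components named in this taxonomy, change detection is perhaps the simplest to illustrate: a common approach is to re-evaluate a small set of sentinel solutions each generation and flag a change when their objective values differ from the cached ones. The sketch below is a generic, illustrative version with made-up names, not a specific method from the survey.

```python
import numpy as np

def detect_change(objective, sentinels, cached_values, tol=1e-9):
    """Re-evaluate a fixed set of sentinel solutions and compare the results with
    their cached objective values; any discrepancy beyond `tol` signals that the
    landscape has changed and the optimizer should react (e.g. boost diversity,
    refresh the archive, or reinitialize part of the population)."""
    current = np.array([objective(s) for s in sentinels])
    changed = bool(np.any(np.abs(current - cached_values) > tol))
    return changed, current  # return fresh values so the cache can be updated

# Illustrative usage with a landscape whose optimum shifts between calls.
shift = 0.0
objective = lambda x: float(np.sum((x - shift) ** 2))
sentinels = [np.zeros(2), np.ones(2)]
cache = np.array([objective(s) for s in sentinels])
shift = 0.5                                   # the environment changes
changed, cache = detect_change(objective, sentinels, cache)
print("change detected:", changed)
```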

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science

    Metaheuristic-Based Algorithms for Optimizing Fractional-Order Controllers—A Recent, Systematic, and Comprehensive Review

    Metaheuristic optimization algorithms (MHA) play a significant role in obtaining the best (optimal) values of a system's parameters to improve its performance. This role is especially apparent when dealing with systems where classical analytical methods fail. For fractional-order (FO) systems, no easy procedure has yet emerged for determining their optimal parameters through traditional methods. In this paper, a recent, systematic, and comprehensive review is presented to highlight the role of MHA in obtaining the best set of gains and orders for FO controllers. The systematic review starts by exploring the most relevant publications related to MHA and FO controllers. The study focuses on the most popular controllers, such as the FO-PI, FO-PID, FO Type-1 fuzzy-PID, and FO Type-2 fuzzy-PID. The time frame is restricted to articles published in the last decade (2014-2023) in the most reputable databases, such as Scopus, Web of Science, Science Direct, and Google Scholar. Across these databases, 850 articles were identified. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was applied to screen and filter this initial set down to a final list of 82 articles, which was then studied thoroughly and comprehensively. The results show that Particle Swarm Optimization (PSO) is the optimizer most widely used by researchers for identifying the optimal parameters of FO controllers, appearing in about 25% of the published papers. In addition, the papers that used PSO as an optimizer have gained high citation counts, although Chaotic Atom Search Optimization (ChASO) attains the highest citation count despite being used only once. Furthermore, the Integral of Time-weighted Absolute Error (ITAE) is the most commonly nominated cost function. Based on our comprehensive literature review, this appears to be the first review paper that systematically and comprehensively addresses the optimization of the parameters of fractional-order PI, PID, Type-1, and Type-2 fuzzy controllers with the use of MHAs. Therefore, the work in this paper can serve as a guide for researchers interested in working in this field.
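    To illustrate the workflow these studies share, the sketch below uses a basic particle swarm to tune controller gains against an ITAE cost computed from a simulated step response. It is deliberately simplified and hypothetical: the plant is a toy first-order system and the controller a classical PID, whereas the reviewed papers tune fractional-order controllers, whose decision vectors additionally include the fractional orders (lambda, mu) and whose closed loops require fractional-order solvers.

```python
import numpy as np

def itae_cost(gains, t_end=5.0, dt=0.01, setpoint=1.0):
    """Simulate a unit step response of a toy first-order plant (dy/dt = -y + u)
    under PID control and return the ITAE cost: integral of t * |error| dt."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt                      # explicit Euler step of the plant
        cost += t * abs(err) * dt
    return cost

# Basic (global-best) PSO over the controller gains.  Bounds are illustrative;
# kd is kept small so the explicit-Euler toy simulation stays numerically stable.
rng = np.random.default_rng(0)
n_particles, dim, iters = 20, 3, 50
lo_b, hi_b = np.array([0.0, 0.0, 0.0]), np.array([10.0, 10.0, 0.9])
pos = rng.uniform(lo_b, hi_b, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_f = pos.copy(), np.array([itae_cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo_b, hi_b)
    f = np.array([itae_cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("tuned PID gains (kp, ki, kd):", gbest, "ITAE:", pbest_f.min())
```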

    Multi-Agent System Concepts Theory and Application Phases
