
    Enhancing Cooperative Coevolution for Large Scale Optimization by Adaptively Constructing Surrogate Models

    It has been shown that cooperative coevolution (CC) can effectively deal with large-scale optimization problems (LSOPs) through a divide-and-conquer strategy. However, its performance is severely restricted by the current context-vector-based sub-solution evaluation method, since this method must access the original high-dimensional simulation model when evaluating each sub-solution and thus consumes substantial computational resources. To alleviate this issue, this study proposes an adaptive surrogate-model-assisted CC framework. This framework adaptively constructs surrogate models for different sub-problems by fully considering their characteristics. For the one-dimensional sub-problems obtained through decomposition, sufficiently accurate surrogate models can be built and used to find the optimal solutions of the corresponding sub-problems directly. For the nonseparable sub-problems, the surrogate models are employed to evaluate the corresponding sub-solutions, and the original simulation model is only adopted to re-evaluate some good sub-solutions selected by the surrogate models. By these means, the computational cost can be greatly reduced without significantly sacrificing evaluation quality. Empirical studies on the IEEE CEC 2010 benchmark functions show that a concrete algorithm based on this framework is able to find much better solutions than conventional CC algorithms and a non-CC algorithm, even with far fewer computational resources.
    Comment: arXiv admin note: text overlap with arXiv:1802.0974
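
    The evaluation scheme described above can be made concrete with a minimal Python sketch. This is an illustration only, not the authors' implementation: the surrogate choices (a quadratic fit for one-dimensional sub-problems, an inverse-distance-weighted predictor for nonseparable ones) and all function names such as `true_objective` and `screen_with_surrogate` are assumptions introduced here.

```python
# Illustrative sketch of adaptive-surrogate-assisted cooperative coevolution.
# Not the authors' implementation; surrogate choices and names are assumptions.
import numpy as np

def true_objective(x):
    # Stand-in for the expensive high-dimensional simulation model.
    return np.sum(x ** 2) + np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2)

def evaluate_subsolution(sub_x, group, context):
    # Context-vector evaluation: plug the sub-solution into the full vector.
    full = context.copy()
    full[group] = sub_x
    return true_objective(full)

def optimize_1d_group(group, context, bounds=(-5.0, 5.0), samples=20):
    # For a one-dimensional sub-problem, a quadratic surrogate fitted to a few
    # expensive evaluations can locate the sub-problem optimum directly.
    xs = np.linspace(*bounds, samples)
    ys = np.array([evaluate_subsolution(np.array([x]), group, context) for x in xs])
    a, b, c = np.polyfit(xs, ys, 2)
    x_star = np.clip(-b / (2 * a), *bounds) if a > 0 else xs[np.argmin(ys)]
    return np.array([x_star])

def screen_with_surrogate(candidates, archive_X, archive_y, top_k=3):
    # Cheap surrogate for nonseparable sub-problems: inverse-distance-weighted
    # prediction from an archive of already (expensively) evaluated sub-solutions.
    # Only the best-predicted candidates are re-evaluated with the real model.
    d = np.linalg.norm(candidates[:, None, :] - archive_X[None, :, :], axis=2) + 1e-12
    w = 1.0 / d
    preds = (w @ archive_y) / w.sum(axis=1)
    return np.argsort(preds)[:top_k]
```

    In the full framework the surrogate type would itself be chosen per sub-problem according to its characteristics; here a single cheap model stands in for that adaptive step.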

    On Selection of a Benchmark by Determining the Algorithms' Qualities

    ABSTRACT: The motivation for this article stems from an issue that developers of new nature-inspired algorithms commonly face today: how should a test benchmark be selected so that it highlights the quality of the developed algorithm as fairly as possible? To this end, the CEC Competitions on Real-Parameter Single-Objective Optimization benchmarks, issued several times over the last decade, serve as a testbed for evaluating the collection of nature-inspired algorithms selected in our study. The article addresses two research questions: (1) how the selected benchmark affects the ranking of a particular algorithm, and (2) whether it is possible to find a single best algorithm capable of outperforming all the others on all the selected benchmarks. Ten outstanding algorithms (including winners of particular competitions) from different periods in the last decade were collected and applied to benchmarks issued during the same time period. A comparative analysis showed a strong correlation between the rankings of the algorithms and the benchmarks used, although some deviations arose in ranking the best algorithms. Possible reasons for these deviations are exposed and commented on.
    This work was supported in part by the Slovenian Research Agency (Projects J2-1731 and L7-9421) under Grant P2-0041, in part by the PDE-GIR project of the European Union's Horizon 2020 Research and Innovation Programme under Marie Sklodowska-Curie Grant 778035, and in part by the Spanish Ministry of Science, Innovation and Universities (Computer Science National Program) of the Agencia Estatal de Investigacion and European EFRD funds (AEI/FEDER, UE) under Grant TIN2017-89275-R.
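
    The ranking-correlation analysis referred to above can be reproduced in outline as follows. This is a sketch with placeholder rankings; Kendall's tau is one plausible rank-correlation measure, not necessarily the one used in the article.

```python
# Sketch: correlating algorithm rankings obtained on two different benchmarks.
# The ranking data below are placeholders, not results from the article.
from scipy.stats import kendalltau

algorithms = ["A1", "A2", "A3", "A4", "A5"]
rank_on_benchmark_2014 = [1, 2, 3, 4, 5]   # hypothetical ranks
rank_on_benchmark_2017 = [2, 1, 3, 5, 4]   # hypothetical ranks

tau, p_value = kendalltau(rank_on_benchmark_2014, rank_on_benchmark_2017)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")
```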

    A prescription of methodological guidelines for comparing bio-inspired optimization algorithms

    Bio-inspired optimization (including Evolutionary Computation and Swarm Intelligence) is a growing research topic, with many competitive bio-inspired algorithms proposed every year. In such an active area, preparing a successful proposal of a new bio-inspired algorithm is not an easy task. Given the maturity of this research field, proposing a new optimization technique with innovative elements is no longer enough. Beyond novelty, the reported results should be shown to achieve a significant advance over previous outcomes from the state of the art. Unfortunately, not all new proposals address this requirement properly. Some fail to select appropriate benchmarks or reference algorithms to compare against. In other cases, the validation process is not defined in a principled way, or is not carried out at all. Consequently, the significance of the results presented in such studies cannot be guaranteed. In this work we review several recommendations in the literature and propose methodological guidelines for preparing a successful proposal that takes all these issues into account. We expect these guidelines to be useful not only for authors, but also for reviewers and editors in their assessment of new contributions to the field.
    This work was supported by grants from the Spanish Ministry of Science (TIN2016-8113-R, TIN2017-89517-P and TIN2017-83132-C2-2-R) and Universidad Politécnica de Madrid (PINV-18-XEOGHQ-19-4QTEBP). Eneko Osaba and Javier Del Ser would also like to thank the Basque Government for its funding support through the ELKARTEK and EMAITEK programs. Javier Del Ser also receives funding support from the Consolidated Research Group MATHMODE (IT1294-19), granted by the Department of Education of the Basque Government.
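
    One guideline that recurs in this literature is to back any claimed improvement with a non-parametric significance test over per-function results. The sketch below uses a Wilcoxon signed-rank test on placeholder error values; it illustrates the idea only and is not the exact protocol prescribed in the article.

```python
# Sketch: testing whether per-function differences between a new proposal and a
# reference algorithm are statistically significant. Error values are placeholders.
import numpy as np
from scipy.stats import wilcoxon

new_algo_errors = np.array([1.2e-3, 4.5e-1, 3.3e0, 2.1e-2, 9.7e-1, 5.0e-4])
reference_errors = np.array([2.3e-3, 5.1e-1, 2.9e0, 6.4e-2, 1.2e0, 7.7e-4])

stat, p = wilcoxon(new_algo_errors, reference_errors)
print(f"Wilcoxon statistic = {stat:.3f}, p-value = {p:.3f}")
```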

    Evolutionary framework with reinforcement learning-based mutation adaptation

    Although several multi-operator and multi-method approaches for solving optimization problems have been proposed, their performance is not consistent across a wide range of optimization problems. Moreover, ensuring the appropriate selection of algorithms and operators can be inefficient, since their designs are arrived at mainly through trial and error. This research proposes an improved optimization framework that combines the benefits of multiple algorithms, namely a multi-operator differential evolution algorithm and a covariance matrix adaptation evolution strategy (CMA-ES). In the former, reinforcement learning is used to automatically choose the best differential evolution operator. To judge the performance of the proposed framework, three benchmark sets of bound-constrained optimization problems (73 problems) with 10, 30 and 50 dimensions are solved. The proposed algorithm is further tested on 100-dimensional optimization problems taken from the CEC2014 and CEC2017 benchmark suites, as well as on a real-world application data set. Several experiments are designed to analyze the effects of different components of the proposed framework, with the best variant compared against a number of state-of-the-art algorithms. The experimental results show that the proposed algorithm is able to outperform all the others considered.
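
    The idea of letting reinforcement learning pick the mutation operator can be sketched as a simple bandit over two standard DE operators. This is an illustration of the concept on a toy sphere function, not the authors' exact adaptation mechanism or parameter settings.

```python
# Sketch: epsilon-greedy bandit choosing between two DE mutation operators.
# Toy objective and settings; not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(0)

def rand_1(pop, F=0.5):
    a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
    return a + F * (b - c)

def current_to_best_1(pop, i, best, F=0.5):
    a, b = pop[rng.choice(len(pop), 2, replace=False)]
    return pop[i] + F * (best - pop[i]) + F * (a - b)

def sphere(x):
    return float(np.sum(x ** 2))

def de_with_bandit(dim=10, pop_size=30, gens=200, eps=0.1, cr=0.9):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([sphere(x) for x in pop])
    q = np.zeros(2)   # estimated reward (improvement rate) per operator
    n = np.zeros(2)   # selection counts per operator
    for _ in range(gens):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            # Epsilon-greedy operator selection (the "RL" part of the sketch).
            op = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
            mutant = rand_1(pop) if op == 0 else current_to_best_1(pop, i, best)
            trial = np.where(rng.random(dim) < cr, mutant, pop[i])
            f_trial = sphere(trial)
            reward = 1.0 if f_trial < fit[i] else 0.0
            n[op] += 1
            q[op] += (reward - q[op]) / n[op]   # incremental mean update
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    return fit.min()

print("best fitness found:", de_with_bandit())
```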

    On the pathological behavior of adaptive differential evolution on hybrid objective functions

    Most state-of-the-art Differential Evolution (DE) algorithms are adaptive DEs with online parameter adaptation. We investigate the behavior of adaptive DE on a class of hybrid functions, where independent groups of variables are associated with different component objective functions. An experimental evaluation of three state-of-the-art adaptive DEs (JADE, SHADE, jDE) shows that hybrid functions are "adaptive-DE-hard": adaptive DEs have significant failure rates on these new functions. In-depth analysis of the adaptive behavior of the DEs reveals that their parameter adaptation mechanisms behave in a pathological manner on this class of problems, resulting in over-adaptation to one of the components of the hybrids and poor overall performance. Thus, this class of deceptive benchmarks poses a significant challenge for DE.
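
    The class of hybrid functions described here is easy to construct explicitly: the decision vector is partitioned into independent groups, each group is scored by a different component function, and the component values are summed. The component functions below are illustrative choices, not the specific hybrids used in the paper.

```python
# Sketch: building a hybrid objective from independent variable groups, each
# mapped to a different component function. Components are illustrative.
import numpy as np

def sphere(z):
    return np.sum(z ** 2)

def rastrigin(z):
    return np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z) + 10)

def rosenbrock(z):
    return np.sum(100 * (z[1:] - z[:-1] ** 2) ** 2 + (z[:-1] - 1) ** 2)

def make_hybrid(groups, components):
    # groups: list of index arrays partitioning the decision vector.
    # components: one objective function per group.
    def hybrid(x):
        return sum(comp(x[idx]) for idx, comp in zip(groups, components))
    return hybrid

dim = 30
perm = np.random.permutation(dim)                 # shuffle variable assignment
groups = [perm[:10], perm[10:20], perm[20:]]
f = make_hybrid(groups, [sphere, rastrigin, rosenbrock])
print(f(np.zeros(dim)))   # each component sees only its own variable group
```

    A single globally adapted parameter setting must then serve components with quite different characteristics, which is the situation in which the over-adaptation reported above arises.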

    MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning

    Recently, Meta-Black-Box Optimization with Reinforcement Learning (MetaBBO-RL) has showcased the power of leveraging RL at the meta-level to mitigate the manual fine-tuning of low-level black-box optimizers. However, the field is hindered by the lack of a unified benchmark. To fill this gap, we introduce MetaBox, the first benchmark platform expressly tailored for developing and evaluating MetaBBO-RL methods. MetaBox offers a flexible algorithmic template that allows users to effortlessly implement their unique designs within the platform. It also provides a broad spectrum of over 300 problem instances, collected from synthetic to realistic scenarios, and an extensive library of 19 baseline methods, including both traditional black-box optimizers and recent MetaBBO-RL methods. In addition, MetaBox introduces three standardized performance metrics, enabling a more thorough assessment of the methods. To illustrate the utility of MetaBox for facilitating rigorous evaluation and in-depth analysis, we carry out a wide-ranging benchmarking study of existing MetaBBO-RL methods. MetaBox is open-source and accessible at https://github.com/GMC-DRL/MetaBox.
    Comment: Accepted at NeurIPS 202
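
    As a rough illustration of the meta-level control loop that MetaBBO-RL methods share, the sketch below has a policy choose a low-level optimizer configuration each generation. Important: this is not MetaBox's API or algorithmic template (see the repository above for that); the policy is a hand-written stub standing in for a learned RL agent, and the low-level optimizer is a toy evolution strategy.

```python
# Generic MetaBBO-RL-style control loop: a meta-level policy configures a
# low-level optimizer each generation. NOT MetaBox's API; the policy here is a
# hand-written stub where a real method would use a learned RL agent.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))

def policy(state):
    # Hypothetical meta-level policy: maps optimizer-state features to a step
    # size. A MetaBBO-RL method would learn this mapping with RL.
    progress, stagnation = state
    return 0.5 * np.exp(-3 * progress) * (0.5 if stagnation > 5 else 1.0)

def run_episode(dim=10, lam=20, gens=100):
    mean, best, stagnation = rng.uniform(-5, 5, dim), np.inf, 0
    for g in range(gens):
        sigma = policy((g / gens, stagnation))        # meta-level action
        offspring = mean + sigma * rng.standard_normal((lam, dim))
        fits = np.array([sphere(x) for x in offspring])
        mean = offspring[np.argsort(fits)[: lam // 4]].mean(axis=0)
        stagnation = 0 if fits.min() < best else stagnation + 1
        best = min(best, fits.min())
    return best

print("best fitness found:", run_episode())
```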