16 research outputs found

    Particle swarm optimization with thresheld convergence

    No full text
    Many heuristic search techniques have concurrent processes of exploration and exploitation. In particle swarm optimization, an improved 'pbest' position can represent a new, more promising region of the search space (exploration) or a better solution within the current region (exploitation). The latter can interfere with the former, since the identification of a new, more promising region depends on finding a (random) solution in that region which is better than the current 'pbest'. Ideally, every sampled solution will have the same relative fitness with respect to its nearby local optimum – finding the best region to exploit then becomes the problem of finding the best random solution. However, a locally optimized solution from a poor region of the search space can be better than a random solution from a good region of the search space. Since exploitation can interfere with subsequent/concurrent exploration, it should be prevented during the early stages of the search process. In thresheld convergence, early exploitation is "held" back by a threshold function. Experiments show that the addition of thresheld convergence to particle swarm optimization can lead to large performance improvements in multi-modal search spaces.
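    The mechanism described above can be illustrated with a minimal sketch, assuming the gating rule that a candidate replaces a particle's 'pbest' only when it both improves on it and lies at least a threshold distance away from it. The decaying threshold schedule and the parameter values (alpha, gamma) follow the general form reported in the thresheld-convergence literature and are illustrative, not taken from this paper.

```python
# Minimal PSO-with-thresheld-convergence sketch (illustrative, not the authors' code).
# Assumption: pbest is replaced only when the candidate is better AND at least
# `threshold` away from the current pbest; the threshold decays from alpha * diagonal
# to zero over the run with exponent gamma.
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def pso_thresheld(dim=10, swarm=30, max_evals=30_000, alpha=0.2, gamma=3.0,
                  lo=-5.12, hi=5.12, w=0.729, c1=1.49445, c2=1.49445, seed=0):
    rng = np.random.default_rng(seed)
    diagonal = np.sqrt(dim) * (hi - lo)          # main diagonal of the search space
    x = rng.uniform(lo, hi, (swarm, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([rastrigin(p) for p in x])
    evals = swarm
    while evals < max_evals:
        threshold = alpha * diagonal * ((max_evals - evals) / max_evals) ** gamma
        gbest = pbest[np.argmin(pbest_f)]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        for i in range(swarm):
            f = rastrigin(x[i])
            evals += 1
            # "Hold" exploitation: reject improvements closer than the threshold.
            if f < pbest_f[i] and np.linalg.norm(x[i] - pbest[i]) >= threshold:
                pbest[i], pbest_f[i] = x[i].copy(), f
    return pbest[np.argmin(pbest_f)], pbest_f.min()

if __name__ == "__main__":
    best_x, best_f = pso_thresheld()
    print("best fitness:", best_f)
```

    As the threshold shrinks to zero towards the end of the run, the gate disappears and the swarm converges normally.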

    Invited paper: A Review of Thresheld Convergence

    Get PDF
    A multi-modal search space can be defined as having multiple attraction basins – each basin has a single local optimum which is reached from all points in that basin when greedy local search is used. Optimization in multi-modal search spaces can then be viewed as a two-phase process. The first phase is exploration, in which the most promising attraction basin is identified. The second phase is exploitation, in which the best solution (i.e., the local optimum) within the previously identified attraction basin is attained. The goal of thresheld convergence is to improve the performance of search techniques during the first phase of exploration. The effectiveness of thresheld convergence has been demonstrated through applications to existing metaheuristics such as particle swarm optimization and differential evolution, and through the development of novel metaheuristics such as minimum population search and leaders and followers.
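    A hedged reconstruction of the threshold function referred to above, in the form used across the related thresheld-convergence papers (the symbols and the exact schedule are drawn from that literature, not from this abstract):

    \tau(k) = \alpha \cdot d \cdot \left( \frac{FE_{\max} - k}{FE_{\max}} \right)^{\gamma}

    where d is the main diagonal of the search space, k the number of function evaluations used so far, FE_max the total evaluation budget, \alpha the initial threshold as a fraction of d, and \gamma the decay exponent. The threshold starts large, so only samples far enough apart to represent different attraction basins can replace current solutions, and it shrinks to zero, at which point ordinary exploitation resumes.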

    Simulated annealing with thresheld convergence

    No full text
    Stochastic search techniques for multi-modal search spaces require the ability to balance exploration with exploitation. Exploration is required to find the best region, and exploitation is required to find the best solution (i.e., the local optimum) within this region. Compared to hill climbing, which is purely exploitative, simulated annealing probabilistically allows "backward" steps which facilitate exploration. However, the balance between exploration and exploitation in simulated annealing is biased towards exploitation - improving moves are always accepted, so local (greedy) search steps can occur at even the earliest stages of the search process. The purpose of "thresheld convergence" is to have these early-stage local search steps "held" back by a threshold function. It is hypothesized that early local search steps can interfere with the effectiveness of a search technique's (concurrent) mechanisms for global search. Experiments show that the addition of thresheld convergence to simulated annealing can lead to significant performance improvements in multi-modal search spaces.
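    A minimal sketch of the acceptance rule described above, assuming that improving moves landing closer than the current threshold to the incumbent are rejected ("held" back) while worsening moves still use the standard Metropolis criterion. The threshold schedule and parameter values mirror the PSO sketch earlier in this list and are illustrative only.

```python
# Simulated annealing with a thresheld-convergence gate (illustrative sketch).
# Assumption: improving moves are accepted only if they are at least `threshold`
# away from the current solution; worsening moves use Metropolis acceptance.
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def sa_thresheld(dim=10, max_evals=30_000, alpha=0.2, gamma=3.0,
                 lo=-5.12, hi=5.12, t0=10.0, cooling=0.999, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    diagonal = np.sqrt(dim) * (hi - lo)
    x = rng.uniform(lo, hi, dim)
    fx = rastrigin(x)
    best_x, best_f = x.copy(), fx
    temp = t0
    for k in range(1, max_evals):
        threshold = alpha * diagonal * ((max_evals - k) / max_evals) ** gamma
        y = np.clip(x + rng.normal(0.0, step, dim), lo, hi)
        fy = rastrigin(y)
        if fy < fx:
            # Improving move: accept only if it is far enough to count as exploration.
            accept = np.linalg.norm(y - x) >= threshold
        else:
            # Worsening move: standard Metropolis acceptance.
            accept = rng.random() < np.exp(-(fy - fx) / temp)
        if accept:
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        temp *= cooling
    return best_x, best_f
```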

    Minimum Population Search, an Application to Molecular Docking

    Get PDF
    Computer modeling of protein-ligand interactions is one of the most important phases in a drug design process. Part of the process involves the optimization of highly multi-modal objective (scoring) functions. This research presents the Minimum Population Search heuristic as an alternative for solving these global unconstrained optimization problems. To determine the effectiveness of Minimum Population Search, a comparison with seven state-of-the-art search heuristics is performed. Being specifically designed for the optimization of large-scale multi-modal problems, Minimum Population Search achieves excellent results on all of the tested complexes, especially when the number of available function evaluations is strongly reduced. A first step is also made toward the design of hybrid algorithms based on the exploratory power of Minimum Population Search. Computational results show that hybridization leads to a further improvement in performance.

    Multi-objective optimization approach based on Minimum Population Search algorithm

    Get PDF
    Article URL on the journal's website: https://www.upo.es/revistas/index.php/gecontec/article/view/4049. Minimum Population Search is a recently developed metaheuristic for the optimization of mono-objective continuous problems, which has proven to be very effective at optimizing large-scale and multi-modal problems. One of its key characteristics is the ability to perform an efficient exploration of large-dimensional spaces. We assume that this feature may prove useful when optimizing multi-objective problems, so this paper presents a study of how it can be adapted to a multi-objective approach. We performed experiments and comparisons with five multi-objective selection processes and tested the effectiveness of Thresheld Convergence on this class of problems. Following this analysis, we suggest a multi-objective variant of the algorithm. The proposed algorithm is compared with the multi-objective evolutionary algorithms IBEA, NSGA-II and SPEA2 on several well-known test problems. Subsequently, we present two hybrid approaches with IBEA and NSGA-II; these hybrids further improve the achieved results.

    Online Selection of CMA-ES Variants

    Full text link
    In the field of evolutionary computation, one of the most challenging topics is algorithm selection. Knowing which heuristics to use for which optimization problem is key to obtaining high-quality solutions. We aim to extend this research topic by taking a first step towards a selection method for adaptive CMA-ES algorithms. We build upon the theoretical work done by van Rijn et al. [PPSN'18], in which the potential of switching between different CMA-ES variants was quantified in the context of a modular CMA-ES framework. We demonstrate in this work that their proposed approach is not very reliable, in that implementing the suggested adaptive configurations does not yield the predicted performance gains. We propose a revised approach, which results in a more robust fit between predicted and actual performance. The adaptive CMA-ES approach obtains performance gains on 18 out of 24 tested functions of the BBOB benchmark, with stable advantages of up to 23%. An analysis of module activation indicates which modules are most crucial for the different phases of optimizing each of the 24 benchmark problems. The module activation also suggests that additional gains are possible when including the (B)IPOP modules, which we have excluded from the present work.
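    As a rough illustration of switching between CMA-ES variants during a run (the core idea examined above), here is a sketch that uses the pycma package as a stand-in for the modular CMA-ES framework of the paper. The option names ('CMA_active', 'CMA_elitist') are pycma options as recalled and should be checked against the installed version; the warm start that carries over only the mean and step size (discarding the covariance matrix) is a simplification of this sketch, not the paper's protocol.

```python
# Switching between two CMA-ES configurations at a fixed budget split (sketch only).
import cma
import numpy as np

def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def switched_cmaes(dim=10, budget=20_000, split=0.5, seed=1):
    # Phase 1: an exploratory configuration (active covariance update).
    es = cma.CMAEvolutionStrategy(dim * [2.0], 1.0,
                                  {'maxfevals': int(split * budget),
                                   'CMA_active': True, 'verbose': -9, 'seed': seed})
    while not es.stop():
        X = es.ask()
        es.tell(X, [rastrigin(x) for x in X])
    # Phase 2: a more exploitative configuration, warm-started from phase 1's
    # mean and step size (the covariance matrix is not carried over here).
    es2 = cma.CMAEvolutionStrategy(es.mean, es.sigma,
                                   {'maxfevals': int((1 - split) * budget),
                                    'CMA_elitist': True, 'verbose': -9, 'seed': seed})
    while not es2.stop():
        X = es2.ask()
        es2.tell(X, [rastrigin(x) for x in X])
    return min(es.result.fbest, es2.result.fbest)
```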

    Metapopulation Differential Co-Evolution of Trading Strategies in a Model Financial Market

    Get PDF

    Sequential vs. Integrated Algorithm Selection and Configuration: A Case Study for the Modular CMA-ES

    Get PDF
    When faced with a specific optimization problem, choosing which algorithm to use is always a tough task. Not only is there a vast variety of algorithms to select from, but these algorithms are often controlled by many hyperparameters, which need to be tuned in order to achieve the best possible performance. Usually, this problem is separated into two parts: algorithm selection and algorithm configuration. With the significant advances made in Machine Learning, however, these problems can be integrated into a combined algorithm selection and hyperparameter optimization task, commonly known as the CASH problem. In this work, we compare sequential and integrated algorithm selection and configuration approaches for the case of selecting and tuning the best out of 4608 variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) tested on the Black Box Optimization Benchmark (BBOB) suite. We first show that the ranking of the modular CMA-ES variants depends to a large extent on the quality of the hyperparameters. This implies that even a sequential approach based on complete enumeration of the algorithm space will likely result in sub-optimal solutions. In fact, we show that the integrated approach manages to provide competitive results at a much smaller computational cost. We also compare two different mixed-integer algorithm configuration techniques, called irace and Mixed-Integer Parallel Efficient Global Optimization (MIP-EGO). While we show that the two methods differ significantly in their treatment of the exploration-exploitation balance, their overall performances are very similar.
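    To make the CASH formulation mentioned above concrete, here is a toy sketch in which a single random search optimizes jointly over which algorithm to run and how to configure it. The two "algorithms", their hyperparameter ranges, and the synthetic loss function are placeholders, not the CMA-ES variants or the irace/MIP-EGO tuners studied in the paper.

```python
# Toy CASH example: one search over the joint (algorithm, configuration) space.
import random

SEARCH_SPACE = {
    "hill_climber": {"step_size": (0.01, 1.0)},
    "annealer":     {"step_size": (0.01, 1.0), "start_temp": (0.1, 10.0)},
}

def sample_configuration(rng):
    """Sample a point from the joint algorithm + hyperparameter space."""
    algo = rng.choice(list(SEARCH_SPACE))
    params = {name: rng.uniform(*bounds) for name, bounds in SEARCH_SPACE[algo].items()}
    return algo, params

def evaluate(algo, params):
    """Stand-in for running the chosen algorithm on a benchmark and returning its loss."""
    penalty = 1.0 if algo == "hill_climber" else 0.5   # pretend the annealer is better
    return penalty + abs(params["step_size"] - 0.3)    # pretend 0.3 is the sweet spot

def cash_random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        algo, params = sample_configuration(rng)
        loss = evaluate(algo, params)
        if best is None or loss < best[0]:
            best = (loss, algo, params)
    return best

if __name__ == "__main__":
    print(cash_random_search())
```

    An integrated approach in the paper's sense replaces the random sampler with a mixed-integer configurator operating on this same joint space, rather than enumerating algorithms first and tuning each one separately.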

    Explorative data analysis of time series based algorithm features of CMA-ES variants

    Get PDF