
    Differential evolution with thresheld convergence

    During the search process of differential evolution (DE), each new solution may represent a new, more promising region of the search space (exploration) or a better solution within the current region (exploitation). Such concurrent exploitation can interfere with exploration, since the identification of a new, more promising region depends on finding a (random) solution in that region which is better than its target solution. Ideally, every sampled solution will have the same relative fitness with respect to its nearby local optimum – finding the best region to exploit then becomes the problem of finding the best random solution. However, differential evolution is characterized by an initial period of exploration followed by rapid convergence. Once the population starts converging, the difference vectors become shorter, more exploitation is performed, and an accelerating convergence occurs. This rapid convergence can occur well before the algorithm’s budget of function evaluations is exhausted; that is, the algorithm can converge prematurely. In thresheld convergence, early exploitation is “held” back by a threshold function, allowing a longer exploration phase. This paper presents a new adaptive thresheld convergence mechanism which helps DE achieve large performance improvements in multi-modal search spaces.
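    The mechanism can be made concrete with a small sketch. The following Python is an illustrative reading of thresheld convergence in DE, not the paper's own implementation: an improving trial vector is only allowed to replace its target when it lies at least a threshold distance away, and that threshold decays over the run. The schedule and its parameters (alpha, gamma) and all function names are assumptions chosen for illustration.

```python
import numpy as np

def threshold(evals_used, evals_max, diagonal, alpha=0.1, gamma=2.0):
    # Decaying minimum-step threshold: starts near alpha * diagonal and
    # shrinks to zero as the evaluation budget is consumed.
    return alpha * diagonal * ((evals_max - evals_used) / evals_max) ** gamma

def de_thresheld(f, lower, upper, pop_size=50, F=0.5, CR=0.9, max_evals=10000, seed=None):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    diagonal = np.linalg.norm(upper - lower)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    evals = pop_size
    while evals < max_evals:
        tau = threshold(evals, max_evals, diagonal)
        for i in range(pop_size):
            # Standard DE/rand/1/bin trial-vector generation.
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.clip(np.where(cross, mutant, pop[i]), lower, upper)
            f_trial = f(trial)
            evals += 1
            # Thresheld convergence: an improving trial that lands closer than
            # tau to its target is treated as premature exploitation and rejected.
            if f_trial < fit[i] and np.linalg.norm(trial - pop[i]) >= tau:
                pop[i], fit[i] = trial, f_trial
            if evals >= max_evals:
                break
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```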

    Invited paper: A Review of Thresheld Convergence

    A multi-modal search space can be defined as having multiple attraction basins – each basin has a single local optimum which is reached from all points in that basin when greedy local search is used. Optimization in multi-modal search spaces can then be viewed as a two-phase process. The first phase is exploration, in which the most promising attraction basin is identified. The second phase is exploitation, in which the best solution (i.e., the local optimum) within the previously identified attraction basin is attained. The goal of thresheld convergence is to improve the performance of search techniques during the first phase of exploration. The effectiveness of thresheld convergence has been demonstrated through applications to existing metaheuristics such as particle swarm optimization and differential evolution, and through the development of novel metaheuristics such as Minimum Population Search and Leaders and Followers.
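    The threshold function referred to throughout these abstracts is typically a decaying function of the evaluations already used; a commonly used form, written here as a hedged sketch with alpha and gamma as illustrative control parameters, is:

```latex
\tau = \alpha \cdot R \cdot \left( \frac{\mathrm{FE}_{\max} - \mathrm{FE}_{\mathrm{used}}}{\mathrm{FE}_{\max}} \right)^{\gamma}
```

    where R is the length of the search space's main diagonal, alpha sets the initial threshold as a fraction of R, and gamma controls how quickly the threshold decays to zero (allowing normal convergence late in the run).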

    Particle swarm optimization with thresheld convergence

    Many heuristic search techniques have concurrent processes of exploration and exploitation. In particle swarm optimization, an improved 'pbest' position can represent a new, more promising region of the search space (exploration) or a better solution within the current region (exploitation). The latter can interfere with the former, since the identification of a new, more promising region depends on finding a (random) solution in that region which is better than the current 'pbest'. Ideally, every sampled solution will have the same relative fitness with respect to its nearby local optimum – finding the best region to exploit then becomes the problem of finding the best random solution. However, a locally optimized solution from a poor region of the search space can be better than a random solution from a good region of the search space. Since exploitation can interfere with subsequent/concurrent exploration, it should be prevented during the early stages of the search process. In thresheld convergence, early exploitation is “held” back by a threshold function. Experiments show that the addition of thresheld convergence to particle swarm optimization can lead to large performance improvements in multi-modal search spaces.
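    As a brief, hedged illustration of how the pbest update described above can be "held" back (the function and parameter names below are assumptions, not taken from the paper): an improved position is only accepted as the new pbest when it lies at least a threshold distance tau away from the current pbest.

```python
import numpy as np

def update_pbest_thresheld(pbest_pos, pbest_fit, new_pos, new_fit, tau):
    # Accept the new position as pbest only if it improves the fitness AND
    # lies at least tau away from the current pbest; short exploitative
    # steps are rejected ("held" back) while tau is still large.
    if new_fit < pbest_fit and np.linalg.norm(new_pos - pbest_pos) >= tau:
        return new_pos, new_fit
    return pbest_pos, pbest_fit
```

    In a full PSO loop, tau would be recomputed each iteration from a decaying schedule such as the one sketched earlier, so that ordinary convergence is restored late in the run.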

    Simulated annealing with thresheld convergence

    Stochastic search techniques for multi-modal search spaces require the ability to balance exploration with exploitation. Exploration is required to find the best region, and exploitation is required to find the best solution (i.e., the local optimum) within this region. Compared to hill climbing, which is purely exploitative, simulated annealing probabilistically allows "backward" steps which facilitate exploration. However, the balance between exploration and exploitation in simulated annealing is biased towards exploitation - improving moves are always accepted, so local (greedy) search steps can occur at even the earliest stages of the search process. The purpose of "thresheld convergence" is to have these early-stage local search steps "held" back by a threshold function. It is hypothesized that early local search steps can interfere with the effectiveness of a search technique's (concurrent) mechanisms for global search. Experiments show that the addition of thresheld convergence to simulated annealing can lead to significant performance improvements in multi-modal search spaces.
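    One plausible reading of this mechanism, sketched below with assumed helper names (propose, distance) rather than details from the paper, is to reject improving moves that are shorter than the current threshold while leaving the usual Metropolis rule for non-improving moves untouched.

```python
import math
import random

def sa_thresheld_step(x, fx, f, propose, distance, temperature, tau):
    # One simulated-annealing step with thresheld convergence (illustrative sketch).
    y = propose(x)
    fy = f(y)
    if fy < fx:
        if distance(x, y) >= tau:
            return y, fy      # sufficiently long improving step: accept
        return x, fx          # short greedy step: "held" back as early exploitation
    if random.random() < math.exp(-(fy - fx) / temperature):
        return y, fy          # probabilistic "backward" step that aids exploration
    return x, fx
```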

    Minimum Population Search, an Application to Molecular Docking

    Computer modeling of protein-ligand interactions is one of the most important phases in a drug design process. Part of the process involves the optimization of highly multi-modal objective (scoring) functions. This research presents the Minimum Population Search heuristic as an alternative for solving these global unconstrained optimization problems. To determine the effectiveness of Minimum Population Search, a comparison with seven state-of-the-art search heuristics is performed. Being specifically designed for the optimization of large-scale multi-modal problems, Minimum Population Search achieves excellent results on all of the tested complexes, especially when the number of available function evaluations is strongly reduced. A first step is also made toward the design of hybrid algorithms based on the exploratory power of Minimum Population Search. Computational results show that hybridization leads to a further improvement in performance.

    Multi-objective optimization approach based on Minimum Population Search algorithm

    Article URL on the journal's website: https://www.upo.es/revistas/index.php/gecontec/article/view/4049
    Minimum Population Search is a recently developed metaheuristic for the optimization of mono-objective continuous problems, which has proven very effective at optimizing large-scale and multi-modal problems. One of its key characteristics is the ability to perform an efficient exploration of large-dimensional spaces. We assume that this feature may prove useful when optimizing multi-objective problems, so this paper presents a study of how it can be adapted to a multi-objective approach. We performed experiments and comparisons with five multi-objective selection processes and tested the effectiveness of Thresheld Convergence on this class of problems. Following this analysis, we propose a multi-objective variant of the algorithm. The proposed algorithm is compared with the multi-objective evolutionary algorithms IBEA, NSGA-II, and SPEA2 on several well-known test problems. Subsequently, we present two hybrid approaches with IBEA and NSGA-II; these hybrids further improve the achieved results.

    A simple strategy for maintaining diversity and reducing crowding in differential evolution

    Differential evolution (DE) is a widely effective population-based continuous optimiser that requires convergence to automatically scale its moves. However, once its population has begun to converge, its ability to conduct global search is diminished, as the difference vectors used to generate new solutions are derived from the current population members' positions. In multi-modal search spaces DE may converge too rapidly, i.e., before adequately exploring the search space to identify the best region(s) in which to conduct its finer-grained search. Traditional crowding or niching techniques can be computationally costly or fail to compare new solutions with the most appropriate existing population member. This paper proposes a simple intervention strategy that compares each new solution with the population member it is most likely to be near, and prevents those moves that are below a threshold that decreases over the algorithm's run, allowing the algorithm to ultimately converge. Comparisons with a standard DE algorithm on a number of multi-modal problems indicate that the proposed technique can achieve real and sizable improvements.
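    A minimal sketch of such an intervention, with the "most likely to be near" member approximated here simply as the nearest population member by Euclidean distance (an assumption for illustration): a trial that would land within the decaying radius tau of another member is prevented, and otherwise the normal greedy replacement of the target applies.

```python
import numpy as np

def accept_trial(pop, fit, i, trial, f_trial, tau):
    # Compare the trial with the existing member it is nearest to (other than
    # its own target) and prevent moves that would crowd that member.
    others = np.delete(np.arange(len(pop)), i)
    nearest = others[np.argmin(np.linalg.norm(pop[others] - trial, axis=1))]
    if np.linalg.norm(pop[nearest] - trial) < tau:
        return False                      # move prevented: would crowd a neighbour
    if f_trial < fit[i]:                  # standard DE greedy selection
        pop[i], fit[i] = trial, f_trial
        return True
    return False
```

    Because tau decreases over the run, the prevention mainly applies early on; late in the run the strategy reduces to standard DE selection and the population can converge.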

    Metapopulation Differential Co-Evolution of Trading Strategies in a Model Financial Market


    Experimental analysis on the operation of Particle Swarm Optimization

    In Particle Swarm Optimization, it has been observed that swarms often stall as opposed to converge. A stall occurs when all of the forward progress that could occur is instead rejected as Failed Exploration. Since the swarm's particles are in good regions of the search space with the potential to make more progress, the introduction of perturbations to the pbest positions can lead to significant improvements in the performance of standard Particle Swarm Optimization. The pbest perturbation has been supported by a line search technique that can identify unimodal, globally convex, and non-globally convex search spaces, as well as the approximate size of the attraction basin. A deeper analysis of the stall condition reveals that it involves clusters of particles that are performing exploitation, and these clusters are separated by individual particles that are performing exploration. This stall pattern can be identified by a newly developed method that is efficient, accurate, real-time, and search-space independent. A more targeted (heterogeneous) modification for stalls is presented for globally convex search spaces.
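    The pbest perturbation mentioned above can be illustrated with a small, assumed sketch (the Gaussian form and the scale fraction are illustrative choices, not details from the paper): when a stall is detected, each pbest position is nudged by a small random step so that forward progress can resume.

```python
import numpy as np

def perturb_pbest(pbest_pos, lower, upper, scale=0.01, seed=None):
    # Add a small Gaussian perturbation, proportional to the search range,
    # to every pbest position, then clip back into the feasible box.
    rng = np.random.default_rng(seed)
    step = rng.normal(0.0, scale * (upper - lower), size=pbest_pos.shape)
    return np.clip(pbest_pos + step, lower, upper)
```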