
    On-line Search History-assisted Restart Strategy for Covariance Matrix Adaptation Evolution Strategy

    A restart strategy helps the covariance matrix adaptation evolution strategy (CMA-ES) increase the probability of finding the global optimum, since a single-run CMA-ES is easily trapped in local optima. In this paper, the continuous non-revisiting genetic algorithm (cNrGA) is used to help CMA-ES achieve multiple restarts from different sub-regions of the search space. The resulting CMA-ES with an on-line search history-assisted restart strategy (HR-CMA-ES) is proposed. The entire on-line search history of cNrGA is stored in a binary space partitioning (BSP) tree, which is effective for performing local search. A frequently sampled sub-region is reflected by a deep position in the BSP tree. When leaf nodes are located deeper than a threshold, the corresponding sub-region is considered a region of interest (ROI). In HR-CMA-ES, cNrGA is responsible for global exploration and for suggesting ROIs, within or around which CMA-ES performs exploitation. CMA-ES restarts independently in each suggested ROI. The non-revisiting mechanism of cNrGA avoids suggesting the same ROI a second time. Experimental results on the CEC 2013 and 2017 benchmark suites show that HR-CMA-ES performs better than both CMA-ES and cNrGA. A positive synergy is observed from the memetic cooperation of the two algorithms.
    Comment: 8 pages, 9 figures
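The depth-based ROI idea can be sketched in one dimension as follows. This is my own illustration, not the authors' code: each evaluated point splits its interval of a BSP tree, so heavily sampled regions produce deep leaves, and leaves deeper than a threshold are flagged as candidate ROIs. The class and function names are assumptions for illustration.

```python
# Toy 1-D BSP archive: deep leaves mark frequently sampled sub-regions (ROIs).
# Assumes distinct sample values (a duplicate would split a zero-width interval).

class BSPNode:
    def __init__(self, lo, hi, depth=0):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.point = None              # stored solution, if occupied leaf
        self.left = self.right = None

    def insert(self, x):
        """Insert a sampled point; return the depth of the leaf it lands in."""
        if self.left is not None:                      # internal node: descend
            child = self.left if x < self.mid else self.right
            return child.insert(x)
        if self.point is None:                         # empty leaf: store
            self.point = x
            return self.depth
        # occupied leaf: split at the midpoint between old and new point
        self.mid = (self.point + x) / 2.0
        self.left = BSPNode(self.lo, self.mid, self.depth + 1)
        self.right = BSPNode(self.mid, self.hi, self.depth + 1)
        old, self.point = self.point, None
        (self.left if old < self.mid else self.right).point = old
        return self.insert(x)

def rois(node, threshold, out=None):
    """Collect (lo, hi) of occupied leaves deeper than `threshold`."""
    if out is None:
        out = []
    if node.left is None:
        if node.point is not None and node.depth > threshold:
            out.append((node.lo, node.hi))
    else:
        rois(node.left, threshold, out)
        rois(node.right, threshold, out)
    return out

root = BSPNode(0.0, 1.0)
for x in [0.50, 0.52, 0.51, 0.515]:    # samples clustered near 0.5
    root.insert(x)
print(rois(root, threshold=2))         # deep leaves sit around the cluster
```

Repeated sampling near 0.5 drives those leaves deeper than the threshold, so the flagged intervals concentrate around the cluster.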

    Continuous non-revisiting genetic algorithm

    The non-revisiting genetic algorithm (NrGA) is extended to handle continuous search spaces. The extended model, the continuous NrGA (cNrGA), employs the same tree-structured archive as NrGA to memorize the evaluated solutions, in which the search space is divided into non-overlapping partitions according to the distribution of the solutions. cNrGA is a bi-module evolutionary algorithm consisting of a genetic algorithm module (GAM) and an adaptive mutation module (AMM). When GAM generates an offspring, the offspring is sent to AMM and mutated according to the density of the solutions stored in the memory archive. A high solution-density at a point in the search space implies a high probability that the point is close to the optimum, so a near search is suggested; conversely, a far search is recommended for a point with low solution-density. Benefiting from the space partitioning scheme, a fast solution-density approximation is obtained. The adaptive mutation scheme also naturally avoids generating out-of-bound solutions. The performance of cNrGA is tested on 14 benchmark functions with dimensions ranging from 2 to 40. It is compared with a real-coded GA, differential evolution, the covariance matrix adaptation evolution strategy and two improved particle swarm optimizers. The simulation results show that cNrGA outperforms the other algorithms on multi-modal function optimization.
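The near/far search rule can be sketched as below. This is an illustration under stated assumptions, not the authors' code: the paper derives density from BSP partition sizes, whereas here the distance to the nearest archived solution stands in as a sparsity proxy, and a clamp stands in for the partition-based in-bounds guarantee.

```python
import random

# Density-adaptive mutation sketch: a dense neighbourhood (small nearest-gap)
# gives a small "near" step; a sparse one gives a large "far" step.

def adaptive_mutate(offspring, archive, lo=0.0, hi=1.0, rng=random):
    """Mutate a 1-D offspring with a step scaled by local sparsity."""
    nearest = min(abs(offspring - a) for a in archive)
    step = nearest               # sparse region -> large gap -> far search
    x = offspring + rng.uniform(-step, step)
    # The paper's partition-based mutation stays in-bounds by construction;
    # this clamp is a simplified stand-in.
    return min(max(x, lo), hi)

random.seed(1)
archive = [0.48, 0.50, 0.51, 0.90]       # evaluated solutions so far
print(adaptive_mutate(0.505, archive))   # dense area: tiny step
print(adaptive_mutate(0.75, archive))    # sparse area: larger step
```

The mutant near the cluster at 0.5 barely moves, while the mutant in the empty region between 0.51 and 0.90 takes a much larger step.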

    Continuous non-revisiting genetic algorithm with random search space re-partitioning and one-gene-flip mutation

    Special Session on Evolutionary Computer Vision. In the continuous non-revisiting genetic algorithm (cNrGA), different orderings of the same solution set lead to different density estimates and hence different mutation step sizes. As a result, the performance of cNrGA depends on the order in which solutions are evaluated. In this paper, we propose to remove this dependence with a search space re-partitioning strategy. At each iteration, the strategy re-shuffles the solutions into a random order. The re-ordered sequence is then used to construct a new density tree, which leads to a new set of space partitions. Afterwards, instead of randomly picking a mutant within a partition, a new adaptive one-gene-flip mutation is applied. Motivated by the fact that the proposed adaptive mutation concerns only a small number of partitions, we propose a new density tree construction algorithm. This algorithm does not partition sub-regions that contain no individual to be mutated, which simplifies the tree topology and speeds up construction. The new cNrGA integrated with the proposed re-partitioning strategy (cNrGA/RP/OGF) is examined on 19 benchmark functions at dimensions ranging from 2 to 40. The simulation results show that cNrGA/RP/OGF is significantly superior to the original cNrGA on most of the test functions. Its average performance is also better than those of six benchmark EAs. © 2010 IEEE.
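The order dependence that motivates re-partitioning can be demonstrated with a toy construction. This is my own simplified stand-in, not the paper's density tree: each newly inserted point cuts the space at the midpoint to its nearest already-inserted point, so the resulting partition boundaries depend on insertion order, which is exactly what per-iteration re-shuffling averages out.

```python
# Toy order-dependent partitioning: same points, different insertion orders,
# different cut positions.

def splits_for_order(points):
    """Cut positions produced by inserting points one at a time: each new
    point adds a cut at the midpoint to its nearest already-inserted point."""
    inserted, cuts = [points[0]], []
    for p in points[1:]:
        nearest = min(inserted, key=lambda q: abs(q - p))
        cuts.append((p + nearest) / 2.0)
        inserted.append(p)
    return sorted(cuts)

order_a = [0.1, 0.4, 0.45, 0.9]
order_b = [0.4, 0.1, 0.9, 0.45]          # same set, shuffled order
print(splits_for_order(order_a))
print(splits_for_order(order_b))         # a different partition of [0, 1]
```

Because the cuts differ between the two orders, any density estimate read off the partitions differs too; rebuilding the tree from a fresh random order each iteration removes this bias on average.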

    A study of operator and parameter choices in non-revisiting genetic algorithm

    We study empirically the effects of operator and parameter choices on the performance of the non-revisiting genetic algorithm (NrGA). On a suite of 14 benchmark functions that includes both uni-modal and multi-modal functions, NrGA is found to be insensitive to the axis resolution of the problem, which is a desirable property. The experiments show that, among operators, crossover is essential for NrGA; the best crossover operator is uniform crossover and the best selection operator is elitist selection. For parameters, a small population should be used, with a population size strictly larger than 1; performance increases monotonically with the crossover rate, and the best crossover rate found is 0.5. The results of this paper provide empirical guidelines for the operator design and parameter settings of NrGA. © 2009 IEEE.

    Continuous non-revisiting genetic algorithm with overlapped search sub-region

    In the continuous non-revisiting genetic algorithm (cNrGA), the search space is partitioned into sub-regions according to the distribution of evaluated solutions. Each partitioned sub-region serves as a mutation range, so the corresponding mutation is adaptive and parameter-less. As pointed out by Chow and Yuen, the boundary condition of this mutation is so restrictive that the exploitative power of cNrGA is reduced. In this paper, we tackle this structural problem with a new formulation of the mutation range. When sub-regions are formulated such that a certain overlap exists between adjacent sub-regions, a soft boundary is created that allows an individual to move from one sub-region to another with better fitness. The modified algorithm is named cNrGA with overlapped search sub-regions (cNrGA/OL/OGF). Compared with another approach to this problem, the continuous non-revisiting genetic algorithm with a randomly re-partitioned BSP tree (cNrGA/RP/OGF), it has an advantage in processing speed. The proposed algorithm is examined on 34 benchmark functions at dimensions ranging from 2 to 40. The results show that it is superior to the original cNrGA, cNrGA/RP/OGF and the covariance matrix adaptation evolution strategy (CMA-ES). © 2012 IEEE.
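One simple way to realize the soft boundary described above is to widen each partition by a fraction of its own width. This is a hedged sketch of the idea, not the paper's exact formulation; the function name and the 25% overlap fraction are assumptions for illustration.

```python
# Overlapped sub-regions: widening each 1-D partition lets mutation ranges
# of adjacent partitions overlap, so an individual near a hard boundary can
# cross into the neighbouring region.

def overlap(region, frac=0.25, lo=0.0, hi=1.0):
    """Widen (a, b) by `frac` of its width on both sides, clipped to [lo, hi]."""
    a, b = region
    pad = frac * (b - a)
    return (max(lo, a - pad), min(hi, b + pad))

partitions = [(0.0, 0.3), (0.3, 0.55), (0.55, 1.0)]   # hard partition of [0, 1]
soft = [overlap(p) for p in partitions]
print(soft)   # adjacent ranges now overlap across the old hard boundaries
```

After widening, each pair of neighbouring ranges shares an interval around the former boundary, which is what lets a fitter mutant land on the other side.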

    A non-revisiting simulated annealing algorithm

    In this article, a non-revisiting simulated annealing algorithm (NrSA) is proposed. NrSA integrates the non-revisiting scheme with standard simulated annealing (SA) and guarantees that no generated neighbour has been visited before. This property reduces the cost of evaluating time-consuming and expensive objective functions, such as those arising in surface registration, optimized design, and energy management of heating, ventilating and air-conditioning systems. Preventing function re-evaluation also speeds up convergence. Furthermore, owing to the nature of the non-revisiting scheme, the non-revisited solutions it returns can be treated as self-adaptive, so no parametric neighbour-picking scheme is involved; NrSA can therefore be regarded as a parameter-less SA. The simulation results show that NrSA is superior to adaptive SA (ASA) on both uni-modal and multi-modal functions with dimensions up to 40. We also show that the overhead and archive size of NrSA are insignificant, so it is practical for real-world applications. © 2008 IEEE.
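The non-revisiting guarantee can be sketched with an integer search space. This is my own minimal illustration, not the paper's method: the paper stores the archive in a tree and derives the adjusted neighbour from it, whereas here a plain set plus a widening move is a simplified stand-in.

```python
import random

# Non-revisiting neighbour generation: every proposal is checked against an
# archive of visited points, so the (expensive) objective is never
# re-evaluated at the same point.

def non_revisiting_neighbor(current, visited, rng=random):
    """Propose an integer neighbour of `current` not yet in `visited`."""
    step = 1
    while True:
        candidate = current + rng.choice([-step, step])
        if candidate not in visited:
            visited.add(candidate)
            return candidate
        step += 1          # widen the move until an unvisited point appears

random.seed(7)
visited = {10}
trace = [10]
for _ in range(5):
    trace.append(non_revisiting_neighbor(trace[-1], visited))
print(trace)               # no value is ever visited twice
```

Note how the widening loop plays the role of the self-adaptive step: in a crowded neighbourhood the move automatically grows until it escapes the visited region, with no step-size parameter to tune.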