Multi-population methods with adaptive mutation for multi-modal optimization problems
This paper presents an efficient scheme to locate multiple peaks in multi-modal optimization problems using genetic algorithms (GAs). Premature convergence arises from the loss of diversity; multi-population techniques can be applied to maintain both the diversity of the population and the convergence capacity of GAs. The proposed scheme combines the multi-population approach with an adaptive mutation operator, which determines two different mutation probabilities for different sites of the solutions. The probabilities are updated during the evolution process according to the fitness and distribution of solutions in the search space. Experimental results on a set of benchmark problems demonstrate the performance of the proposed algorithm in comparison with relevant algorithms.
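The adaptive mutation idea can be sketched as follows. This is a hypothetical illustration, not the paper's exact update rule (which also uses fitness information): sites where a subpopulation has nearly converged receive the higher of two mutation probabilities, re-injecting diversity exactly where it has been lost.

```python
import random

# Hypothetical sketch: each subpopulation assigns p_high to gene sites where
# most individuals share the same allele (low diversity), and p_low elsewhere.
def adaptive_mutate(subpop, p_low=0.01, p_high=0.2, threshold=0.8):
    n, length = len(subpop), len(subpop[0])
    mutated = []
    for ind in subpop:
        child = list(ind)
        for site in range(length):
            # fraction of individuals carrying allele 1 at this site
            ones = sum(other[site] for other in subpop) / n
            converged = max(ones, 1 - ones) >= threshold
            p = p_high if converged else p_low
            if random.random() < p:
                child[site] = 1 - child[site]  # flip the bit
        mutated.append(child)
    return mutated

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
new_pop = adaptive_mutate(pop)
```

With a multi-population scheme, each island would apply this operator to its own members, so convergence in one island does not erase diversity elsewhere.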
Performance analysis for genetic algorithms.
Genetic algorithms have been shown effective for solving complex optimization problems such as job scheduling, machine learning, pattern recognition, and assembly planning. Due to the random processes involved in genetic algorithms, analyzing their performance characteristics is a challenging research topic. This dissertation studies methods to analyze the convergence of genetic algorithms and to investigate whether modifications made to them, such as varying the operator rates during the iterative process, improve their performance. Both statistical analysis, used to investigate different modifications to the genetic algorithm, and probability analysis, used to derive the expectation of convergence, are employed in the study. The Wilcoxon signed rank test is used to examine the effects of changing parameters in genetic algorithms during the iterations. A Markov chain is derived to show how the random selection process affects genetic evolution, including the so-called genetic drift and preferential selection. A link distance is introduced as a numerical index for studying the convergence process of order-based genetic algorithms. Also studied are the effects of random selection, the mutation operator, and the combination of both on the expected average link distance. The genetic drift is shown to enforce convergence exponentially as the number of iterations increases. The mutation operator, on the other hand, suppresses convergence. The combined results of these two effects lead to a general formula for estimating the expected number of iterations needed to achieve convergence for the order-based genetic algorithm with selection and mutation, and provide important insights into how order-based genetic algorithms converge.
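A minimal sketch of a link-distance-style index for order-based (permutation) genetic algorithms. The dissertation's exact definition may differ; here the distance between two cyclic tours is counted as the number of adjacent-element links present in one tour but absent from the other:

```python
# Illustrative link distance for permutations encoded as cyclic tours:
# identical tours have distance 0; the count grows as their link sets diverge.
def links(perm):
    # undirected adjacency pairs of a cyclic tour
    return {frozenset((perm[i], perm[(i + 1) % len(perm)]))
            for i in range(len(perm))}

def link_distance(p, q):
    return len(links(p) - links(q))

print(link_distance([0, 1, 2, 3], [0, 1, 2, 3]))  # -> 0
print(link_distance([0, 1, 2, 3], [0, 2, 1, 3]))  # -> 2
```

Averaging such a distance over all pairs in the population gives a scalar that shrinks as the population converges, which is the role the link distance plays in the convergence analysis above.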
Quantum Genetic Algorithm with Individuals in Multiple Registers
Genetic algorithms are heuristic optimization techniques inspired by
Darwinian evolution, which are characterized by successfully finding robust
solutions for optimization problems. Here, we propose a subroutine-based
quantum genetic algorithm with individuals codified in independent registers.
This distinctive codification allows our proposal to depict all the fundamental
elements characterizing genetic algorithms, i.e. population-based search with
selection of many individuals, crossover, and mutation. Our subroutine-based
construction permits us to consider several variants of the algorithm. For
instance, we first analyze the performance of two different quantum cloning
machines, a key component of the crossover subroutine. Indeed, we study two
paradigmatic examples, namely, the biomimetic cloning of quantum observables
and the Bužek-Hillery universal quantum cloning machine, observing faster
average convergence of the former but better final populations with the latter.
Additionally, we analyze the effect of introducing a mutation subroutine,
finding that it has only a minor impact on average performance. Furthermore,
we introduce a quantum channel analysis to prove the exponential convergence
of our algorithm and even predict its convergence ratio. This tool could be
extended to formally prove results on the convergence of general non-unitary
iteration-based algorithms.
Inheritance-Based Diversity Measures for Explicit Convergence Control in Evolutionary Algorithms
Diversity is an important factor in evolutionary algorithms to prevent
premature convergence towards a single local optimum. In order to maintain
diversity throughout the process of evolution, various means exist in
literature. We analyze approaches to diversity that (a) have an explicit and
quantifiable influence on fitness at the individual level and (b) require no
(or very little) additional domain knowledge such as domain-specific distance
functions. We also introduce the concept of genealogical diversity in a broader
study. We show that employing these approaches can help evolutionary algorithms
for global optimization in many cases.
Comment: GECCO '18: Genetic and Evolutionary Computation Conference, 2018, Kyoto, Japan
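One way a genealogy-based diversity measure could be formulated (an assumed sketch for illustration, not necessarily the paper's definition): each individual carries the set of its ancestors' IDs, and pairwise diversity is one minus the Jaccard overlap of those sets. No domain-specific distance function is needed, matching requirement (b) above.

```python
# Hypothetical genealogical diversity between two individuals, given the sets
# of ancestor IDs recorded during evolution: 0.0 for identical lineages,
# 1.0 for fully disjoint ones.
def genealogical_diversity(anc_a, anc_b):
    union = anc_a | anc_b
    if not union:
        return 0.0  # two fresh individuals with no recorded ancestry
    return 1.0 - len(anc_a & anc_b) / len(union)

print(genealogical_diversity({1, 2, 3}, {1, 2, 3}))  # siblings -> 0.0
print(genealogical_diversity({1, 2}, {3, 4}))        # unrelated -> 1.0
```

Such a score can be blended into fitness at the individual level, giving the explicit, quantifiable influence on selection that requirement (a) asks for.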
An enhanced artificial neural network with a shuffled complex evolutionary global optimization with principal component analysis
The classical Back-Propagation (BP) scheme with gradient-based optimization for training Artificial Neural Networks (ANNs) suffers from many drawbacks, such as premature convergence and a tendency to become trapped in local optima. Therefore, as alternatives to BP and gradient-based optimization schemes, various Evolutionary Algorithms (EAs), i.e., Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Simulated Annealing (SA), and Differential Evolution (DE), have gained popularity in the field of ANN weight training. This study applied a new efficient and effective Shuffled Complex Evolutionary Global Optimization Algorithm with Principal Component Analysis – University of California Irvine (SP-UCI) to the weight training process of a three-layer feed-forward ANN. A large-scale numerical comparison was conducted among the SP-UCI-, PSO-, GA-, SA-, and DE-based ANNs on 17 benchmark, complex, real-world datasets. Results show that the SP-UCI-based ANN outperforms the other EA-based ANNs in convergence and generalization, suggesting that the SP-UCI algorithm has good potential for supporting ANN weight training in real-world problems. In addition, the suitability of different kinds of EAs for training ANNs is discussed. The large-scale comparison experiments conducted in this paper serve as fundamental references for selecting proper ANN weight training algorithms in practice.
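The core idea of EA-based weight training can be illustrated with a plain (1+λ) evolution strategy standing in for SP-UCI (whose shuffled-complex and PCA machinery is beyond a short sketch): the network's weights are flattened into one real vector and evolved directly against the training loss, with no gradients at all. All names and the toy dataset below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # XOR-like toy target

def forward(w, X):
    # tiny 2-4-1 feed-forward net, weights unpacked from one flat vector
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

best = rng.normal(size=17)
start = loss(best)
for _ in range(300):
    # lambda = 20 Gaussian mutants around the current best (elitist selection)
    children = best + 0.1 * rng.normal(size=(20, 17))
    losses = [loss(c) for c in children]
    i = int(np.argmin(losses))
    if losses[i] < loss(best):
        best = children[i]

print(start, "->", loss(best))
```

Any of the EAs compared above (PSO, GA, SA, DE, SP-UCI) slot into the same frame: only the rule that proposes and selects new weight vectors changes.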
Multi-agent collaborative search : an agent-based memetic multi-objective optimization algorithm applied to space trajectory design
This article presents an algorithm for multi-objective optimization that blends together a number of heuristics. A population of agents combines heuristics that aim at exploring the search space both globally and in a neighbourhood of each agent. These heuristics are complemented with a combination of a local and a global archive. The novel agent-based algorithm is tested first on a set of standard problems and then on three specific problems in space trajectory design. Its performance is compared against a number of state-of-the-art multi-objective optimization algorithms that use Pareto dominance as the selection criterion: the non-dominated sorting genetic algorithm (NSGA-II), Pareto archived evolution strategy (PAES), multiple objective particle swarm optimization (MOPSO), and multiple trajectory search (MTS). The results demonstrate that the agent-based search can identify parts of the Pareto set that the other algorithms were not able to capture. Furthermore, convergence is statistically better, although the variance of the results is in some cases higher.
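The Pareto dominance test used as the selection criterion by the comparison algorithms above is small enough to state exactly (for minimization): a solution dominates another if it is no worse in every objective and strictly better in at least one.

```python
# Pareto dominance for minimization problems: objective vectors as tuples.
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

print(dominates((1.0, 2.0), (1.5, 2.0)))  # -> True
print(dominates((1.0, 3.0), (1.5, 2.0)))  # trade-off, neither dominates -> False
```

Solutions not dominated by any other form the Pareto set whose coverage the article uses to compare the agent-based search against NSGA-II, PAES, MOPSO, and MTS.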
Global Evolutionary Algorithms in the Design of Electromagnetic Band Gap Structures with Suppressed Surface Waves Propagation
The paper focuses on the automated design and optimization of electromagnetic band gap structures suppressing the propagation of surface waves. For the optimization, we use several global evolutionary algorithms: the genetic algorithm with single-point crossover (GAs) and with multi-point crossover (GAm), differential evolution (DE), and particle swarm optimization (PSO). The algorithms are mutually compared in terms of convergence velocity and accuracy. The developed technique is universal (applicable to any unit cell geometry). The method is based on dispersion diagram calculation in CST Microwave Studio (CST MWS) and optimization in Matlab. A design example of a mushroom structure exhibiting both electromagnetic band gap (EBG) and artificial magnetic conductor (AMC) properties in the required frequency band is presented.
System Architecture Optimization Using Hidden Genes Genetic Algorithms with Applications in Space Trajectory Optimization
In this dissertation, the concept of hidden genes genetic algorithms is developed. In system architecture optimization problems, the topology of the solution is unknown and, hence, the number of design variables is variable. Hidden genes genetic algorithms are genetic-algorithm-based methods developed to handle such problems by hiding some genes in the chromosomes. The genes in hidden genes genetic algorithms evolve through selection, mutation, and crossover operations. To determine whether a gene is hidden, binary tags are assigned to the genes; the values of the tags determine the status of the genes. Different mechanisms are proposed for the evolution of the tags: some utilize stochastic operations, while others are based on deterministic operations. All the proposed mechanisms are tested on mathematical and space trajectory optimization problems. Moreover, Markov chain models of the mechanisms are derived and their convergence is investigated analytically. The results show that the proposed mechanisms are capable of searching for the optimal solution by autonomously enabling the algorithms to assign the hidden genes.
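The tagged-chromosome idea can be sketched as follows (an illustrative structure, not the dissertation's exact encoding): hidden genes are carried through crossover and mutation like any others, but only genes whose tag is 1 participate in the evaluated, variable-length phenotype.

```python
import random

class HiddenGenesChromosome:
    def __init__(self, genes, tags):
        self.genes = genes  # full-length gene list, always the same size
        self.tags = tags    # binary tags: 1 = active, 0 = hidden

    def phenotype(self):
        # only active genes define the solution's (variable) topology
        return [g for g, t in zip(self.genes, self.tags) if t == 1]

    def mutate_tags(self, p=0.1):
        # one of the stochastic tag-evolution mechanisms: flip each tag
        # independently with probability p; genes themselves are untouched
        self.tags = [1 - t if random.random() < p else t for t in self.tags]

c = HiddenGenesChromosome([3.0, 1.5, 7.2, 0.4], [1, 0, 1, 1])
print(c.phenotype())  # hidden second gene is excluded -> [3.0, 7.2, 0.4]
```

Because chromosomes keep a fixed length, standard crossover applies unchanged, while the evolving tags let the algorithm search over architectures with different numbers of design variables.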
Quantum vs classical genetic algorithms: A numerical comparison shows faster convergence
Genetic algorithms are heuristic optimization techniques inspired by
Darwinian evolution. Quantum computation is a new computational paradigm which
exploits quantum resources to speed up information processing tasks. Therefore,
it is sensible to explore the potential enhancement in the performance of
genetic algorithms by introducing quantum degrees of freedom. Along this line,
a modular quantum genetic algorithm has recently been proposed, with
individuals encoded in independent registers comprising exchangeable quantum
subroutines [arXiv:2203.15039], which leads to different variants. Here, we
perform a numerical comparison among quantum and classical genetic algorithms,
which was missing from the previous literature. In order to isolate the effect of the
quantum resources in the performance, the classical variants have been selected
to resemble the fundamental characteristics of the quantum genetic algorithms.
Under these conditions, we encode an optimization problem in a two-qubit
Hamiltonian and face the problem of finding its ground state. A numerical
analysis based on a sample of 200 random cases shows that some quantum variants
outperform all classical ones in convergence speed towards a near-optimal
result. Additionally, we have considered a diagonal Hamiltonian and the
Hamiltonian of the hydrogen molecule to complete the analysis with two relevant
use-cases. If this advantage holds for larger systems, quantum genetic
algorithms would provide a new tool to address optimization problems with
quantum computers.
Comment: 7 pages, 4 figures, submitted to the IEEE Symposium Series On
Computational Intelligence 202
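The benchmark task described above, finding the ground state of a two-qubit Hamiltonian, can be solved exactly by diagonalization, which provides the reference answer any quantum or classical genetic variant is scored against. The Hamiltonian below is an assumed toy instance, not one from the paper:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I = np.eye(2)

# assumed example Hamiltonian on two qubits: H = Z (x) Z + 0.5 * Z (x) I
H = np.kron(Z, Z) + 0.5 * np.kron(Z, I)

# eigh returns eigenvalues in ascending order for Hermitian matrices,
# so index 0 is the ground state
evals, evecs = np.linalg.eigh(H)
ground_energy = evals[0]
ground_state = evecs[:, 0]
print(ground_energy)  # -> -1.5 (state |10>)
```

For two qubits the 4x4 matrix is trivial to diagonalize; the interest of a genetic search lies in larger systems, where exact diagonalization becomes intractable.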