Average Convergence Rate of Evolutionary Algorithms
In evolutionary optimization, it is important to understand how fast
evolutionary algorithms converge to the optimum per generation, or their
convergence rate. This paper proposes a new measure of the convergence rate,
called average convergence rate. It is a normalised geometric mean of the
reduction ratio of the fitness difference per generation. The calculation of
the average convergence rate is very simple, and it is applicable to most
evolutionary algorithms on both continuous and discrete optimization. A
theoretical study of the average convergence rate is conducted for discrete
optimization. Lower bounds on the average convergence rate are derived. The
limit of the average convergence rate is analysed and then the asymptotic
average convergence rate is proposed.
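The measure can be sketched in a few lines, assuming the usual definition R_t = 1 − (|f* − f_t| / |f* − f_0|)^(1/t), i.e. one minus the geometric mean of the per-generation fitness-difference reduction ratios (the function name and example values below are illustrative, not from the paper):

```python
def average_convergence_rate(fitness_history, f_optimal):
    """Average convergence rate after t generations, assuming the definition
    R_t = 1 - (|f* - f_t| / |f* - f_0|)^(1/t): one minus the geometric mean
    of the per-generation fitness-difference ratios."""
    t = len(fitness_history) - 1
    e0 = abs(f_optimal - fitness_history[0])   # initial fitness difference
    et = abs(f_optimal - fitness_history[-1])  # difference after t generations
    if e0 == 0:                                # already optimal at the start
        return 1.0
    return 1.0 - (et / e0) ** (1.0 / t)

# Example: the fitness difference to the optimum f* = 8 halves each
# generation (8 -> 4 -> 2 -> 1), so the average convergence rate is 0.5.
history = [0.0, 4.0, 6.0, 7.0]
rate = average_convergence_rate(history, f_optimal=8.0)
```

Because the measure only needs the initial and current fitness differences, it can be logged cheaply in any EA run, continuous or discrete.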
An analytic expression of relative approximation error for a class of evolutionary algorithms
An important question in evolutionary computation is how good solutions
evolutionary algorithms can produce. This paper aims to provide an analytic
study of solution quality in terms of the relative approximation error, which
is defined as the difference between 1 and the approximation ratio of the
solution found by an evolutionary algorithm. Since evolutionary algorithms are
iterative methods, the relative approximation error is a function of
generations. With the help of matrix analysis, it is possible to obtain an
exact expression of such a function. In this paper, an analytic expression for
calculating the relative approximation error is presented for a class of
evolutionary algorithms, namely (1+1) strictly elitist evolutionary algorithms.
Furthermore, analytic expressions of the fitness value and the average
convergence rate in each generation are also derived for this class of
evolutionary algorithms. The approach is promising, and it can be extended to
non-elitist or population-based algorithms as well.
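The matrix-analysis idea can be illustrated on a toy fitness-level Markov chain; the transition matrix and fitness values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Toy (1+1) elitist EA on three fitness levels {0, 1, 2}; the optimum has
# fitness f* = 2. Strict elitism makes the fitness-level Markov chain
# upper-triangular: from each level the EA either stays or improves.
P = np.array([[0.6, 0.3, 0.1],    # from level 0
              [0.0, 0.7, 0.3],    # from level 1
              [0.0, 0.0, 1.0]])   # optimum is absorbing
f = np.array([0.0, 1.0, 2.0])     # fitness of each level
p0 = np.array([1.0, 0.0, 0.0])    # start at the worst level

def relative_error(t):
    """Relative approximation error e_t = (f* - E[f_t]) / f*, computed
    exactly from the t-step transition probabilities."""
    pt = p0 @ np.linalg.matrix_power(P, t)
    return (f[-1] - pt @ f) / f[-1]
```

Because P is known in closed form for such toy chains, e_t is an exact function of the generation number t, which is the kind of analytic expression the paper derives.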
A new approach to estimating the expected first hitting time of evolutionary algorithms
Evolutionary algorithms (EAs) have been shown to be very effective in solving practical problems, yet many of their important theoretical issues remain unclear. The expected first hitting time is one of the most important theoretical issues of evolutionary algorithms, since it implies the average computational time complexity. In this paper, we establish a bridge between the expected first hitting time and another important theoretical issue, i.e., the convergence rate. Through this bridge, we propose a new general approach to estimating the expected first hitting time. Using this approach, we analyze EAs with different configurations, including three mutation operators, with/without population, a recombination operator and a time-variant mutation operator, on a hard problem. The results show that the proposed approach is helpful for analyzing a broad range of evolutionary algorithms. Moreover, we give an explanation of what makes a problem hard for EAs, and based on this recognition, we prove the hardness of a general problem.
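When an EA's fitness levels form an absorbing Markov chain, the expected first hitting time of the optimum can be read off the transient block via the fundamental matrix. The chain below is an illustrative toy example, not one of the configurations analyzed in the paper:

```python
import numpy as np

# Transient block Q of an absorbing fitness-level Markov chain for a
# (1+1) EA on three levels; the third level (the optimum) is absorbing
# and is excluded from Q.
Q = np.array([[0.6, 0.3],    # from level 0: stay, or move to level 1
              [0.0, 0.7]])   # from level 1: stay, or get absorbed

# Expected first hitting times from each transient state solve
# (I - Q) h = 1, i.e. h = N @ 1 with fundamental matrix N = (I - Q)^{-1}.
hit = np.linalg.solve(np.eye(2) - Q, np.ones(2))
```

From level 1 the EA reaches the optimum with probability 0.3 per step, so its expected hitting time is 1/0.3 = 10/3; from level 0 it is 5 steps.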
Parameters identification of unknown delayed genetic regulatory networks by a switching particle swarm optimization algorithm
This paper presents a novel particle swarm optimization (PSO) algorithm based on Markov chains and a competitive penalized method. The algorithm is developed to solve global optimization problems, with applications in identifying unknown parameters of a class of genetic regulatory networks (GRNs). By using an evolutionary factor, a new switching PSO (SPSO) algorithm is first proposed and analyzed, where the velocity updating equation jumps from one mode to another according to a Markov chain, and the acceleration coefficients depend on the mode switching. Furthermore, a leader competitive penalized multi-learning approach (LCPMLA) is introduced to improve the global search ability and refine the convergent solutions. The LCPMLA can automatically choose a search strategy using a learning and penalizing mechanism. The presented SPSO algorithm is compared with some well-known PSO algorithms in the experiments. It is shown that the SPSO algorithm has faster local convergence speed, higher accuracy and better reliability, resulting in a better balance between global and local search and thus good overall performance. Finally, we utilize the presented SPSO algorithm to identify not only the unknown parameters but also the coupling topology and time delay of a class of GRNs. This research was partially supported by the National Natural Science Foundation of PR China (Grant No. 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No. 200802550007), the Key Creative Project of Shanghai Education Community (Grant No. 09ZZ66), the Key Foundation Project of Shanghai (Grant No. 09JC1400700), the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. GR/S27658/01, the International Science and Technology Cooperation Project of China under Grant No. 2009DFA32050, an International Joint Project sponsored by the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
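A minimal sketch of the mode-switching velocity update is shown below; the two modes, their acceleration coefficients and the Markov transition probabilities are assumptions chosen for illustration, not the authors' tuned values or exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode setup: mode 0 emphasizes the cognitive term
# (exploration), mode 1 the social term (exploitation). A Markov chain
# over the modes governs which velocity-update equation is used.
COEFFS = {0: (2.5, 0.5), 1: (0.5, 2.5)}   # (c1, c2) per mode
MODE_TRANSITIONS = np.array([[0.9, 0.1],
                             [0.2, 0.8]])

def step(v, x, pbest, gbest, mode, w=0.7):
    """One SPSO-style update: apply the velocity equation of the current
    mode, move the particle, then jump to the next mode according to the
    Markov chain (a sketch, not the paper's exact method)."""
    c1, c2 = COEFFS[mode]
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    mode = rng.choice(2, p=MODE_TRANSITIONS[mode])
    return v, x, mode
```

Tying the coefficients to the mode lets the swarm alternate between wide exploration and focused refinement without a fixed schedule.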
Multi-population methods with adaptive mutation for multi-modal optimization problems
This paper presents an efficient scheme to locate multiple peaks on multi-modal optimization problems by using genetic algorithms (GAs). Premature convergence arises from the loss of diversity, so a multi-population technique can be applied to maintain diversity in the population and preserve the convergence capacity of GAs. The proposed scheme combines multiple populations with an adaptive mutation operator, which assigns two different mutation probabilities to different sites of the solutions. The probabilities are updated according to the fitness and distribution of solutions in the search space during the evolution process. The experimental results demonstrate the performance of the proposed algorithm on a set of benchmark problems in comparison with relevant algorithms.
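One common way to make the mutation probability depend on fitness is sketched below; the interpolation rule and constants are illustrative assumptions, not the paper's exact update:

```python
def adaptive_mutation_prob(fitness, f_avg, f_max, p_low=0.01, p_high=0.1):
    """Illustrative fitness-adaptive mutation probability (the formula and
    parameter values are assumptions, not the paper's rule): below-average
    individuals keep the high exploratory rate, while above-average
    individuals get a rate that shrinks linearly toward p_low as their
    fitness approaches the current best."""
    if f_max == f_avg or fitness < f_avg:
        return p_high
    return p_high - (p_high - p_low) * (fitness - f_avg) / (f_max - f_avg)
```

For example, with f_avg = 5 and f_max = 10, the best individual mutates at rate 0.01 while any below-average individual mutates at 0.1, protecting good solutions while keeping diversity among the rest.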
A multi-objective genetic algorithm for the design of pressure swing adsorption
Pressure Swing Adsorption (PSA) is a cyclic separation process that is more advantageous than other separation options for middle-scale processes. Automated tools for the design of PSA
processes would be beneficial for the development of the technology, but their development is
a difficult task due to the complexity of simulating PSA cycles and the computational
effort needed to determine the performance at cyclic steady state.
We present a preliminary investigation of the performance of a custom multi-objective genetic
algorithm (MOGA) for the optimisation of a fast cycle PSA operation, the separation of
air for N2 production. The simulation requires a detailed diffusion model, which involves coupled
nonlinear partial differential and algebraic equations (PDAEs). The efficiency of MOGA
in handling this complex problem has been assessed by comparison with direct search methods.
An analysis of the effect of MOGA parameters on performance is also presented.
Importance mixing: Improving sample reuse in evolutionary policy search methods
Deep neuroevolution, that is, evolutionary policy search methods based on deep
neural networks, has recently emerged as a competitor to deep reinforcement
learning algorithms due to its better parallelization capabilities. However,
these methods still suffer from far worse sample efficiency. In this paper we
investigate whether a mechanism known as "importance mixing" can significantly
improve their sample efficiency. We provide a didactic presentation of
importance mixing and we explain how it can be extended to reuse more samples.
Then, from an empirical comparison based on a simple benchmark, we show that,
although importance mixing does improve sample efficiency and is more stable,
it still falls far short of the sample efficiency of deep reinforcement learning.
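The core of importance mixing can be sketched for 1-D Gaussian search distributions; the acceptance rules below follow the standard rejection scheme (keep an old sample with probability min(1, p_new/p_old), refill with fresh samples accepted with probability max(0, 1 − p_old/p_new)), while the function names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def importance_mixing(old_samples, mu_old, mu_new, sigma, n):
    """Importance mixing between two Gaussian search distributions
    (a sketch): the resulting batch of n samples is distributed
    according to the new distribution, but reuses old samples (and
    their already-evaluated fitness) whenever possible."""
    # Keep each old sample with probability min(1, p_new(x) / p_old(x)).
    kept = [x for x in old_samples
            if rng.random() < min(1.0, gauss_pdf(x, mu_new, sigma)
                                       / gauss_pdf(x, mu_old, sigma))]
    # Refill with fresh draws from the new distribution, accepted with
    # the complementary probability max(0, 1 - p_old(x) / p_new(x)).
    new = []
    while len(kept) + len(new) < n:
        x = rng.normal(mu_new, sigma)
        if rng.random() < max(0.0, 1.0 - gauss_pdf(x, mu_old, sigma)
                                        / gauss_pdf(x, mu_new, sigma)):
            new.append(x)
    return np.array(kept[:n] + new)
```

Every reused sample saves one fitness evaluation, which is where the sample-efficiency gain over naive resampling comes from.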