Particle swarm optimization with state-based adaptive velocity limit strategy
Velocity limit (VL) has been widely adopted in many variants of particle
swarm optimization (PSO) to prevent particles from searching outside the
solution space. Several adaptive VL strategies have been introduced with which
the performance of PSO can be improved. However, existing adaptive VL
strategies adjust VL purely as a function of the iteration count, which can
yield unsatisfactory results because the VL may be incompatible with the
particles' current search state. To address this problem, a
novel PSO variant with state-based adaptive velocity limit strategy (PSO-SAVL)
is proposed. In the proposed PSO-SAVL, VL is adaptively adjusted based on the
evolutionary state estimation (ESE) in which a high value of VL is set for
global searching state and a low value of VL is set for local searching state.
In addition, the limit handling strategies have been modified and adopted to
improve the algorithm's ability to avoid local optima. The good performance of
PSO-SAVL is experimentally validated on a wide range of benchmark functions
with 50 dimensions, and its scalability to high-dimensional and large-scale
problems is also verified. Further experiments confirm the merits of the
individual strategies within PSO-SAVL. Finally, a sensitivity analysis of the
relevant hyper-parameters in the state-based adaptive VL strategy is
conducted, and insights into how to select these hyper-parameters are
discussed.
Comment: 33 pages, 8 figures
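The core mechanism can be sketched as follows: a standard PSO velocity update with a clamped velocity, where the velocity limit is switched between a high value (global search) and a low value (local search). Note that the paper's evolutionary state estimation (ESE) procedure is not reproduced here; the swarm-spread proxy and the thresholds 0.2, 0.5, and 0.05 below are illustrative assumptions only.

```python
import random

def sphere(x):
    """Simple benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso_savl_sketch(f, dim=5, n_particles=20, iters=200, bound=5.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.72, 1.5, 1.5          # standard inertia / acceleration weights
    for _ in range(iters):
        # Crude search-state proxy (NOT the paper's ESE): mean particle
        # distance to the global best. Large spread => global search phase.
        spread = sum(
            sum((xi - gi) ** 2 for xi, gi in zip(x, gbest)) ** 0.5 for x in X
        ) / n_particles
        exploring = spread > 0.2 * bound
        vl = 0.5 * bound if exploring else 0.05 * bound  # high VL globally, low VL locally
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-vl, min(vl, V[i][d]))          # velocity limit
                X[i][d] = max(-bound, min(bound, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest_f
```

On the 5-dimensional sphere function this sketch converges to near zero; the point of the state-based switch is that the large VL early on preserves exploration, while the small VL later prevents overshooting near the optimum.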
Avoiding convergence in cooperative coevolution with novelty search
Cooperative coevolution is an approach for evolving solutions composed of coadapted components. Previous research
has shown, however, that cooperative coevolutionary algorithms are biased towards stability: they tend to converge
prematurely to equilibrium states, instead of converging to
optimal or near-optimal solutions. In single-population evolutionary algorithms, novelty search has been shown capable of avoiding premature convergence to local optima β
a pathology similar to convergence to equilibrium states.
In this study, we demonstrate how novelty search can be
applied to cooperative coevolution by proposing two new
algorithms. The first algorithm promotes behavioural novelty at the team level (NS-T), while the second promotes
novelty at the individual agent level (NS-I). The proposed
algorithms are evaluated in two popular multiagent tasks:
predator-prey pursuit and keepaway soccer. An analysis
of the explored collaboration space shows that (i) fitness-based evolution
tends to converge quickly to poor equilibrium states, (ii) NS-I almost never
reaches any equilibrium state due to constant change in the individual
populations, while (iii) NS-T explores a variety of equilibrium states in
each evolutionary run and thus significantly outperforms both fitness-based
evolution and NS-I.
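Both NS-T and NS-I rest on the standard novelty score: the mean distance of an individual's behaviour characterisation to its k nearest neighbours in the archive and current population. A minimal sketch follows; the behaviour vectors shown (agent end positions, concatenated per team) are illustrative assumptions, not the characterisations used in the study.

```python
def novelty_score(behavior, others, k=3):
    """Novelty = mean Euclidean distance from `behavior` to its k nearest
    neighbours among `others` (archive plus current population)."""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(behavior, other)) ** 0.5
        for other in others
    )
    if not dists:
        return float("inf")          # first individual is maximally novel
    k = min(k, len(dists))
    return sum(dists[:k]) / k

# Individual-level characterisation (NS-I style): one vector per agent,
# scored against that agent's own population archive.
agent_behavior = (0.0, 0.0)

# Team-level characterisation (NS-T style): one vector for the whole team,
# e.g. the concatenation of all agents' behaviour vectors.
team_behavior = (0.0, 0.0, 4.0, 4.0)
```

Selecting for high novelty rather than high fitness is what lets the search move away from an equilibrium state: once a collaboration pattern has been visited, behaviours near it score low and are abandoned.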
A two-level evolution strategy: balancing global and local search
Evolution Strategies apply mutation and recombination operators in order to create their offspring. The two operators play different roles in the evolution process: recombination should combine information from different individuals, while mutation performs a kind of random walk to introduce new values. In an ES these operators are always applied together, but their different roles suggest that it might be better to apply them independently and at different rates. To do so, the ES has been split into two levels. The resulting Modular Evolution Strategy consists of a population of local optimizers and a distributed population manager, each with its own specific role in the optimization process. As a result of its modularity, this method can be adapted more easily to specific classes of numerical optimization problems, and adaptive mechanisms are relatively easy to introduce. A further interesting aspect of this algorithm is that it needs no global communication and can therefore be parallelized easily.
Many problems can be expressed as numerical optimization problems. Especially when the dimension of the input space and the number of local optima are high, these problems tend to be very difficult. To obtain an efficient solver, one has to gather information about the function to be optimized, and evolution-based learning can be used to do so. This paper contains results obtained with the Modular Evolution Strategy and compares them to those obtained with other evolution-based methods. The results look promising.
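The split described above can be sketched as follows: mutation-only local optimizers at the lower level, with recombination applied separately, and at a lower rate, by the manager at the upper level. This is a minimal illustration under assumed operators (a (1+1)-ES burst with a simple success-based step rule, and intermediate recombination of the two best optimizers), not the paper's exact Modular Evolution Strategy.

```python
import random

def sphere(x):
    """Simple benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def two_level_es(f, dim=4, n_local=6, local_steps=20, rounds=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_local)]
    step = [1.0] * n_local
    for _ in range(rounds):
        # Lower level: each local optimizer runs a short (1+1)-ES burst,
        # using mutation only.
        for i in range(n_local):
            for _ in range(local_steps):
                child = [x + rng.gauss(0.0, step[i]) for x in pop[i]]
                if f(child) < f(pop[i]):
                    pop[i] = child
                    step[i] *= 1.1    # crude 1/5-success-style step adaptation
                else:
                    step[i] *= 0.97
        # Upper level: the manager recombines the two best optimizers and
        # restarts the worst one from the recombined point.
        order = sorted(range(n_local), key=lambda i: f(pop[i]))
        a, b, worst = order[0], order[1], order[-1]
        pop[worst] = [(pop[a][d] + pop[b][d]) / 2.0 for d in range(dim)]
        step[worst] = (step[a] + step[b]) / 2.0
    return min(f(x) for x in pop)
```

Because the manager only touches the population between local bursts, each local optimizer can run on its own processor with no global communication during a burst, which is what makes the scheme easy to parallelize.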
- …