3,099 research outputs found
Towards a Better Understanding of the Local Attractor in Particle Swarm Optimization: Speed and Solution Quality
Particle Swarm Optimization (PSO) is a popular nature-inspired meta-heuristic
for solving continuous optimization problems. Although this technique is widely
used, the understanding of the mechanisms that make swarms so successful is
still limited. We present the first substantial experimental investigation of
the influence of the local attractor on the quality of exploration and
exploitation. We compare in detail classical PSO with the social-only variant
where local attractors are ignored. To measure the exploration capabilities, we
determine how frequently both variants return results in the neighborhood of
the global optimum. We measure the quality of exploitation by considering only
function values from runs that reached a search point sufficiently close to the
global optimum and then comparing in how many digits such values still deviate
from the global minimum value. It turns out that the local attractor
significantly improves the exploration, but sometimes reduces the quality of
the exploitation. As a compromise, we propose and evaluate a hybrid PSO which
switches off its local attractors at a certain point in time. The effects
mentioned can also be observed by measuring the potential of the swarm.
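The two variants compared in this abstract can be illustrated with a one-dimensional velocity update sketch. The coefficient defaults below are the commonly used constriction values, not necessarily the paper's settings:

```python
import random

def pso_velocity(v, x, local_best, global_best, w=0.72984,
                 c1=1.496172, c2=1.496172, social_only=False):
    """One-dimensional PSO velocity update.

    Classical PSO combines a cognitive pull toward the particle's own
    local attractor (personal best) with a social pull toward the
    global best. The social-only variant drops the cognitive term.
    """
    cognitive = 0.0 if social_only else c1 * random.random() * (local_best - x)
    social = c2 * random.random() * (global_best - x)
    return w * v + cognitive + social
```

Switching off the local attractors, as the proposed hybrid does at a certain point in time, corresponds to setting `social_only=True` from that iteration onward.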
An Optimisation-Driven Prediction Method for Automated Diagnosis and Prognosis
This article presents a novel hybrid classification paradigm for medical diagnosis and prognosis prediction. The core mechanism of the proposed method relies on a centroid classification algorithm whose logic is exploited to formulate the classification task as a real-valued optimisation problem. A novel metaheuristic combining the algorithmic structure of Swarm Intelligence optimisers with the probabilistic search models of Estimation of Distribution Algorithms is designed to optimise such a problem, thus leading to high-accuracy predictions. This method is tested over 11 medical datasets and compared against 14 carefully selected classification algorithms. Results show that the proposed approach is competitive and superior to the state-of-the-art on several occasions.
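A minimal sketch of the centroid-classification idea underlying this formulation, assuming a plain nearest-centroid rule (the function name and data layout are illustrative, not the paper's API): once classification reduces to picking the class with the closest centroid, the centroid coordinates become real-valued decision variables that a metaheuristic can tune.

```python
def nearest_centroid_predict(x, centroids):
    """Assign sample x to the class whose centroid is closest
    (squared Euclidean distance). In an optimisation-driven setup,
    the centroid coordinates are the variables being optimised."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))
```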
Multimodal estimation of distribution algorithms
Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
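The Gaussian/Cauchy alternation can be sketched as follows. The parity-based alternation scheme and the inverse-CDF Cauchy sampler are illustrative assumptions, not the paper's exact rule; the heavy-tailed Cauchy samples favor exploration while the Gaussian samples favor exploitation around a niche seed:

```python
import math
import random

def sample_offspring(seed_pos, sigma, generation):
    """Generate one offspring around a niche seed, alternating
    between Gaussian and Cauchy distributions by generation parity."""
    if generation % 2 == 0:
        return [random.gauss(m, sigma) for m in seed_pos]
    # Cauchy sample via inverse CDF: m + sigma * tan(pi * (u - 0.5))
    return [m + sigma * math.tan(math.pi * (random.random() - 0.5))
            for m in seed_pos]
```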
A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
Several common general-purpose optimization algorithms are compared for finding A- and D-optimal designs for different types of statistical models of varying complexity, including high-dimensional models with five and more factors. The algorithms of interest include exact methods, such as the interior point method, the Nelder–Mead method, the active set method, and sequential quadratic programming, and metaheuristic algorithms, such as particle swarm optimization, simulated annealing and genetic algorithms. Several simulations are performed, which provide general recommendations on the utility and performance of each method, including hybridized versions of metaheuristic algorithms for finding optimal experimental designs. A key result is that general-purpose optimization algorithms, both exact methods and metaheuristic algorithms, perform well for finding optimal approximate experimental designs.
Cuckoo Search Inspired Hybridization of the Nelder-Mead Simplex Algorithm Applied to Optimization of Photovoltaic Cells
A new hybridization of the Cuckoo Search (CS) is developed and applied to
optimize multi-cell solar systems, namely multi-junction and split spectrum
cells. The new approach consists of combining the CS with the Nelder-Mead
method. More precisely, instead of using single solutions as nests for the CS,
we use the concept of a simplex which is used in the Nelder-Mead algorithm.
This makes it possible to use the flip operation introduced in the Nelder-Mead
algorithm instead of the Lévy flight, which is a standard part of the CS. In
this way, the hybridized algorithm becomes more robust and less sensitive to
parameter tuning which exists in CS. The goal of our work was to optimize the
performance of multi-cell solar systems. Although the underlying problem
consists of the minimization of a function of a relatively small number of
parameters, the difficulty comes from the fact that the evaluation of the
function is complex and only a small number of evaluations is possible. In our
test, we show that the new method has a better performance when compared to
similar but more complex hybridizations of the Nelder-Mead algorithm using
genetic algorithms or particle swarm optimization on standard benchmark functions.
Finally, we show that the new method outperforms some standard meta-heuristics
for the problem of interest.
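The flip operation borrowed from Nelder-Mead is the reflection step: the worst vertex of the simplex is mirrored through the centroid of the remaining vertices. A minimal sketch of that step alone (a full Nelder-Mead iteration also has expansion, contraction, and shrink moves, omitted here):

```python
def reflect_worst(simplex, f, alpha=1.0):
    """Nelder-Mead reflection ('flip') for minimisation: mirror the
    worst vertex through the centroid of the remaining vertices,
    keeping the reflected point only if it improves on the worst."""
    ordered = sorted(simplex, key=f)            # best ... worst
    worst = ordered[-1]
    rest = ordered[:-1]
    centroid = [sum(c) / len(rest) for c in zip(*rest)]
    reflected = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
    if f(reflected) < f(worst):
        ordered[-1] = reflected
    return ordered
```

Using simplices rather than single solutions as CS nests lets this deterministic move replace the random Lévy flight.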
Adaptive multimodal continuous ant colony optimization
Multimodal optimization, which seeks multiple optima simultaneously, has attracted increasing attention but remains challenging. Taking advantage of ant colony optimization algorithms in preserving high diversity, this paper intends to extend ant colony optimization algorithms to deal with multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ant colony optimization algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed, which takes the differences among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Then, to enhance the exploitation, a local search scheme based on the Gaussian distribution is self-adaptively performed around the seeds of niches. Together, the proposed algorithm affords a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and the results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially in dealing with complex problems with high numbers of local optima.
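A Gaussian local search around a niche seed, as both this abstract and the multimodal EDA above describe, can be sketched roughly as follows (the step size and trial count are illustrative parameters, and the fitness-based probability of triggering the search is omitted):

```python
import random

def local_search(seed, fitness, step=0.1, trials=5):
    """Gaussian local search around a niche seed (minimisation):
    sample nearby points from N(seed, step) and keep the best
    improvement found, or the seed itself if nothing improves."""
    best, best_f = seed, fitness(seed)
    for _ in range(trials):
        cand = [s + random.gauss(0.0, step) for s in seed]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best
```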
Experimental Comparisons of Derivative Free Optimization Algorithms
In this paper, the performances of the quasi-Newton BFGS algorithm, the
NEWUOA derivative free optimizer, the Covariance Matrix Adaptation Evolution
Strategy (CMA-ES), the Differential Evolution (DE) algorithm and Particle Swarm
Optimizers (PSO) are compared experimentally on benchmark functions reflecting
important challenges encountered in real-world optimization problems.
The dependence of performance on the conditioning of the problem and on the
rotational invariance of the algorithms is investigated in particular.
Comment: 8th International Symposium on Experimental Algorithms, Dortmund, Germany (2009)
A self-learning particle swarm optimizer for global optimization problems
Copyright @ 2011 IEEE. All Rights Reserved. This article was made available through the Brunel Open Access Publishing Fund. Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause a lack of intelligence for a particular particle, making it unable to deal with different complex situations. This paper presents a novel algorithm, called self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has superior performance in comparison with several other peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of U.K. under Grants EP/E060722/1 and EP/E060722/2.
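Per-particle adaptive strategy selection of the kind SLPSO describes can be sketched as a reward-proportional choice among strategies. This is a generic sketch, not the paper's exact update rule; the `epsilon` floor is an assumed safeguard so no strategy is ever starved of trials:

```python
import random

def choose_strategy(rewards, epsilon=0.05):
    """Pick a strategy index with probability proportional to its
    accumulated reward, plus a small floor so every strategy keeps
    a nonzero chance of being tried."""
    weights = [max(r, 0.0) + epsilon for r in rewards]
    total = sum(weights)
    u = random.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if u < acc:
            return i
    return len(weights) - 1  # numerical fallback
```

In an SLPSO-like loop, each particle would track one reward per strategy (e.g. recent fitness improvement) and call this selector before every update.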