Adaptive intelligence applied to numerical optimisation
The article presents modification strategies, a theoretical comparison, and experimental results achieved by adaptive heuristics applied to the numerical optimisation of several unconstrained test functions. The aim of the study is to identify and compare how adaptive search heuristics behave in heterogeneous search spaces without retuning of the search parameters. The results are summarised and analysed, and can serve as a basis for comparison with other methods and for further investigation.
A New Metaheuristic Bat-Inspired Algorithm
Metaheuristic algorithms such as particle swarm optimization, the firefly algorithm and harmony search are now becoming powerful methods for solving many tough optimization problems. In this paper, we propose a new metaheuristic method, the Bat Algorithm, based on the echolocation behaviour of bats. We also intend to combine the advantages of existing algorithms in the new bat algorithm. After a detailed formulation and explanation of its implementation, we compare the proposed algorithm with other existing algorithms, including genetic algorithms and particle swarm optimization. Simulations show that the proposed algorithm appears to outperform the other algorithms, and further studies are also discussed.
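As a rough illustration of the canonical bat algorithm the abstract describes, here is a minimal Python sketch (function names, the toy objective, and all parameter values are illustrative assumptions, not taken from the paper):

```python
import math
import random

def sphere(x):
    # Toy objective: sum of squares, global minimum 0 at the origin.
    return sum(v * v for v in x)

def bat_algorithm(obj, dim=2, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9,
                  lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats   # loudness A_i
    rate = [0.5] * n_bats   # pulse emission rate r_i
    best = min(x, key=obj)[:]
    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            # Frequency tuning drives each bat's velocity toward the best bat.
            f = f_min + (f_max - f_min) * rng.random()
            v[i] = [vi + (xi - bi) * f for vi, xi, bi in zip(v[i], x[i], best)]
            cand = [max(lo, min(hi, xi + vi)) for xi, vi in zip(x[i], v[i])]
            if rng.random() > rate[i]:
                # Local random walk around the current best solution.
                avg_loud = sum(loud) / n_bats
                cand = [max(lo, min(hi, b + 0.01 * avg_loud * rng.gauss(0, 1)))
                        for b in best]
            # Accept improving moves probabilistically (gated by loudness),
            # then decrease loudness and increase the pulse rate.
            if obj(cand) <= obj(x[i]) and rng.random() < loud[i]:
                x[i] = cand
                loud[i] *= alpha
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))
            if obj(x[i]) < obj(best):
                best = x[i][:]
    return best
```

The loudness/pulse-rate coupling is what distinguishes the scheme from plain PSO: as a bat homes in on a good region it becomes quieter (accepts fewer moves) but pulses more often (does less local random walking).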
Compound particle swarm optimization in dynamic environments
Copyright © Springer-Verlag Berlin Heidelberg 2008.

Adaptation to dynamic optimization problems is currently receiving growing interest as one of the most important applications of evolutionary algorithms. In this paper, compound particle swarm optimization (CPSO) is proposed as a new variant of particle swarm optimization to enhance its performance in dynamic environments. Within CPSO, compound particles are constructed as a novel type of particle in the search space and their motions are integrated into the swarm. A special reflection scheme is introduced in order to explore the search space more comprehensively. Furthermore, information-preserving and anti-convergence strategies are also developed to improve the performance of CPSO in a new environment. An experimental study shows the efficiency of CPSO in dynamic environments.

This work was supported by the Key Program of the National Natural Science Foundation (NNSF) of China under Grants No. 70431003 and No. 70671020, the Science Fund for Creative Research Groups of the NNSF of China under Grant No. 60521003, the National Science and Technology Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. EP/E060722/1.
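CPSO builds on the canonical global-best PSO update (inertia plus cognitive and social pulls). As a baseline reference only, here is a minimal sketch of plain PSO on a toy objective; the paper's compound particles, reflection scheme, and anti-convergence strategies are not reproduced, and all parameter values are illustrative:

```python
import random

def pso(obj, dim=2, n=30, iters=150, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=7):
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]           # each particle's best-so-far position
    gbest = min(pbest, key=obj)[:]      # swarm's best-so-far position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia + pull toward personal best + pull toward global best.
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = max(lo, min(hi, x[i][d] + v[i][d]))
            if obj(x[i]) < obj(pbest[i]):
                pbest[i] = x[i][:]
                if obj(x[i]) < obj(gbest):
                    gbest = x[i][:]
    return gbest

gbest = pso(lambda p: sum(t * t for t in p))
```

In a dynamic environment this plain update is exactly what fails (the swarm converges and cannot track a moving optimum), which is the gap CPSO's diversity-preserving mechanisms target.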
Incremental evolution strategy for function optimization
This paper presents a novel evolutionary approach for function optimization: the Incremental Evolution Strategy (IES). Two strategies are proposed. The first is to evolve the input variables incrementally: the whole evolution consists of several phases, with one more variable brought into focus in each phase, so the number of phases is at most equal to the number of variables. Each phase is composed of two stages. In the single-variable evolution (SVE) stage, evolution operates on one independent variable in a series of cutting planes. In the multi-variable evolution (MVE) stage, the initial population is formed by integrating the populations obtained by the SVE and the MVE of the previous phase, and evolution then operates on the enlarged variable set. The second strategy is a hybrid of particle swarm optimization (PSO) and evolution strategy (ES): PSO is applied to adjust the cutting planes/hyperplanes (in SVEs/MVEs), while a (1+1)-ES searches for optima within those planes. Experimental results show that the performance of IES is generally better than that of three other evolutionary algorithms (an improved normal GA, PSO, and SADE_CERAF), in the sense that IES finds solutions closer to the true optima and with better objective values.
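For the (1+1)-ES component mentioned above, here is a minimal sketch with the classic 1/5-success-rule step-size adaptation; the adaptation rule and all constants are our illustrative assumptions, not details from the paper:

```python
import random

def one_plus_one_es(obj, x0, sigma=1.0, iters=300, seed=3):
    # (1+1)-ES: one parent, one Gaussian-mutated child per generation;
    # the child replaces the parent only if it is no worse.
    rng = random.Random(seed)
    x, fx = x0[:], obj(x0)
    successes = 0
    for t in range(1, iters + 1):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = obj(y)
        if fy <= fx:
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:
            # 1/5 rule: widen the step if more than 1/5 of recent
            # mutations succeeded, otherwise shrink it.
            sigma *= 1.5 if successes > 4 else 0.6
            successes = 0
    return x, fx

x, fx = one_plus_one_es(lambda p: sum(t * t for t in p), [3.0, -4.0])
```

Within IES, such a search would run inside each cutting plane/hyperplane while PSO moves the plane itself.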
Source bearing and steering-vector estimation using partially calibrated arrays
The problem of source direction-of-arrival (DOA) estimation using a sensor array is addressed, where some of the sensors are perfectly calibrated, while others are uncalibrated. An algorithm is proposed for estimating the source directions in addition to the estimation of unknown array parameters such as sensor gains and phases, as a way of performing array self-calibration. The cost function is an extension of the maximum likelihood (ML) criteria that were originally developed for DOA estimation with a perfectly calibrated array. A particle swarm optimization (PSO) algorithm is used to explore the high-dimensional problem space and find the global minimum of the cost function. The design of the PSO combines a problem-independent kernel with some newly introduced problem-specific features such as search space mapping, particle velocity control, and particle position clipping. This architecture, plus properly selected parameters, makes the PSO highly flexible and reusable, while remaining sufficiently specific and effective in the current application. Simulation results demonstrate that the proposed technique can produce more accurate estimates of the source bearings and unknown array parameters at lower computational cost than other popular methods, with the root-mean-squared error (RMSE) approaching and asymptotically attaining the Cramér-Rao bound (CRB) even in unfavorable conditions.
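Two of the problem-specific PSO features named above, particle velocity control and particle position clipping, can be sketched as small helpers; these are generic illustrations, not the paper's implementation:

```python
def clamp_velocity(v, v_max):
    # Velocity control: limit each velocity component to [-v_max, +v_max]
    # so a particle cannot overshoot the search region in a single step.
    return [max(-v_max, min(v_max, vi)) for vi in v]

def clip_position(x, lo, hi):
    # Position clipping: project each coordinate back into the feasible box
    # after the position update.
    return [max(lo, min(hi, xi)) for xi in x]
```

In an array-calibration setting the box bounds would come from the physical ranges of the parameters (e.g. bearings in degrees, sensor phases in radians).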
Firefly Algorithm: Recent Advances and Applications
Nature-inspired metaheuristic algorithms, especially those based on swarm intelligence, have attracted much attention in the last ten years. The firefly algorithm appeared about five years ago, and its literature has since expanded dramatically with diverse applications. In this paper, we briefly review the fundamentals of the firefly algorithm together with a selection of recent publications. We then discuss the optimality associated with balancing exploration and exploitation, which is essential for all metaheuristic algorithms. By comparison with the intermittent search strategy, we conclude that metaheuristics such as the firefly algorithm can outperform the optimal intermittent search strategy. We also analyse these algorithms and their implications for higher-dimensional optimization problems.
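The core firefly update, attraction toward brighter fireflies with attractiveness decaying in distance, can be sketched as follows; the objective, the decaying randomisation, and all parameter values are illustrative assumptions:

```python
import math
import random

def firefly(obj, dim=2, n=15, iters=100, beta0=1.0, gamma=0.01,
            alpha=0.2, lo=-5.0, hi=5.0, seed=5):
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        # For minimisation, a lower objective value means a brighter firefly.
        intensity = [obj(p) for p in x]
        for i in range(n):
            for j in range(n):
                if intensity[j] < intensity[i]:
                    # Attractiveness beta falls off with squared distance:
                    # beta = beta0 * exp(-gamma * r^2).
                    r2 = sum((a - b) ** 2 for a, b in zip(x[i], x[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    x[i] = [max(lo, min(hi,
                                xi + beta * (xj - xi)
                                + alpha * (rng.random() - 0.5)))
                            for xi, xj in zip(x[i], x[j])]
                    intensity[i] = obj(x[i])
        alpha *= 0.97   # gradually reduce randomisation: explore, then exploit
    return min(x, key=obj)

best = firefly(lambda p: sum(t * t for t in p))
```

The `gamma`/`alpha` pair is exactly where the exploration–exploitation balance discussed in the paper lives: large `gamma` makes attraction short-ranged (more local subgroups), while `alpha` controls the random-walk component.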
Forecasting foreign exchange rates with adaptive neural networks using radial basis functions and particle swarm optimization
The motivation for this paper is to introduce a hybrid Neural Network architecture of Particle Swarm Optimization and Adaptive Radial Basis Function (ARBF-PSO), a time-varying leverage trading strategy based on Glosten, Jagannathan and Runkle (GJR) volatility forecasts, and a Neural Network fitness function for financial forecasting purposes. This is done by benchmarking the ARBF-PSO results against those of three different Neural Network architectures, a Nearest Neighbors algorithm (k-NN), an autoregressive moving average model (ARMA), and a moving average convergence/divergence model (MACD), plus a naïve strategy. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the EUR/USD, EUR/GBP and EUR/JPY ECB exchange rate fixing time series over the period January 1999 to March 2011, using the last two years for out-of-sample testing.
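For readers unfamiliar with the RBF building block of ARBF-PSO, here is a minimal sketch of a Gaussian RBF network forward pass; the PSO training loop, the adaptive part, the GJR volatility model and the trading logic are not reproduced, and all names and values are illustrative:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    # Gaussian RBF network output:
    #   y = bias + sum_j  w_j * exp(-||x - c_j||^2 / (2 * s_j^2))
    out = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2.0 * s * s))
    return out
```

In a PSO-trained RBF network, a particle's position typically encodes the centers, widths, and output weights, and the swarm minimises a forecasting (or, as here, trading-oriented) fitness function.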