An adaptive learning particle swarm optimizer for function optimization
This article is posted here with permission of the IEEE - Copyright @ 2009 IEEE. Traditional particle swarm optimization (PSO) suffers from premature convergence, which usually results in PSO being trapped in local optima. This paper presents an adaptive learning PSO (ALPSO) based on a variant PSO learning strategy. In ALPSO, the learning mechanism of each particle is separated into three parts: its own historical best position, its closest neighbor, and the global best position. By using this individual-level adaptive technique, a particle can better balance its exploration and exploitation behavior. A set of 21 test functions, including un-rotated, rotated, and composition functions, was used to test the performance of ALPSO. The comparison results against several variant PSO algorithms show that ALPSO performs outstandingly on most test functions and converges particularly fast. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.
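The abstract only names the three learning sources. As a rough illustration (not the authors' ALPSO operators), the sketch below lets each particle draw its exemplar from its own best, its nearest neighbour's best, or the global best position; the uniform choice, inertia weight, and acceleration constant are placeholder assumptions.

```python
import numpy as np

def alpso_style_update(pos, vel, pbest, gbest, w=0.7, c=1.5, rng=None):
    """Illustrative velocity/position update where each particle learns from
    one of three sources: its own best, its nearest neighbour's best, or the
    global best. Probabilities and constants are placeholders, not ALPSO's."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pos.shape
    new_vel = np.empty_like(vel)
    for i in range(n):
        # nearest neighbour (by current position) of particle i
        dists = np.linalg.norm(pos - pos[i], axis=1)
        dists[i] = np.inf
        j = int(np.argmin(dists))
        # pick one learning exemplar per particle (uniform choice here)
        exemplar = [pbest[i], pbest[j], gbest][rng.integers(3)]
        new_vel[i] = w * vel[i] + c * rng.random(d) * (exemplar - pos[i])
    return pos + new_vel, new_vel
```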
A general framework of multi-population methods with clustering in undetectable dynamic environments
Copyright @ 2011 IEEE. To solve dynamic optimization problems, multi-population methods are used to enhance the population diversity of an algorithm, with the aim of maintaining multiple populations in different sub-areas of the fitness landscape. Many experimental studies have shown that locating and tracking multiple relatively good optima, rather than a single global optimum, is an effective approach in dynamic environments. However, several challenges need to be addressed when multi-population methods are applied, e.g., how to create multiple populations, how to maintain them in different sub-areas, and how to deal with situations where changes cannot be detected or predicted. To address these issues, this paper investigates a hierarchical clustering method to locate and track multiple optima for dynamic optimization problems. To deal with undetectable dynamic environments, this paper applies the random immigrants method without change detection, based on a mechanism that automatically reduces redundant individuals in the search space throughout the run. These methods are implemented within several population-based approaches, including particle swarm optimization, genetic algorithms, and differential evolution. An experimental study is conducted on the moving peaks benchmark to compare the performance with several other algorithms from the literature. The experimental results show the efficiency of the clustering method for locating and tracking multiple optima in comparison with other multi-population algorithms on the moving peaks benchmark.
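The random-immigrants-without-change-detection idea can be pictured as follows: individuals that crowd too closely together are treated as redundant and replaced by random immigrants every generation, so diversity is refreshed whether or not a change actually occurred. The distance threshold and replacement rule below are assumptions for illustration, not the paper's exact mechanism.

```python
import numpy as np

def refresh_with_immigrants(pop, lower, upper, min_dist=1e-3, rng=None):
    """Individuals that crowd within min_dist of an already-kept individual
    are treated as redundant and replaced by random immigrants, so diversity
    is maintained without any change detection. Threshold and replacement
    rule are illustrative guesses only."""
    rng = np.random.default_rng() if rng is None else rng
    kept = []
    for i, x in enumerate(pop):
        if any(np.linalg.norm(x - pop[k]) < min_dist for k in kept):
            pop[i] = rng.uniform(lower, upper, size=x.shape)  # random immigrant
        else:
            kept.append(i)
    return pop
```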
A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments
This article is posted here with permission from the IEEE - Copyright @ 2010 IEEE. In the real world, many optimization problems are dynamic. This requires an optimization algorithm to not only find the global optimal solution under a specific environment but also track the trajectory of the changing optima over dynamic environments. To address this requirement, this paper investigates a clustering particle swarm optimizer (PSO) for dynamic optimization problems. The algorithm employs a hierarchical clustering method to locate and track multiple peaks. A fast local search method is also introduced to search for optimal solutions in a promising subregion found by the clustering method. An experimental study is conducted on the moving peaks benchmark to test the performance of the clustering PSO in comparison with several state-of-the-art algorithms from the literature. The experimental results show the efficiency of the clustering PSO for locating and tracking multiple optima in dynamic environments in comparison with other particle swarm optimization models based on the multi-swarm method. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
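For a sense of the clustering step, the sketch below splits a swarm into sub-swarms with off-the-shelf hierarchical clustering from SciPy; the linkage type and distance cut-off are illustrative choices rather than the paper's procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def split_into_subswarms(pos, dist_threshold):
    """Group particles into sub-swarms by hierarchical clustering of their
    positions, then cut the tree at a distance threshold. The linkage method
    and cut-off are illustrative choices, not the paper's exact settings."""
    tree = linkage(pos, method="single")
    labels = fcluster(tree, t=dist_threshold, criterion="distance")
    # return one index array per sub-swarm
    return [np.where(labels == c)[0] for c in np.unique(labels)]
```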
Adaptive learning particle swarm optimizer-II for global optimization
Copyright @ 2010 IEEE. This paper presents an updated version of the adaptive learning particle swarm optimizer (ALPSO), which we call ALPSO-II. In order to improve the performance of ALPSO on multi-modal problems, we introduce several major new features in ALPSO-II: (i) adding a particle status monitoring mechanism, (ii) controlling the number of particles that learn from the global best position, and (iii) updating two of the four learning operators used in ALPSO. To test the performance of ALPSO-II, we choose a set of 27 test problems, including un-rotated, shifted, rotated, rotated shifted, and composition functions, and compare against the ALPSO algorithm as well as several state-of-the-art variant PSO algorithms. The experimental results show that ALPSO-II greatly improves on the ALPSO algorithm and also outperforms the other peer algorithms on most test problems in terms of both convergence speed and solution accuracy. This work was sponsored by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/1.
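The combination of status monitoring and multiple learning operators is in the spirit of adaptive operator selection. As a generic illustration only (not necessarily ALPSO-II's rule), the sketch below updates operator selection probabilities toward recent success rates; the floor probability and learning rate are placeholders.

```python
import numpy as np

def update_operator_probs(probs, successes, trials, floor=0.05, lr=0.3):
    """Probability-matching style update for a set of learning operators:
    operators whose recent applications improved a particle's best gain
    selection probability, subject to a minimum floor. All constants are
    placeholders; this is a generic adaptive-selection sketch."""
    rates = successes / np.maximum(trials, 1)
    if rates.sum() > 0:
        share = rates / rates.sum()
    else:
        share = np.full_like(probs, 1.0 / len(probs))  # no data yet: uniform
    target = floor + (1 - len(probs) * floor) * share
    return (1 - lr) * probs + lr * target
```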
Fast multi-swarm optimization for dynamic optimization problems
This article is posted here with permission of IEEE - Copyright @ 2008 IEEE. In the real world, many applications are non-stationary optimization problems. This requires optimization algorithms to not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism to track multiple peaks by preventing overcrowding at a peak, and uses a fast particle swarm optimization algorithm as a local search method to find near-optimal solutions in a locally promising region of the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.
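The anti-overcrowding mechanism can be illustrated with an exclusion-style check: when two sub-swarms' best positions fall within a chosen radius of each other, the worse one is reinitialised elsewhere. The radius, the maximisation convention, and the restart rule below are assumptions, not the paper's exact mechanism.

```python
import numpy as np

def prevent_overcrowding(swarm_bests, swarm_fits, radius, lower, upper, rng=None):
    """Exclusion-style check between sub-swarms: if two best positions lie
    within `radius`, the sub-swarm with the lower fitness (maximisation is
    assumed) is restarted at a random location. Purely illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    restart = set()
    for i in range(len(swarm_bests)):
        for j in range(i + 1, len(swarm_bests)):
            if np.linalg.norm(swarm_bests[i] - swarm_bests[j]) < radius:
                restart.add(j if swarm_fits[j] < swarm_fits[i] else i)
    for k in restart:
        swarm_bests[k] = rng.uniform(lower, upper, size=swarm_bests[k].shape)
    return swarm_bests
```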
A clustering particle swarm optimizer for dynamic optimization
This article is posted here with permission of the IEEE - Copyright @ 2009 IEEE. In the real world, many applications are non-stationary optimization problems. This requires optimization algorithms to not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a clustering particle swarm optimizer (CPSO) for dynamic optimization problems. The algorithm employs a hierarchical clustering method to track multiple peaks based on a nearest-neighbor search strategy. A fast local search method is also proposed to find near-optimal solutions in a locally promising region of the search space. Six test problems generated from a generalized dynamic benchmark generator (GDBG) are used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for locating and tracking multiple optima in dynamic environments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.
A sequence based genetic algorithm with local search for the travelling salesman problem
The standard genetic algorithm often suffers from slow convergence when solving combinatorial optimization problems. In this study, we present a sequence-based genetic algorithm (SBGA) for the symmetric travelling salesman problem (TSP). In the proposed method, a set of sequences is extracted from the best individuals and used to guide the search of SBGA. Additionally, procedures are applied to maintain diversity by breaking the selected sequences into sub-tours if the best individual of the population does not improve. SBGA is compared with the inver-over operator, a state-of-the-art algorithm for the TSP, on a set of benchmark TSP instances. Experimental results show that the convergence speed of SBGA is very promising, much faster than that of the inver-over algorithm, and that SBGA achieves similar solution quality on all test instances.
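As a sketch of the sequence idea, the hypothetical helper below extracts runs of edges shared by all of the current best tours, which could then be reused to guide offspring; SBGA's actual extraction and guidance rules may differ.

```python
def common_sequences(best_tours, min_len=2):
    """Extract city sub-sequences (runs of undirected edges) that appear in
    every tour of `best_tours`, following the order of the first tour. A
    sketch of the idea of reusing shared sequences, not SBGA's exact rule."""
    def edge_set(tour):
        return {frozenset((tour[i], tour[(i + 1) % len(tour)]))
                for i in range(len(tour))}

    shared = set.intersection(*(edge_set(t) for t in best_tours))
    ref = best_tours[0]
    sequences, run = [], [ref[0]]
    for i in range(1, len(ref) + 1):
        a, b = ref[i - 1], ref[i % len(ref)]
        if frozenset((a, b)) in shared:
            run.append(b)          # extend the current shared run
        else:
            if len(run) >= min_len:
                sequences.append(run)
            run = [b]              # start a new candidate run
    if len(run) >= min_len:
        sequences.append(run)
    return sequences
```

For example, `common_sequences([[0, 1, 2, 3], [0, 1, 3, 2]])` returns `[[0, 1], [2, 3]]`, the two edge runs present in both tours.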
Comment on "Fock-Darwin States of Dirac Electrons in Graphene-Based Artificial Atoms"
Chen, Apalkov, and Chakraborty (Phys. Rev. Lett. 98, 186803 (2007)) have computed Fock-Darwin levels of a graphene dot by including only basis states with energies larger than or equal to zero. We show that their results violate the Hellmann-Feynman theorem. A correct treatment must include both positive and negative energy basis states. Additional basis states lead to new energy levels in the optical spectrum and anticrossings between optical transition lines. Comment: 1 page, 1 figure, accepted for publication in PR
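For reference, the Hellmann-Feynman theorem invoked by this comment states that, for a normalised eigenstate of a parameter-dependent Hamiltonian, the derivative of the energy equals the expectation value of the derivative of the Hamiltonian (given here in its generic form, independent of the graphene-dot calculation):

```latex
\frac{\partial E(\lambda)}{\partial \lambda}
  = \left\langle \psi(\lambda) \left| \frac{\partial \hat{H}(\lambda)}{\partial \lambda} \right| \psi(\lambda) \right\rangle
```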
Ground State Energy for Fermions in a 1D Harmonic Trap with Delta Function Interaction
Conjectures are made for the ground state energy of a large spin-1/2 fermion system trapped in a 1D harmonic trap with a delta function interaction. States with different spin J are studied separately. The Thomas-Fermi method is used as an effective test of the conjecture. Comment: 4 pages, 3 figures
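For context, the model described is the standard first-quantized Hamiltonian for N harmonically trapped particles with a contact (delta function) interaction; the notation below is the generic textbook form, and the coupling constant g is not taken from the paper:

```latex
H = \sum_{i=1}^{N} \left( -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}}{\partial x_{i}^{2}}
    + \frac{1}{2} m \omega^{2} x_{i}^{2} \right)
  + g \sum_{i<j} \delta(x_i - x_j)
```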