
    An adaptive learning particle swarm optimizer for function optimization

    This article is posted here with permission of the IEEE - Copyright @ 2009 IEEE. Traditional particle swarm optimization (PSO) suffers from premature convergence, which usually results in PSO being trapped in local optima. This paper presents an adaptive learning PSO (ALPSO) based on a variant PSO learning strategy. In ALPSO, the learning mechanism of each particle is separated into three parts: its own historical best position, the closest neighbor's best position, and the global best position. Using this individual-level adaptive technique, a particle can better balance exploration and exploitation. A set of 21 test functions, including un-rotated, rotated, and composition functions, was used to evaluate the performance of ALPSO. Comparison results against several variant PSO algorithms show that ALPSO performs outstandingly on most test functions, particularly in terms of its fast convergence. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.
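
    As a rough illustration of the three learning sources named in the abstract, the Python sketch below blends a particle's own historical best, its closest neighbor's best, and the global best into one velocity update. The blended form, the coefficients, and the function name are assumptions made here for illustration; ALPSO itself selects among learning operators adaptively rather than mixing them in a fixed way.

```python
import numpy as np

def update_velocity(v, x, pbest, nearest_best, gbest, w=0.7, c=1.5, rng=np.random):
    """Illustrative velocity update drawing on three exemplars (not ALPSO's exact rule)."""
    r1, r2, r3 = rng.random(3)
    return (w * v
            + c * r1 * (pbest - x)          # exemplar 1: the particle's own historical best
            + c * r2 * (nearest_best - x)   # exemplar 2: the closest neighbor's best
            + c * r3 * (gbest - x))         # exemplar 3: the global best position
```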

    A general framework of multi-population methods with clustering in undetectable dynamic environments

    Copyright @ 2011 IEEE. To solve dynamic optimization problems, multi-population methods are used to enhance population diversity, with the aim of maintaining multiple populations in different sub-areas of the fitness landscape. Many experimental studies have shown that locating and tracking multiple relatively good optima, rather than a single global optimum, is effective in dynamic environments. However, several challenges need to be addressed when multi-population methods are applied, e.g., how to create multiple populations, how to maintain them in different sub-areas, and how to deal with the situation where changes cannot be detected or predicted. To address these issues, this paper investigates a hierarchical clustering method to locate and track multiple optima for dynamic optimization problems. To deal with undetectable dynamic environments, the paper applies the random immigrants method without change detection, based on a mechanism that automatically reduces redundant individuals in the search space throughout the run. These methods are implemented within several population-based approaches, including particle swarm optimization, the genetic algorithm, and differential evolution. An experimental study is conducted on the moving peaks benchmark to compare the performance with several other algorithms from the literature. The experimental results show the efficiency of the clustering method for locating and tracking multiple optima in comparison with other multi-population-based algorithms on the moving peaks benchmark.
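
    A minimal sketch of the clustering step described above, assuming single-linkage hierarchical clustering from SciPy; the cluster count and the follow-up random-immigrants step are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def split_into_subpopulations(positions, max_clusters=5):
    """Group individuals that are close in the search space so that each
    sub-population can locate and track its own peak (illustrative only)."""
    Z = linkage(positions, method="single")                     # agglomerative merge tree
    labels = fcluster(Z, t=max_clusters, criterion="maxclust")  # cut into at most max_clusters groups
    return [positions[labels == k] for k in np.unique(labels)]

# example: subpops = split_into_subpopulations(np.random.rand(50, 2))
```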

    A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments

    This article is posted here with permission from the IEEE - Copyright @ 2010 IEEE. In the real world, many optimization problems are dynamic. This requires an optimization algorithm not only to find the global optimal solution under a specific environment but also to track the trajectory of the changing optima over dynamic environments. To address this requirement, this paper investigates a clustering particle swarm optimizer (PSO) for dynamic optimization problems. The algorithm employs a hierarchical clustering method to locate and track multiple peaks. A fast local search method is also introduced to search for optimal solutions in a promising subregion found by the clustering method. An experimental study is conducted on the moving peaks benchmark to compare the clustering PSO with several state-of-the-art algorithms from the literature. The experimental results show the efficiency of the clustering PSO for locating and tracking multiple optima in dynamic environments in comparison with other multiswarm-based particle swarm optimization models. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
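
    The abstract mentions a fast local search within a promising subregion; the Python sketch below shows one plausible form of such a step, random perturbation around the best-known position with a shrinking step size. The schedule and parameters are assumptions, not the paper's method.

```python
import numpy as np

def local_search(best_x, fitness, step=0.1, shrink=0.5, iters=20, rng=np.random):
    """Perturb the best-known position, keep improvements, and shrink the
    step when no improvement is found (assumes minimization)."""
    x, fx = np.asarray(best_x, dtype=float), fitness(best_x)
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.shape)
        fc = fitness(cand)
        if fc < fx:
            x, fx = cand, fc      # move to the better point
        else:
            step *= shrink        # tighten the search radius
    return x, fx
```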

    Adaptive learning particle swarm optimizer-II for global optimization

    Copyright @ 2010 IEEE. This paper presents an updated version of the adaptive learning particle swarm optimizer (ALPSO), which we call ALPSO-II. In order to improve the performance of ALPSO on multi-modal problems, ALPSO-II introduces several major new features: (i) a particle status monitoring mechanism, (ii) control of the number of particles that learn from the global best position, and (iii) updates to two of the four learning operators used in ALPSO. To test the performance of ALPSO-II, we chose a set of 27 test problems, including un-rotated, shifted, rotated, rotated shifted, and composition functions, and compared ALPSO-II with the ALPSO algorithm as well as several state-of-the-art variant PSO algorithms. The experimental results show that ALPSO-II greatly improves on the ALPSO algorithm and also outperforms the other peer algorithms on most test problems in terms of both convergence speed and solution accuracy. This work was sponsored by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/1.
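
    Feature (i), the particle status monitoring mechanism, might in its simplest form be a per-particle stagnation counter, sketched below in Python; the threshold and the reaction to a stagnant particle (for example, restricting it from learning only from the global best) are assumptions, not ALPSO-II's exact rules.

```python
def monitor_status(improved, stagnation_count, threshold=10):
    """Update a per-particle stagnation counter and flag the particle as
    stagnant once it has failed to improve its personal best `threshold`
    times in a row (illustrative only)."""
    stagnation_count = 0 if improved else stagnation_count + 1
    return stagnation_count, stagnation_count >= threshold
```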

    Fast multi-swarm optimization for dynamic optimization problems

    This article is posted here with permission of IEEE - Copyright @ 2008 IEEE. In the real world, many applications are non-stationary optimization problems. This requires optimization algorithms not only to find the global optimal solution but also to track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism that tracks multiple peaks by preventing overcrowding at a peak, and uses a fast particle swarm optimization algorithm as a local search method to find near-optimal solutions in a locally promising region of the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.
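
    A minimal Python sketch of the anti-crowding idea described above: if the best positions of two swarms come within an exclusion radius, the worse swarm is restarted so the two do not settle on the same peak. The radius and the restart callback are illustrative assumptions.

```python
import numpy as np

def resolve_crowding(swarm_bests, swarm_best_fits, reinit_swarm, radius=1.0):
    """swarm_bests: best position per swarm; swarm_best_fits: their fitness
    values (minimization); reinit_swarm(i): restart swarm i elsewhere."""
    for i in range(len(swarm_bests)):
        for j in range(i + 1, len(swarm_bests)):
            if np.linalg.norm(np.asarray(swarm_bests[i]) - np.asarray(swarm_bests[j])) < radius:
                worse = i if swarm_best_fits[i] > swarm_best_fits[j] else j
                reinit_swarm(worse)   # prevent two swarms from crowding one peak
```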

    A clustering particle swarm optimizer for dynamic optimization

    This article is posted here with permission of the IEEE - Copyright @ 2009 IEEE. In the real world, many applications are non-stationary optimization problems. This requires optimization algorithms not only to find the global optimal solution but also to track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a clustering particle swarm optimizer (CPSO) for dynamic optimization problems. The algorithm employs a hierarchical clustering method, based on a nearest neighbor search strategy, to track multiple peaks. A fast local search method is also proposed to find near-optimal solutions in a locally promising region of the search space. Six test problems generated from a generalized dynamic benchmark generator (GDBG) are used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for locating and tracking multiple optima in dynamic environments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.

    A sequence based genetic algorithm with local search for the travelling salesman problem

    The standard genetic algorithm often suffers from slow convergence when solving combinatorial optimization problems. In this study, we present a sequence-based genetic algorithm (SBGA) for the symmetric travelling salesman problem (TSP). In the proposed method, a set of sequences is extracted from the best individuals and used to guide the search of SBGA. Additionally, procedures are applied to maintain diversity by breaking the selected sequences into sub-tours if the best individual of the population does not improve. SBGA is compared with the inver-over operator, a state-of-the-art algorithm for the TSP, on a set of benchmark TSP instances. Experimental results show that the convergence speed of SBGA is very promising, much faster than that of the inver-over algorithm, and that SBGA achieves similar solution quality on all test instances.
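
    As a rough illustration of the sequence idea, the Python sketch below extracts the undirected edges shared by all of the best tours; chaining such edges yields the kind of sequences the abstract refers to. How SBGA injects these sequences into offspring and later breaks them into sub-tours is not reproduced here.

```python
def shared_edges(tours):
    """Return the set of undirected edges common to every given tour
    (each tour is a list of city indices)."""
    def edges(tour):
        return {frozenset((tour[i], tour[(i + 1) % len(tour)])) for i in range(len(tour))}
    common = edges(tours[0])
    for t in tours[1:]:
        common &= edges(t)
    return common

# example: shared_edges([[0, 1, 2, 3, 4], [0, 1, 2, 4, 3]]) -> {{0,1}, {1,2}, {3,4}}
```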

    On the QCD corrections to the charged Higgs decay of a heavy quark

    Using dimensional regularization for both infrared and ultraviolet divergences, we confirm that the QCD corrections to the decay width $\Gamma(t\to H^+ b)$ are equal to those to $\Gamma(t\to W^+ b)$ in the limit of a large $t$ quark mass. Comment: 6 pages, report Alberta Thy-25-9

    QCD corrections to the t-->H+b decay within the minimal supersymmetric standard model

    I present the contribution of gluinos and scalar quarks to the decay rate of the top quark into a charged Higgs boson and a bottom quark within the minimal supersymmetric standard model, including the mixing of the scalar partners of the left- and right-handed top quark. I show that for certain values of the supersymmetric parameters the standard QCD loop corrections to this decay mode are diminished or enhanced by several tens of percent. I also show that not only a small gluino mass of 3 GeV (the small mass window) but also much larger values of several hundred GeV have a non-negligible effect on this decay rate, against general belief. Last but not least, if the ratio of the vacuum expectation values of the Higgs bosons is taken in the limit $v_1 \ll v_2$, I obtain a drastic enhancement due to a $\tan\beta$ dependence in the couplings. Comment: UQAM-PHE-94/01, 6 pages, plain tex, 4 figures not included, available upon request via mail or fax