
    Adaptive multimodal continuous ant colony optimization

    Seeking multiple optima simultaneously, the goal of multimodal optimization, has attracted increasing attention but remains challenging. Taking advantage of the ability of ant colony optimization algorithms to preserve high diversity, this paper extends ant colony optimization to multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ant colony optimization algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed that takes the differences among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Third, to enhance exploitation, a local search scheme based on the Gaussian distribution is self-adaptively performed around the seeds of niches (see the sketch below). Together, these components give the proposed algorithm a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions investigate the influence of each algorithmic component, and results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially on complex problems with many local optima.
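    A minimal sketch of the Gaussian local search idea, assuming minimization; the function name, `sigma`, and `trials` are illustrative choices of ours, not the paper's API:

```python
import numpy as np

def gaussian_local_search(seed, sigma, objective, trials=5, rng=None):
    """Sample candidates from a Gaussian centred on a niche seed and keep
    the best improvement (simplified stand-in for the paper's
    self-adaptive scheme)."""
    rng = rng or np.random.default_rng()
    best_x, best_f = seed, objective(seed)
    for _ in range(trials):
        candidate = rng.normal(loc=best_x, scale=sigma)
        f = objective(candidate)
        if f < best_f:  # assuming minimization
            best_x, best_f = candidate, f
    return best_x, best_f

# Toy usage: refine around a seed of a quadratic bowl centred at 1.
f = lambda x: float(np.sum((x - 1.0) ** 2))
print(gaussian_local_search(np.zeros(2), sigma=0.3, objective=f))
```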

    A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments

    This article is posted here with permission from the IEEE. Copyright @ 2010 IEEE. In the real world, many optimization problems are dynamic. This requires an optimization algorithm not only to find the global optimal solution under a specific environment but also to track the trajectory of the changing optima over dynamic environments. To address this requirement, this paper investigates a clustering particle swarm optimizer (PSO) for dynamic optimization problems. This algorithm employs a hierarchical clustering method to locate and track multiple peaks (a minimal illustration follows below). A fast local search method is also introduced to find optimal solutions in a promising subregion identified by the clustering method. An experimental study is conducted on the moving peaks benchmark to test the performance of the clustering PSO against several state-of-the-art algorithms from the literature. The results show the efficiency of the clustering PSO for locating and tracking multiple optima in dynamic environments in comparison with other particle swarm optimization models based on the multiswarm method. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
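    A minimal illustration of the clustering step, using SciPy's single-linkage hierarchical clustering as a stand-in for the paper's method; `max_diameter` is our own illustrative threshold:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def split_swarm(positions, max_diameter):
    """Group particles into subswarms by hierarchical clustering; each
    cluster can then locate and track one peak (illustrative only)."""
    Z = linkage(positions, method="single")
    labels = fcluster(Z, t=max_diameter, criterion="distance")
    return {c: np.where(labels == c)[0] for c in np.unique(labels)}

# Example: 30 random 2-D particles, clusters no wider than 0.5.
positions = np.random.default_rng(0).random((30, 2))
for cluster_id, members in split_swarm(positions, 0.5).items():
    print(cluster_id, members)
```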

    A dynamic neighborhood learning-based gravitational search algorithm

    Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its theoretical simplicity and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive, named Kbest, which stores the superior agents after fitness sorting in each iteration (see the sketch below). Since the global property of Kbest remains unchanged throughout the evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, this paper proposes a dynamic neighborhood learning (DNL) strategy to replace the Kbest model, yielding a DNL-based GSA (DNLGSA). The method incorporates local and global neighborhood topologies to enhance exploration and to obtain an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and various difficulties. The results reveal that DNLGSA exhibits competitive performance when compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of gravitational force calculations and thus alleviates the high computational cost of GSA.
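    To make the Kbest mechanism concrete, here is a hedged sketch of one GSA acceleration update in which only the k fittest agents exert gravitational pull; all names and the minimization convention are our assumptions:

```python
import numpy as np

def gsa_step(X, fitness, G, k, eps=1e-12, rng=None):
    """One illustrative GSA acceleration update. Only the Kbest agents
    (the k fittest) attract the others; this is the archive the paper's
    DNL strategy replaces with dynamic neighborhoods."""
    rng = rng or np.random.default_rng()
    worst, best = fitness.max(), fitness.min()  # minimization convention
    m = (worst - fitness + eps) / (worst - best + eps)
    M = m / m.sum()  # normalized masses
    kbest = np.argsort(fitness)[:k]  # indices of the k fittest agents
    acc = np.zeros_like(X)
    for i in range(len(X)):
        for j in kbest:
            if j == i:
                continue
            diff = X[j] - X[i]
            dist = np.linalg.norm(diff) + eps
            acc[i] += rng.random() * G * M[j] * diff / dist
    return acc
```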

    Niching particle swarm optimization based on Euclidean distance and hierarchical clustering for multimodal optimization

    Multimodal optimization remains one of the most challenging tasks in the evolutionary computation field, requiring multiple global and local optima to be located effectively and efficiently. In this paper, a niching Particle Swarm Optimization (PSO) based on Euclidean Distance and Hierarchical Clustering (EDHC) for multimodal optimization is proposed. The technique first uses a Euclidean distance based PSO algorithm to perform a preliminary search, during which the particles rapidly cluster around peaks (a hedged sketch of this phase follows below). Secondly, hierarchical clustering is applied to identify the particles gathered around each peak and concentrate them so that each group finely searches its peak as a whole. Finally, a small-world network topology is adopted in each niche to improve the exploitation ability of the algorithm. At the end of the paper, the proposed EDHC-PSO algorithm is applied to the Traveling Salesman Problem (TSP) after discretization. The experiments demonstrate that the proposed method outperforms existing niching techniques on benchmark problems and is effective for TSP.
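    A hedged sketch of the niche-assignment idea in the first phase, attaching particles to the nearest niche seed by Euclidean distance; `radius` and the unassigned label -1 are our own illustrative conventions:

```python
import numpy as np

def assign_to_niches(positions, seeds, radius):
    """Attach each particle to the nearest niche seed within `radius`;
    particles outside every radius (label -1) keep exploring freely."""
    labels = np.full(len(positions), -1)
    for i, x in enumerate(positions):
        d = np.linalg.norm(seeds - x, axis=1)  # Euclidean distances to seeds
        nearest = int(np.argmin(d))
        if d[nearest] <= radius:
            labels[i] = nearest
    return labels
```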

    Orthogonal learning particle swarm optimization

    Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use but inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover, via orthogonal experimental design, more of the useful information that lies in the above two experiences. We name this PSO variant orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar (illustrated below). The OL strategy can be applied to PSO with any topological structure; in this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
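    To illustrate the exemplar idea: per dimension, the exemplar copies either the particle's pbest or the gbest. OLPSO uses an orthogonal array to test only O(D) of the 2^D combinations; the brute-force sketch below conveys the goal, not the paper's construction, so keep D small:

```python
import itertools
import numpy as np

def build_exemplar(pbest, gbest, objective):
    """Choose per dimension whether the exemplar copies pbest or gbest,
    keeping the best-scoring combination (exhaustive toy version)."""
    D = len(pbest)
    best_trial, best_f = None, np.inf
    for mask in itertools.product([0, 1], repeat=D):
        trial = np.where(np.array(mask) == 0, pbest, gbest)
        f = objective(trial)
        if f < best_f:
            best_trial, best_f = trial, f
    return best_trial
```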

    Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX)

    The modeling and analysis generic interface for external numerical codes (MAGIX) is a model optimizer developed under the framework of the coherent set of astrophysical tools for spectroscopy (CATS) project. The MAGIX package provides an easy interface between existing codes and an iterating engine that attempts to minimize deviations of the model results from available observational data, constraining the values of the model parameters and providing corresponding error estimates. Many models (and, in principle, not only astrophysical models) can be plugged into MAGIX to explore their parameter space and find the set of parameter values that best fits observational/experimental data. MAGIX complies with the data structures and reduction tools of ALMA (Atacama Large Millimeter Array), but can be used with other astronomical and non-astronomical data. Comment: 12 pages, 15 figures, 2 tables; the paper is also available at http://www.aanda.org/articles/aa/pdf/forth/aa20063-12.pd
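    The core loop such a tool automates can be pictured as minimizing a deviation measure over a plugged-in model's parameters. A minimal sketch using SciPy in place of MAGIX's own iterating engines; the Gaussian model and all names are hypothetical, not MAGIX's API:

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, model, x_obs, y_obs, y_err):
    """Deviation between a plugged-in model and observations."""
    return np.sum(((model(x_obs, *params) - y_obs) / y_err) ** 2)

# Hypothetical external model: a Gaussian line profile.
model = lambda x, amp, mu, sig: amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)
x = np.linspace(-5, 5, 100)
y = model(x, 2.0, 0.3, 1.2) + np.random.default_rng(1).normal(0, 0.1, x.size)
res = minimize(chi2, x0=[1.0, 0.0, 1.0], args=(model, x, y, 0.1))
print(res.x)  # best-fit parameters; res.hess_inv gives rough error estimates
```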

    Towards Swarm Diversity: Random Sampling in Variable Neighborhoods Procedure Using a Lévy Distribution

    Particle Swarm Optimization (PSO) is a non-direct search method for numerical optimization. The key advantages of this metaheuristic are its simplicity, few parameters, and high convergence rate. In the canonical PSO with a fully connected topology, a particle adjusts its position using two attractors: the best record stored by the particle itself, and the best point discovered by the entire swarm. This leads to a high convergence rate, but also progressively deteriorates swarm diversity. As a result, the particle swarm is frequently attracted by sub-optimal points. Once the particles have been attracted to a local optimum, they continue the search within a small region of the solution space, reducing the algorithm's exploration. To deal with this issue, this paper presents a variant of the Random Sampling in Variable Neighborhoods (RSVN) procedure using a Lévy distribution, which notably improves the PSO search ability on multimodal problems (a sketch of the Lévy step generator follows below). Keywords: swarm diversity, local optima, premature convergence, RSVN procedure, Lévy distribution.
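    A sketch of the heavy-tailed step generator such a variant relies on, using Mantegna's standard algorithm for Lévy-stable step lengths; the parameter names are ours:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta=1.5, size=1, rng=None):
    """Mantegna's algorithm: heavy-tailed step lengths whose occasional
    large jumps can re-diversify a prematurely converged swarm."""
    rng = rng or np.random.default_rng()
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0, sigma_u, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

print(levy_step(size=5))  # mostly small steps, occasionally a large jump
```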

    SmartSwarm - A Multi-Agent Reinforcement Learning based Particle Swarm Optimization Algorithm

    Particle Swarm Optimization is a renowned continuous optimization method that uses swarm intelligence to efficiently find solutions to complex non-linear optimization problems. Since its proposal, many developments have improved its capabilities by enhancing the stochastic and tunable components of the algorithm. This thesis introduces SmartSwarm, a variant of Particle Swarm Optimization that uses Multi-Agent Reinforcement Learning to control the velocity of a swarm of particles (a toy sketch of one such controller follows below). This framework can incorporate domain-specific information into the optimization process, as well as adapt a self-taught velocity function. We show how this framework can discover a velocity function that maximizes the performance of the algorithm. Masteroppgave i informatikk (Master's thesis in informatics).
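    As a toy illustration of the idea (not the thesis's architecture), a single tabular Q-learning agent could pick the inertia weight each iteration; the states, actions, and rewards below are entirely our own assumptions:

```python
import numpy as np

class CoefficientAgent:
    """Tiny Q-learner that selects the inertia weight each iteration,
    a drastic simplification of SmartSwarm's multi-agent RL controller."""
    ACTIONS = [0.4, 0.7, 0.9]  # candidate inertia weights

    def __init__(self, n_states=2, alpha=0.1, gamma=0.9, eps=0.1):
        self.Q = np.zeros((n_states, len(self.ACTIONS)))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = np.random.default_rng()

    def act(self, state):
        if self.rng.random() < self.eps:  # explore
            return int(self.rng.integers(len(self.ACTIONS)))
        return int(np.argmax(self.Q[state]))  # exploit

    def learn(self, s, a, reward, s_next):
        target = reward + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])

# state 0/1 = "swarm improved last step" yes/no; reward = fitness gain.
```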