1,352 research outputs found

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that achieves better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, improving search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence; the strategy acts on the globally best particle so that it can jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it adds no additional design or implementation complexity.
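    The following is a minimal sketch, in Python, of the two ingredients outlined above: a population-distribution-based evolutionary factor that drives parameter adaptation, and a Gaussian elitist-learning perturbation of the globally best particle. The sigmoid mapping of the factor to the inertia weight follows the published APSO formulation; the state thresholds are omitted here, and the perturbation scale and single-dimension update are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def evolutionary_factor(positions, gbest_index):
    """Evolutionary factor f in [0, 1] estimated from the population distribution."""
    # mean Euclidean distance of each particle to all the others
    d = np.array([np.linalg.norm(positions - p, axis=1).sum() / (len(positions) - 1)
                  for p in positions])
    d_g = d[gbest_index]
    return (d_g - d.min()) / (d.max() - d.min() + 1e-12)

def adaptive_inertia(f):
    """Sigmoid mapping of the evolutionary factor to an inertia weight in (0.4, 0.9)."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))

def elitist_learning(gbest, lower, upper, sigma=0.1):
    """Gaussian perturbation of one dimension of gbest to help it escape a local optimum."""
    candidate = gbest.copy()
    d = np.random.randint(len(gbest))
    candidate[d] += (upper - lower) * sigma * np.random.randn()
    return np.clip(candidate, lower, upper)
```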

    Chaotic Quantum Double Delta Swarm Algorithm using Chebyshev Maps: Theoretical Foundations, Performance Analyses and Convergence Issues

    The Quantum Double Delta Swarm (QDDS) algorithm is a metaheuristic inspired by the mechanism of convergence to the center of the potential generated within a single well of a spatially co-located double-delta-well setup. It mimics the wave nature of candidate positions in solution spaces and draws upon quantum mechanical interpretations, much like other quantum-inspired computational intelligence paradigms. In this work, we introduce a Chebyshev-map-driven chaotic perturbation in the optimization phase of the algorithm to diversify the weights placed on the contemporary and historical, socially-optimal agents' solutions. We follow this up with a characterization of solution quality on a suite of 23 single-objective functions and carry out a comparative analysis with eight other related nature-inspired approaches. By comparing solution quality and successful runs over dynamic solution ranges, insights about the nature of convergence are obtained. A two-tailed t-test establishes the statistical significance of the solution data, whereas Cohen's d and Hedges' g provide measures of effect size. We trace the trajectory of the fittest pseudo-agent over all function evaluations to comment on the dynamics of the system, and prove that the proposed algorithm is theoretically globally convergent under the assumptions adopted for proofs of other closely related random search algorithms. Comment: 27 pages, 4 figures, 19 tables.
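    A minimal sketch of the Chebyshev map that drives the chaotic perturbation described above: the iteration x_{k+1} = cos(n * arccos(x_k)) is chaotic on [-1, 1] for order n >= 2. The order-4 map and the rescaling of the samples to (0, 1) so they can serve as mixing weights are illustrative assumptions, not necessarily the paper's exact settings.

```python
import math

def chebyshev_sequence(x0, length, order=4):
    """Iterate the Chebyshev map x_{k+1} = cos(order * arccos(x_k)) on [-1, 1]."""
    x, seq = x0, []
    for _ in range(length):
        x = math.cos(order * math.acos(x))
        seq.append(x)
    return seq

def chaotic_weights(x0, length):
    """Rescale the chaotic samples from [-1, 1] to (0, 1) for use as mixing weights."""
    return [(x + 1.0) / 2.0 for x in chebyshev_sequence(x0, length)]

# example: a deterministic but non-repeating weight stream, seeded away from fixed points
print(chaotic_weights(x0=0.7, length=5))
```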

    A Hybrid PSO Based on Dynamic Clustering for Global Optimization

    Particle swarm optimization is a population-based global search method that is known to suffer from premature convergence before discovering the true global minimizer of a global optimization problem. To balance intensive local exploitation against global exploration, a novel algorithm called dynamic clustering hybrid particle swarm optimization (DC-HPSO) is presented in this paper. In this method, particles are dynamically re-clustered into several groups (sub-swarms), corresponding to promising sub-regions, according to the similarity of their generalized particles. In each group, a dominant particle is chosen to take responsibility for intensive local exploitation, while the rest are responsible for exploration by maintaining the diversity of the swarm. Simultaneous perturbation stochastic approximation (SPSA) is introduced to carry out the exploitation step, and the standard PSO is modified for exploration. The experimental results show the efficiency of the proposed algorithm in comparison with several peer algorithms.
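    A minimal sketch, under assumptions, of the division of labour described above: particles are grouped into sub-swarms with k-means (a stand-in for the paper's dynamic clustering), the fittest particle of each group is refined with one SPSA step, and the remaining particles would keep exploring with the modified PSO update. The SPSA gains a and c and the cluster count are illustrative values.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spsa_step(x, objective, a=0.05, c=0.05):
    """One SPSA iteration: a two-sided random perturbation yields a gradient estimate."""
    delta = np.random.choice([-1.0, 1.0], size=x.shape)       # Rademacher perturbation
    grad = (objective(x + c * delta) - objective(x - c * delta)) / (2.0 * c * delta)
    return x - a * grad                                        # gradient-descent-like move

def exploit_dominants(positions, fitness, objective, n_clusters=3):
    """Refine the best (dominant) particle of every cluster with one SPSA step."""
    _, labels = kmeans2(positions, n_clusters, minit='points')
    for k in range(n_clusters):
        members = np.where(labels == k)[0]
        if members.size == 0:
            continue
        dominant = members[np.argmin(fitness[members])]        # minimization problem
        positions[dominant] = spsa_step(positions[dominant], objective)
    return positions
```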

    Orthogonal learning particle swarm optimization

    Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its own historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use but inefficient when searching complex problem spaces. Hence, designing learning strategies that can use previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO that discovers more of the useful information lying in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar, and it can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
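    A rough sketch of how a two-level orthogonal experimental design can combine a particle's personal best and its neighborhood best into an exemplar, as the OL strategy does. The Hadamard-based array construction and the mean-cost factor analysis used here are standard orthogonal-design practice and illustrative assumptions; they are not claimed to match the paper's exact procedure.

```python
import numpy as np
from scipy.linalg import hadamard

def orthogonal_array(n_factors):
    """Two-level orthogonal array (levels 0/1) with at least n_factors columns."""
    runs = 2 ** int(np.ceil(np.log2(n_factors + 1)))
    H = hadamard(runs)                                         # entries are +1 / -1
    return ((1 - H[:, 1:n_factors + 1]) // 2).astype(int)      # drop the constant column

def orthogonal_exemplar(pbest, nbest, objective):
    """Pick, dimension by dimension, whether the exemplar copies pbest or nbest."""
    dim = len(pbest)
    oa = orthogonal_array(dim)
    sources = np.stack([pbest, nbest])                         # level 0 -> pbest, 1 -> nbest
    trials = sources[oa, np.arange(dim)]                       # one trial vector per OA row
    costs = np.array([objective(t) for t in trials])
    # factor analysis: keep, for each dimension, the level with the lower mean cost
    levels = np.array([int(costs[oa[:, d] == 1].mean() < costs[oa[:, d] == 0].mean())
                       for d in range(dim)])
    return sources[levels, np.arange(dim)]
```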

    Optimal control problems solved via swarm intelligence

    This thesis deals with solving optimal control problems via swarm intelligence. Great emphasis is placed on the formulation of the optimal control problem with regard to fundamental issues such as identification of the unknowns, numerical transcription, and the choice of the nonlinear programming solver. Particle Swarm Optimization is taken into account, and most of the proposed problems are solved using a differential flatness formulation. When the inverse-dynamics approach is used, the transcribed parameter optimization problem is solved by assuming that the unknown trajectories are approximated with B-spline curves. The Inverse-dynamics Particle Swarm Optimization technique, which is employed in the majority of the numerical applications in this work, is a combination of Particle Swarm and the differential flatness formulation. This thesis also investigates other ways of solving optimal control problems with swarm intelligence, for instance using a direct-dynamics approach and imposing the necessary optimality conditions on the control policy a priori. For all the proposed problems, the results are analyzed and compared with other works in the literature. This thesis shows that metaheuristic algorithms can be used to solve optimal control problems, and that optimal or near-optimal solutions can be attained depending on the problem formulation.
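    A minimal sketch, under assumptions, of the inverse-dynamics transcription described above: a particle's decision vector is interpreted as the interior control points of a clamped B-spline trajectory with fixed endpoints, and the cost is a smoothness/control-effort proxy (the integral of the squared second derivative). The spline degree, knot vector, and cost functional are illustrative; the thesis problems use the actual flat-output dynamics and constraints.

```python
import numpy as np
from scipy.interpolate import BSpline

def trajectory_cost(decision, x0=0.0, xf=1.0, T=1.0, degree=3):
    """Map a PSO decision vector to a clamped B-spline and score its smoothness."""
    ctrl = np.concatenate(([x0], decision, [xf]))              # endpoints are fixed
    n = len(ctrl)
    # clamped, uniform knot vector so the curve starts and ends at the end control points
    knots = np.concatenate((np.zeros(degree),
                            np.linspace(0.0, T, n - degree + 1),
                            np.full(degree, T)))
    spline = BSpline(knots, ctrl, degree)
    t = np.linspace(0.0, T, 200)
    acc = spline.derivative(2)(t)                              # second derivative of the flat output
    return np.trapz(acc ** 2, t)                               # control-effort proxy

# example: each PSO particle would carry one such decision vector
print(trajectory_cost(np.array([0.2, 0.5, 0.8])))
```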

    Robust optimization in HTS cable based on DEPSO and design for six sigma

    The non-uniform AC current distribution among the multi-layer conductors of a high-temperature superconducting (HTS) cable reduces the current capacity and increases the AC loss. In this paper, particle swarm optimization coupled with a differential evolution operator (DEPSO) is applied to the structural optimization of HTS cables. Because fluctuations in the design variables or operating conditions have a great influence on cable quality, a robust design method based on design for six sigma (DFSS) is also applied, in order to eliminate the effects of parameter perturbations in the design and to improve the design efficiency. The optimization results show that the proposed procedure not only achieves a uniform current distribution, but also significantly improves the reliability and robustness of the HTS cable quality. © 2008 IEEE
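    A minimal sketch of the two ingredients named above: a differential-evolution mutation/crossover operator that can be mixed into the PSO update (one common DEPSO arrangement), and a DFSS-style robust objective that scores a design by its mean-plus-six-sigma performance under sampled parameter perturbations. The scale factor F, crossover rate CR, perturbation model, and sample count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def de_operator(population, i, F=0.5, CR=0.9):
    """DE/rand/1/bin trial vector for particle i, built from three other particles."""
    idx = np.random.choice([j for j in range(len(population)) if j != i], 3, replace=False)
    a, b, c = population[idx]
    mutant = a + F * (b - c)
    cross = np.random.rand(len(mutant)) < CR
    cross[np.random.randint(len(mutant))] = True               # keep at least one mutant gene
    return np.where(cross, mutant, population[i])

def robust_objective(design, performance, rel_sigma=0.02, n_samples=64):
    """DFSS-style score: mean + 6 * std of performance under design perturbations."""
    noise = rel_sigma * np.abs(design) * np.random.randn(n_samples, len(design))
    values = np.array([performance(design + eps) for eps in noise])
    return values.mean() + 6.0 * values.std()
```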