123 research outputs found

    A new approach to particle swarm optimization algorithm

    A particularly interesting group of algorithms comprises those that implement co-evolution or co-operation as found in natural environments, yielding much more powerful implementations. The main aim is to obtain an algorithm whose operation is not influenced by the environment. An unconventional view of optimization algorithms made it possible to develop a new algorithm, with metaphors defined for two groups of algorithms. These studies concern the particle swarm optimization algorithm as a predator and prey model. New properties of the algorithm are shown to result from the co-operation mechanism, which governs the algorithm's operation and significantly reduces environmental influence. Definitions of behavior-scenario functions give the algorithm a new feature that allows it to self-control the optimization process. This approach can be successfully used in computer games. The properties of the new algorithm make it worthy of interest, practical application, and further research on its development. This study can also serve as inspiration for the search for other solutions that implement co-operation or co-evolution.
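The predator and prey mechanism described above can be illustrated with a minimal sketch. This is not the authors' exact formulation; the repulsion rule, the predator's pursuit speed, and all parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # toy objective; the paper targets multimodal benchmark functions
    return np.sum(x**2)

def predator_prey_pso(f, dim=2, n=20, iters=200,
                      w=0.7, c1=1.5, c2=1.5, fear=2.0, radius=1.0):
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()
    predator = rng.uniform(-5, 5, dim)
    for _ in range(iters):
        # the predator pursues the current global best (the prey leader)
        predator += 0.3 * (g - predator)
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        # prey within the predator's radius flee; the repulsion maintains
        # diversity and counteracts premature convergence
        d = np.linalg.norm(pos - predator, axis=1, keepdims=True)
        flee = np.where(d < radius, fear * (pos - predator) / (d + 1e-12), 0.0)
        pos = pos + vel + flee
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = predator_prey_pso(sphere)
```

The key departure from plain PSO is the `flee` term: particles close to the predator are pushed away regardless of fitness, which is one simple way to realize the co-operation-driven diversity the abstract describes.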

    A Novel Hybrid Prairie Dog Optimization Algorithm - Marine Predator Algorithm for Tuning Parameters Power System Stabilizer

    The article presents parameter tuning of the Power System Stabilizer (PSS) using a hybrid method combining Prairie Dog Optimization (PDO) and the Marine Predator Algorithm (MPA), referred to as PDOMPA. In PDOMPA, the MPA searches around optimal individuals when updating population positions; it is used to make the exploration and exploitation stages of PDO more valid and accurate. PDO is inspired by the life of prairie dogs, which are adapted to colonizing underground burrows and have daily habits of foraging, watching for predators, and establishing fresh burrows or maintaining existing ones. MPA, in turn, is a mathematical model of marine predator behavior. To validate the performance of PDOMPA, the article presents a comparative simulation of the objective function and the transient response of the PSS, comparing against conventional methods, the Whale Optimization Algorithm (WOA), the Grasshopper Optimization Algorithm (GOA), MPA, and PDO. Based on the simulation results, PDOMPA converges quickly in several cases and shows optimal results compared with the competing algorithms. From the simulation results under load variations, the proposed method reduces the average undershoot and overshoot of speed by 42.2% and 85.37%, respectively, compared with the PSS Lead-Lag method; meanwhile, the average settling-time value of speed is 50.7%.
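The hybrid update idea (PDO-style exploration combined with an MPA-style refinement around the current elite) can be sketched as follows. The objective is a stand-in, and the move rules and parameters are illustrative assumptions, not the article's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def itae_like(params):
    # stand-in objective; a real PSS study would simulate the power system
    # and integrate a time-weighted speed-error criterion for these gains
    target = np.array([1.2, 0.05, 0.3])   # hypothetical "ideal" gains
    return np.sum(np.abs(params - target))

def hybrid_pdo_mpa(f, dim=3, n=15, iters=150, lb=0.0, ub=2.0):
    X = rng.uniform(lb, ub, (n, dim))
    fit = np.array([f(x) for x in X])
    for t in range(iters):
        best = X[fit.argmin()]
        step = 1.0 - t / iters            # shrink exploration over time
        for i in range(n):
            if rng.random() < 0.5:
                # PDO-like move: explore relative to a random peer (burrow)
                peer = X[rng.integers(n)]
                cand = X[i] + step * rng.normal(size=dim) * (peer - X[i])
            else:
                # MPA-like move: Brownian walk around the elite (predator)
                cand = best + 0.1 * step * rng.normal(size=dim)
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:               # greedy acceptance
                X[i], fit[i] = cand, fc
    return X[fit.argmin()], fit.min()

sol, err = hybrid_pdo_mpa(itae_like)
```

The MPA-like branch concentrates samples around the current best individual, which is one plausible reading of "searching around optimal individuals when updating population positions."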

    A Study of Homogeneous and Heterogeneous Particle Swarm Optimization Methods for Global Optimization Problems

    The premature convergence problem and the exploration-exploitation trade-off problem are the two major problems encountered by many swarm intelligence algorithms in both global optimization and large-scale global optimization. This thesis proposes that these two problems can be handled by several variants of Particle Swarm Optimization (PSO) developed below: five variants of homogeneous PSO for multimodal and large-scale global optimization problems, and two variants of dynamic heterogeneous PSO for complex real-world problems.

    First, an individual competition strategy is proposed for a new variant of PSO, namely Fitness Predator Optimization (FPO), for multimodal problems. Individual competition plays an important role in conserving diversity in the population, which is crucial for preventing premature convergence in multimodal optimization. To enhance the global exploration capability of FPO on highly multimodal problems, a modified parallel virtual-team approach is developed for FPO, namely DFPO. The main function of this dynamic virtual team is to build a parallel information-exchange system, strengthening the swarm's global search effectiveness. Furthermore, a team-size selection strategy is defined for DFPO, named DFPO-r, based on the fact that a dynamic virtual team with a higher degree of population diversity helps DFPO-r alleviate premature convergence and strengthen global exploration simultaneously. Experimental results demonstrate that both DFPO-r and DFPO perform well on multimodal functions; in addition, DFPO-r is more robust in most cases.

    Using hybrid algorithms to deal with specific real-world problems has been one of the most interesting trends in recent years. This thesis extends the FPO algorithm to the fuzzy clustering optimization problem: a combination of FPO with FCM (FPO-FCM) is proposed to avoid premature convergence and improve the performance of FCM. To handle large-scale global optimization, a modified BBPSO variant incorporating the Differential Evolution (DE) approach, namely BBPSO-DE, is developed to improve the swarm's global search capability as the dimensionality of the search space increases.

    To the best of our knowledge, Static Heterogeneous PSO (SHPSO) has been studied by several researchers, while Dynamic Heterogeneous PSO (DHPSO) has seldom been systematically investigated on real problems. This thesis proposes two variants of dynamic heterogeneous PSO, namely DHPSO-d and DHPSO-p, for complex real-world problems. In DHPSO-d, different update rules are assigned to different particles via a trigger event: when the global best position p_g is considered stagnant and the event is confirmed, p_g is reset and all particles update their positions using only their personal experience. In DHPSO-p, two proposed topology models give the particles different mechanisms for choosing their informers when the swarm is trapped in a local optimum. The empirical study of both variants shows that the dynamic self-adaptive heterogeneous structure effectively addresses the exploration-exploitation trade-off and provides excellent solutions for complex real-world problems. To conclude, the proposed biological-metaphor approaches give each PSO variant different search characteristics, making them more suitable for different types of real-world problems.

    Doctoral thesis (Doctor of Science), Hosei University
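The stagnation-triggered rule attributed to DHPSO-d above can be sketched minimally. This is a simplified reading: the stagnation threshold, the cognitive-only fallback, and all parameters are assumptions, not the thesis's exact mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)

def rastrigin(x):
    # classic multimodal benchmark
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def dhpso_d(f, dim=5, n=30, iters=300, w=0.7, c1=1.5, c2=1.5, patience=20):
    pos = rng.uniform(-5.12, 5.12, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    g, g_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()
    stagnant = 0
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        if stagnant >= patience:
            # trigger event: p_g considered stagnant, so drop the social
            # term and let particles follow personal experience only
            vel = w * vel + c1 * r1 * (pbest - pos)
            stagnant = 0
        else:
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, -5.12, 5.12)
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() < g_val - 1e-12:
            g, g_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()
            stagnant = 0
        else:
            stagnant += 1
    return g, g_val

best, best_val = dhpso_d(rastrigin)
```

Switching off the social component when p_g stalls makes the swarm heterogeneous in time: the same particles follow different update rules depending on the trigger, which is the defining feature of the dynamic heterogeneous design.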

    MAT: Genetic Algorithms Based Multi-Objective Adversarial Attack on Multi-Task Deep Neural Networks

    Vulnerability to adversarial attacks is a recognized deficiency not only of deep neural networks (DNNs) but also of multi-task deep neural networks (MT-DNNs), one that has attracted much attention in the past few years. To the best of our knowledge, all multi-task deep neural network adversarial attacks currently present in the literature are non-targeted attacks that use gradient descent to optimize a single loss function formed by aggregating all loss functions into one. By contrast, targeted attacks are sometimes preferred since they give more control over the attack. Hence, this paper proposes a novel targeted Multi-objective adversarial ATtack (MAT) based on genetic algorithms (GAs) that can create an adversarial image capable of affecting only the targeted loss functions of the MT-DNN system. MAT is trained on the Taskonomy dataset using a novel training algorithm, GAMAT, that consists of five specific stages. The superiority of the proposed attack is demonstrated in terms of the fitness-distance metric.

    A Brain Storm Optimization with Multiinformation Interactions for Global Optimization Problems

    The original Brain Storm Optimization (BSO) fails to consider some potential information interactions in its individual update pattern, causing premature convergence on complex problems. To address this problem, we propose a BSO algorithm with multi-information interactions (MIIBSO). First, a multi-information interaction (MII) strategy is developed that thoroughly considers various information interactions among individuals. Specifically, this strategy contains three new MII patterns: the first two aim to reinforce the information-interaction capability between individuals, while the third provides interactions between the corresponding dimensions of different individuals. The collaboration of these three patterns is established by an individual stagnation feedback (ISF) mechanism, which helps preserve population diversity and enhance the global search capability of MIIBSO. Second, a random grouping (RG) strategy is introduced to replace both the K-means algorithm and the cluster-center disruption of the original BSO, further enhancing the information-interaction capability and reducing the computational cost of MIIBSO. Finally, a dynamic difference step size (DDS), which offers individual feedback information and improves the search range, is designed to achieve an effective balance between global and local search capability. By combining the MII strategy, RG, and DDS, MIIBSO achieves effective improvements in global search ability, convergence speed, and computational cost. MIIBSO is compared with 11 BSO algorithms and five other algorithms on the CEC2013 test suite. The results confirm that MIIBSO obtains the best global search capability and convergence speed among the 17 algorithms.
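The random grouping idea above can be illustrated with a small sketch: instead of clustering the population with K-means each iteration, indices are simply shuffled and partitioned. The helper below is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_grouping(population, n_groups):
    """Randomly partition individual indices into n_groups clusters.

    A stand-in for the RG strategy: unlike K-means, it needs no distance
    computations or iterative refinement, so the per-iteration grouping
    cost is O(n) in the population size.
    """
    idx = rng.permutation(len(population))
    return np.array_split(idx, n_groups)

pop = rng.random((30, 10))            # 30 individuals, 10-dimensional
groups = random_grouping(pop, 5)      # 5 disjoint clusters of indices
sizes = [len(g) for g in groups]
```

Because the partition is redrawn every iteration, an individual mixes with different neighbors over time, which is one way to obtain the extra information exchange that the K-means clusters of the original BSO restrict.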