6 research outputs found

    An enhanced particle swarm optimization algorithm

    In this paper, an enhanced stochastic optimization algorithm based on the basic Particle Swarm Optimization (PSO) algorithm is proposed. The basic PSO algorithm is modeled on the social foraging behavior of some animal species. Its parameters can influence the solution considerably, and it suffers from weaknesses such as slow convergence and premature convergence. To address these shortcomings, several enhanced velocity-update methods, such as the Exponential Decay Inertia Weight (EDIW), are proposed in this work to construct an Enhanced PSO (EPSO) algorithm. The suggested algorithm is numerically evaluated on five benchmark functions against the basic PSO approaches, and the performance of the EPSO algorithm is analyzed and discussed based on the test results.
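To make the inertia-weight idea concrete, here is a minimal Python sketch of a PSO velocity update with an exponentially decaying inertia weight. The decay form, the bounds w_max and w_min, and the rate lam are illustrative assumptions, not the paper's exact EDIW definition.

```python
import numpy as np

def ediw(iteration, max_iter, w_max=0.9, w_min=0.4, lam=3.0):
    """Exponentially decaying inertia weight (illustrative form, not the paper's exact EDIW)."""
    return w_min + (w_max - w_min) * np.exp(-lam * iteration / max_iter)

def velocity_update(v, x, pbest, gbest, iteration, max_iter, c1=2.0, c2=2.0, rng=np.random):
    """Standard PSO velocity update, using the decaying inertia weight above."""
    w = ediw(iteration, max_iter)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

Decaying the inertia weight in this way favors broad exploration early in the run and finer local search near the end, which is the usual motivation for time-varying inertia schemes.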

    Nonlinear Optimization Based on "Particle Swarms"

    Particle swarm optimization is a stochastic programming strategy that has become very popular in recent years thanks to its good convergence to global optima and its ease of implementation. However, like other evolutionary techniques, its main weakness lies in the handling of constraints. This contribution proposes a methodology for efficiently handling the optimization constraints that typically arise in process engineering problems. Sociedad Argentina de Informática e Investigación Operativa.
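The abstract does not spell out the constraint-handling rule, so the following is only a generic sketch of one common alternative, a static penalty added to the objective before it is handed to the swarm; the penalty weight rho and the g(x) <= 0 convention are assumptions, not the method proposed by the authors.

```python
import numpy as np

def penalized_objective(f, constraints, rho=1e3):
    """Wrap an objective f(x) with a static penalty for violated constraints g_i(x) <= 0.
    Generic technique for illustration; not necessarily the paper's methodology."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + rho * violation
    return wrapped

# Hypothetical example: minimize x^2 + y^2 subject to x + y >= 1 (written as 1 - x - y <= 0)
f = lambda x: float(np.dot(x, x))
g = [lambda x: 1.0 - x[0] - x[1]]
pso_objective = penalized_objective(f, g)  # feed this to any unconstrained PSO loop
```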

    A new approach to particle swarm optimization algorithm

    A particularly interesting group of algorithms consists of those that implement the co-evolution or co-operation found in natural environments, which yields much more powerful implementations. The main aim is to obtain an algorithm whose operation is not influenced by the environment. An unusual view of optimization algorithms made it possible to develop a new algorithm and to define its metaphors for two groups of algorithms. These studies treat the particle swarm optimization algorithm as a model of predator and prey. New properties of the algorithm, resulting from a co-operation mechanism that governs its operation and significantly reduces environmental influence, are shown. Defining behavior-scenario functions gives the algorithm a new feature that allows it to self-control the optimization process; this approach can also be used successfully in computer games. The properties of the new algorithm make it worthy of interest, practical application, and further development, and this study may also inspire the search for other solutions that implement co-operation or co-evolution.
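A rough sketch of the predator-prey mechanism described above, assuming the common formulation in which prey particles follow the standard PSO update plus a repulsion term that grows as a predator particle approaches, while the predator chases the global best; the repulsion form and the parameters a and b are illustrative, not the exact model of this paper.

```python
import numpy as np

def prey_velocity(v, x, pbest, gbest, predator_x, w=0.7, c1=1.5, c2=1.5,
                  a=1.0, b=2.0, rng=np.random):
    """Prey update: standard PSO terms plus a repulsion that grows as the predator gets close.
    The repulsion D(d) = a * exp(-b * d) is an assumed form borrowed from common
    predator-prey PSO variants."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    d = np.linalg.norm(x - predator_x)
    repulsion = a * np.exp(-b * d) * np.sign(x - predator_x)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x) + repulsion

def predator_velocity(x_pred, gbest, c=0.5, rng=np.random):
    """Predator update: move toward the current global best to disturb crowded regions."""
    return c * rng.random(x_pred.shape) * (gbest - x_pred)
```

The repulsion term keeps prey from stagnating around a single attractor, which is how the co-operation mechanism can reduce premature convergence in this family of algorithms.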

    Particle swarm optimization using dimension selection methods

    Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. Being a stochastic algorithm, PSO and its randomness present a formidable challenge for theoretical analysis, and few of the existing PSO improvements have tried to eliminate the random coefficients in the PSO updating formula. This paper analyzes the importance of randomness in PSO and then presents a PSO variant without randomness to show that traditional PSO cannot work without it. Based on this analysis, another way of using randomness is proposed in the PSO with random dimension selection (PSORDS) algorithm, which uses random dimension selection instead of stochastic coefficients. Finally, deterministic dimension selection methods are proposed; the resulting PSO with distance-based dimension selection (PSODDS) algorithm is greatly superior to traditional PSO, while the PSO with heuristic dimension selection (PSOHDS) algorithm is comparable to it. In addition, applying the dimension selection method to a newly proposed modified particle swarm optimization (MPSO) algorithm also yields improved results. The experimental results demonstrate that the analysis of randomness is correct and that the deterministic dimension selection method is very helpful.
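A minimal sketch of the random-dimension-selection idea as described in the abstract: instead of multiplying every coordinate by random coefficients, only a randomly chosen subset of dimensions is updated at each iteration, with deterministic coefficients. The subset size k and the coefficient values are assumptions.

```python
import numpy as np

def psords_velocity(v, x, pbest, gbest, k=2, w=0.7, c1=2.0, c2=2.0, rng=np.random):
    """Update only k randomly selected dimensions, with deterministic coefficients
    (sketch of the random-dimension-selection idea; parameter values are illustrative)."""
    v_new = v.copy()
    dims = rng.choice(x.size, size=min(k, x.size), replace=False)
    v_new[dims] = (w * v[dims]
                   + c1 * (pbest[dims] - x[dims])
                   + c2 * (gbest[dims] - x[dims]))
    return v_new
```

Here the randomness lives in which coordinates move rather than in how far they move, which is the shift in the use of randomness that the paper argues for.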

    Multi-objective robust optimization by non-probabilistic interval analysis: an application to vehicle comfort and safety under coupled lateral and vertical dynamics

    This thesis proposes a new tool for Non-probabilistic Interval Analysis for Multi-objective Robust Design Optimization (NPIA-MORDO). The tool optimizes the lumped suspension parameters of a full vehicle model subjected to a double-lane-change (DLC) maneuver over different road profiles, in order to ensure comfort and safety for the driver. The multi-body model has 15 degrees of freedom (15-DOF), of which 11 represent the vehicle and its seat and 4 represent the driver's biodynamic model. The multi-objective function is composed of conflicting objectives and their tolerances, such as the root-mean-square (RMS) lateral and vertical accelerations at the driver's seat developed during the DLC maneuver. The suspension working space and the road-holding capacity are treated as constraints of the optimization problem. Uncertainties in the system behavior are quantified by non-probabilistic interval analysis, using the α-cut levels method at the zero α-level (the one with the largest dispersion), carried out concurrently with the multi-objective optimization process; these uncertainties are applied both to the problem parameters and to the design variables. For validation of the model, implemented in MATLAB®, the trajectory of the body's center of gravity during the maneuver is compared with the commercial software CARSIM®, as are the lateral and vertical tire forces. The results are presented in several plots obtained from the Pareto front between the multiple conflicting objectives of the evaluated model. The solutions on the Pareto front satisfy the conditions of the problem, and the multi-objective function obtained by aggregating the objectives shows a difference of 1.66% between the solutions with the lowest and highest aggregated values. From the design variables of the best solution on the front, plots are generated for each degree of freedom showing the time histories of displacements, velocities, and accelerations. In this case, the RMS vertical acceleration at the driver's seat is 1.041 m/s² with a tolerance of 0.631 m/s², while the RMS lateral acceleration is 1.908 m/s² with a tolerance of 0.168 m/s². The results obtained with NPIA-MORDO confirm that it is possible to account for the uncertainties of the parameters and design variables while the outer optimization loop runs, avoiding the need for subsequent uncertainty-propagation analyses. The non-probabilistic interval analysis employed by the tool is a viable alternative measure of dispersion compared with the standard deviation, because it does not require a prior probability distribution and because it matches practice in the automotive industry, where tolerances are preferred.
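To illustrate the non-probabilistic interval analysis at the zero α-level, here is a small sketch that bounds a response over a box of interval-valued parameters by evaluating its corners; the objective, the parameter intervals, and the corner-enumeration shortcut (exact only for monotonic responses) are all assumptions for illustration, not the thesis's vehicle model.

```python
import itertools
import numpy as np

def interval_bounds(objective, intervals):
    """Bound a scalar response over a box of interval-valued parameters (alpha = 0 cut)
    by evaluating every corner of the box. Exact only for monotonic responses;
    otherwise a corner-sample estimate."""
    corners = itertools.product(*intervals)
    values = [objective(np.array(c)) for c in corners]
    return min(values), max(values)

# Hypothetical example: a simple response of two lumped suspension parameters
objective = lambda p: 1.0 / p[0] + 0.05 * p[1]
intervals = [(18e3, 22e3),    # spring stiffness [N/m], assumed +/- 10% tolerance
             (1.4e3, 1.8e3)]  # damping coefficient [N s/m], assumed interval
lo, hi = interval_bounds(objective, intervals)
nominal, tolerance = 0.5 * (hi + lo), 0.5 * (hi - lo)  # midpoint and half-width of the response
```

Evaluating such bounds inside the outer optimization loop is what lets the method report each objective together with its tolerance, instead of running a separate uncertainty-propagation study afterwards.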

    On the improvements of the particle swarm optimization algorithm

    Since a particle swarm optimization (PSO) algorithm uses a coordinated search to find the optimum solution, it has a better chance of finding the global solution. Despite this advantage, some of the parameters used in PSO can affect the solution significantly. Following this observation, this research tunes some of these parameters and adds mechanisms to the PSO algorithm in order to improve its robustness in finding the global solution. The main approaches include using uniform design to ensure a uniform distribution of the initial particles in the design space, adding a mutation operation to increase the diversity of particles, automatically decreasing the maximum velocity limit and the velocity inertia to balance local and global search, reducing velocity when constraints are violated, and using Gaussian-distribution-based local searches to escape local minima. In addition, an algorithm is developed to find multiple solutions in a single run. The results show that the overall effect of these approaches yields better results for most test problems.
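As one concrete illustration of the mechanisms listed above, the sketch below applies a Gaussian-distribution-based local search around the current global best; the step scale sigma and the number of trials are assumptions, and the other mechanisms (uniform-design initialization, mutation, automatic velocity limits) are not shown.

```python
import numpy as np

def gaussian_local_search(f, gbest, sigma=0.1, trials=20, rng=np.random):
    """Sample Gaussian perturbations around the global best and keep any improvement
    (illustrative version of the Gaussian local search mentioned in the abstract)."""
    best_x, best_f = gbest.copy(), f(gbest)
    for _ in range(trials):
        candidate = best_x + rng.normal(0.0, sigma, size=best_x.shape)
        fc = f(candidate)
        if fc < best_f:
            best_x, best_f = candidate, fc
    return best_x, best_f
```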