4 research outputs found

    Swarm Robotics

    This study analyzes and extends the swarm intelligence (SI) method represented by the self-organizing migrating algorithm (SOMA) to solve both industrial and academic optimization problems, and applies it to swarm robotics. Specifically, the characteristics of SOMA are clarified and its strengths and weaknesses analyzed, leading to the release of SOMA T3A, SOMA Pareto, and iSOMA, whose outstanding performance is confirmed on the well-known IEEE CEC 2013, 2015, 2017, and 2019 test suites. In addition, the dynamic path planning problem for swarm robotics is handled by the proposed algorithms as a prime application. Computational and simulation results in Matlab demonstrate the performance of the novel algorithms and the correctness of the obstacle avoidance method for mobile robots and drones. Furthermore, two of the three proposed versions placed in the 100-Digit Challenge competition at CEC 2019, GECCO 2019, and SEMCCO 2019, one tying for 3rd place (with HyDE-DF) and the other taking 5th, something no previous version of SOMA had achieved. These results show the promising possibilities that SOMA and SI algorithms offer.
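    For orientation, the sketch below shows the classic SOMA AllToOne migration step in Python/NumPy, which the proposed T3A, Pareto, and iSOMA variants build on (their changes to leader selection and parameter adaptation are not reproduced here). Parameter names and default values follow common SOMA descriptions and are assumptions, not the thesis code.

```python
import numpy as np

def soma_all_to_one(f, pop, bounds, path_length=3.0, step=0.11, prt=0.1,
                    migrations=50, rng=None):
    """Minimal sketch of the classic SOMA AllToOne migration: each individual
    moves toward the current leader in discrete steps along a perturbed path
    and keeps the best position it visited."""
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = bounds
    fitness = np.array([f(x) for x in pop])
    for _ in range(migrations):
        leader = pop[np.argmin(fitness)].copy()
        for i in range(len(pop)):
            x = pop[i]
            best_x, best_f = x.copy(), fitness[i]
            for t in np.arange(step, path_length + 1e-12, step):
                # PRT vector: each coordinate follows the leader with probability prt
                prt_vec = (rng.random(x.size) < prt).astype(float)
                cand = np.clip(x + (leader - x) * t * prt_vec, lo, hi)
                fc = f(cand)
                if fc < best_f:
                    best_x, best_f = cand, fc
            pop[i], fitness[i] = best_x, best_f
    k = int(np.argmin(fitness))
    return pop[k], float(fitness[k])

# Usage: minimize a 10-D sphere function with a population of 20 individuals
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(20, 10))
best_x, best_f = soma_all_to_one(lambda x: float(np.sum(x * x)), pop, (-5.0, 5.0), rng=rng)
```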

    Treasure hunt : a framework for cooperative, distributed parallel optimization

    Advisor: Prof. Dr. Daniel Weingaertner. Co-advisor: Prof. Dr. Myriam Regattieri Delgado. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 27/05/2019. Includes references: p. 18-20. Area of concentration: Computer Science.
    Abstract: This work proposes a multilevel framework called Treasure Hunt, which is capable of distributing independent search algorithms to a large number of processing nodes. Aiming to obtain joint convergence between working nodes, Treasure Hunt proposes a driving mechanism that smoothly controls the cooperation between the multiple independent Treasure Hunt instances. The tree topology proposed by Treasure Hunt ensures quick propagation of information, while providing simultaneous explorations (by parents) and exploitations (by children) at several levels of granularity, regardless of the number of nodes in the tree. Treasure Hunt has good fault tolerance and is partially prepared for full fault tolerance.
    As part of the methods developed during this work, an automated Iterative Partitioning method is proposed to control the balance between exploration and exploitation as the search progresses. A Convergence Stabilization Modeling that operates in online mode is also proposed, aiming to find good cost/benefit stopping points for the optimization algorithms running within the Treasure Hunt instances. Experiments on classic, random, and competition benchmarks of various sizes and complexities, using the search algorithms PSO, DE, and CCPSO2, show that Treasure Hunt boosts the inherent characteristics of these search algorithms: it makes algorithms with poor performance comparable to good ones, and allows algorithms with good performance to extend their limits to larger problems. Experiments distributing Treasure Hunt instances in a cooperative network of up to 160 processes demonstrate the robust scaling of the framework, presenting improved results even when a fixed wall-clock time is imposed on all distributed instances. Results show that the sampling mechanism provided by Treasure Hunt, allied to the increased cooperation between multiple evolving populations, reduces the need for large population sizes and complex search algorithms. This is especially important for real-world problems with time-consuming fitness functions. Keywords: Artificial intelligence. Optimization methods. Distributed algorithms. Convergence modeling. High dimensionality.
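    The tree-structured cooperation can be pictured with a small sketch. The Python code below is a hypothetical, sequential illustration of the idea only, not the distributed framework: a parent node explores its whole box with some independent search routine, children exploit shrunken boxes centred on the parent's best point, and the best result propagates back up the tree. The local_search stand-in and the shrink/branching parameters are assumptions for illustration.

```python
import numpy as np

def local_search(f, lo, hi, evals=200, rng=None):
    """Stand-in for an independent search algorithm (the thesis uses PSO, DE,
    and CCPSO2): plain uniform random sampling inside the given box."""
    if rng is None:
        rng = np.random.default_rng()
    pts = rng.uniform(lo, hi, size=(evals, lo.size))
    vals = np.array([f(p) for p in pts])
    k = int(np.argmin(vals))
    return pts[k], float(vals[k])

def tree_search(f, lo, hi, depth=3, branching=2, shrink=0.25, rng=None):
    """Hypothetical sketch of a tree-structured cooperative search: a parent
    explores its whole box, children exploit shrunken boxes centred on the
    parent's best point, and the best result propagates back up the tree."""
    if rng is None:
        rng = np.random.default_rng()
    best_x, best_f = local_search(f, lo, hi, rng=rng)         # exploration at this node
    if depth > 1:
        half = (hi - lo) * shrink / 2.0
        for _ in range(branching):                            # children intensify
            c_lo = np.maximum(lo, best_x - half)
            c_hi = np.minimum(hi, best_x + half)
            x, v = tree_search(f, c_lo, c_hi, depth - 1, branching, shrink, rng)
            if v < best_f:                                     # result propagates upward
                best_x, best_f = x, v
    return best_x, best_f

# Usage: 5-D Rastrigin test function over [-5.12, 5.12]^5
rastrigin = lambda x: float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
x, v = tree_search(rastrigin, np.full(5, -5.12), np.full(5, 5.12), depth=3, branching=3)
```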

    Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations

    Over recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, the normalized Mahalanobis distance, and Ursem's hill-valley function, to develop a new tool for multimodal optimization that does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and from previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resulting method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of robust mean peak ratio. Based on the numerical results using both the available and the newly introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
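    As a rough illustration of the repelling mechanism, the sketch below rejects Gaussian offspring that fall within a taboo point's repelling radius, measured with a normalized Mahalanobis distance under the subpopulation covariance. It is a hedged approximation of the idea described above, not the RS-CMSA implementation; the normalization by dimension, the base_radius parameter, and the per-point power are illustrative assumptions.

```python
import numpy as np

def normalized_mahalanobis(x, y, cov):
    """Mahalanobis distance between x and y under covariance cov, normalized
    by the square root of the dimension (this normalization is an assumption)."""
    d = x - y
    return float(np.sqrt((d @ np.linalg.solve(cov, d)) / x.size))

def is_taboo(candidate, taboo_points, cov, base_radius=1.0):
    """Reject candidates falling inside the repelling radius of any taboo point.
    Each taboo point is (centre, power); `power` stands in for the adaptive
    repelling strength mentioned in the abstract."""
    return any(normalized_mahalanobis(candidate, centre, cov) < base_radius * power
               for centre, power in taboo_points)

def sample_offspring(mean, cov, taboo_points, rng, max_tries=100):
    """Resample a Gaussian offspring until it clears every taboo region."""
    cand = rng.multivariate_normal(mean, cov)
    for _ in range(max_tries):
        if not is_taboo(cand, taboo_points, cov):
            break
        cand = rng.multivariate_normal(mean, cov)
    return cand

# Usage: a subpopulation centred at (2, 2, 2) is repelled from an already
# identified basin at the origin.
rng = np.random.default_rng(1)
cov = 0.5 * np.eye(3)
taboo = [(np.zeros(3), 1.0)]
child = sample_offspring(np.full(3, 2.0), cov, taboo, rng)
```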

    Evolutionary Algorithms and Computational Methods for Derivatives Pricing

    This work aims to provide novel computational solutions to the problem of derivative pricing. To achieve this, a novel hybrid evolutionary algorithm (EA) based on particle swarm optimisation (PSO) and differential evolution (DE) is introduced and applied, along with various other state-of-the-art variants of PSO and DE, to the problem of calibrating the Heston stochastic volatility model. It is found that state-of-the-art DEs provide excellent calibration performance, and that previous use of rudimentary DEs in the literature undervalued these methods. The use of neural networks with EAs for approximating the solution of derivatives pricing models is investigated next. A set of neural networks is trained on Monte Carlo (MC) simulation data to approximate the closed-form solution for European, Asian, and American style options. The results are comparable to MC pricing, but offline evaluation of the price using the neural networks is orders of magnitude faster and computationally more efficient. Finally, the use of custom hardware for the numerical pricing of derivatives is introduced. The solver presented here provides an energy-efficient data-flow implementation for pricing derivatives, which has the potential to be incorporated into larger high-speed/low-energy trading systems.
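    To make the calibration step concrete, here is a hedged sketch, not the thesis's hybrid PSO/DE implementation, that fits the Heston parameters (kappa, theta, sigma, rho, v0) with SciPy's differential_evolution by minimizing squared pricing errors against a few placeholder quotes. The Monte Carlo pricer uses a full-truncation Euler scheme with deliberately coarse settings to keep the example cheap; all names, bounds, and quote values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def heston_mc_call(S0, K, T, r, kappa, theta, sigma, rho, v0,
                   n_paths=2000, n_steps=50):
    """European call under Heston dynamics, priced by a full-truncation Euler
    Monte Carlo scheme. The fixed seed gives common random numbers across calls,
    so the calibration objective below is deterministic."""
    rng = np.random.default_rng(0)
    dt = T / n_steps
    log_s = np.full(n_paths, np.log(S0))
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                      # full truncation of the variance
        log_s += (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
        v += kappa * (theta - vp) * dt + sigma * np.sqrt(vp * dt) * z2
    payoff = np.maximum(np.exp(log_s) - K, 0.0)
    return float(np.exp(-r * T) * payoff.mean())

# Placeholder market quotes (strike, maturity, mid price); values are illustrative only.
S0, r = 100.0, 0.02
quotes = [(90.0, 1.0, 14.0), (100.0, 1.0, 8.0), (110.0, 1.0, 4.0)]

def objective(p):
    """Sum of squared pricing errors; p = (kappa, theta, sigma, rho, v0)."""
    kappa, theta, sigma, rho, v0 = p
    return sum((heston_mc_call(S0, K, T, r, kappa, theta, sigma, rho, v0) - mid) ** 2
               for K, T, mid in quotes)

bounds = [(0.1, 10.0), (0.01, 0.5), (0.05, 1.0), (-0.95, 0.0), (0.01, 0.5)]
result = differential_evolution(objective, bounds, maxiter=10, popsize=6, seed=1)
print(result.x, result.fun)
```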