EvoX: A Distributed GPU-accelerated Library towards Scalable Evolutionary Computation
Over the past decades, evolutionary computation (EC) has demonstrated
promising potential for solving various complex optimization problems at
relatively small scales. Nowadays, however, ongoing developments in modern
science and engineering pose increasingly severe scalability challenges to the
conventional EC paradigm. As problem scales increase, on the one hand, the
encoding spaces (i.e., the dimensions of the decision vectors) grow
intrinsically larger; on the other hand, EC algorithms often require a growing
number of function evaluations (and possibly larger population sizes as well)
to work properly. Meeting these emerging challenges requires not only careful
algorithm design but, more importantly, a high-performance computing
framework. Hence, we develop a distributed
GPU-accelerated algorithm library -- EvoX. First, we propose a generalized
workflow for implementing general EC algorithms. Second, we design a scalable
computing framework for running EC algorithms on distributed GPU devices.
Third, we provide user-friendly interfaces to both researchers and
practitioners for benchmark studies as well as extended real-world
applications. To comprehensively assess the performance of EvoX, we conduct a
series of experiments, including: (i) scalability test via numerical
optimization benchmarks with problem dimensions/population sizes up to
millions; (ii) acceleration test via a neuroevolution task with multiple GPU
nodes; (iii) extensibility demonstration via the application to reinforcement
learning tasks on the OpenAI Gym. The code of EvoX is available at
https://github.com/EMI-Group/EvoX
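The generalized workflow the abstract describes can be pictured as a batched ask-evaluate-tell loop in which the whole population is scored in one vectorized call, the pattern that maps onto GPU execution. The following is a minimal NumPy sketch of that idea, not the actual EvoX API; all function names here are illustrative.

```python
import numpy as np

def sphere(pop):
    # Batched fitness: one call evaluates the whole (N, d) population at once,
    # which is the operation a GPU backend would parallelize.
    return np.sum(pop ** 2, axis=1)

def evolve(dim=10, pop_size=64, generations=200, sigma=0.1, seed=0):
    """Toy (mu, lambda)-style loop illustrating the ask-evaluate-tell pattern."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))                    # ask (initialize)
    for _ in range(generations):
        fitness = sphere(pop)                                 # evaluate (batched)
        elite = pop[np.argsort(fitness)[: pop_size // 4]]     # tell (select)
        parents = elite[rng.integers(0, len(elite), size=pop_size)]
        pop = parents + sigma * rng.normal(size=(pop_size, dim))  # mutate
    return pop[np.argmin(sphere(pop))]

best = evolve()
```

Replacing NumPy with a GPU array library would accelerate the batched evaluation step without changing the loop's structure, which is the separation EvoX's workflow abstraction exploits.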
A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications
Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most widely used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. We hope this survey will benefit researchers studying PSO algorithms.
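The canonical global-best PSO that underlies all the variants surveyed above can be stated in a few lines: each particle updates its velocity toward its personal best and the swarm's global best. The sketch below is a minimal NumPy illustration with commonly used parameter values (inertia w = 0.7, acceleration coefficients c1 = c2 = 1.5); it is not taken from the survey itself.

```python
import numpy as np

def pso(f, dim=5, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO (Kennedy & Eberhart, 1995) minimizing f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), f(x)                # personal bests
    g = pbest[np.argmin(pbest_val)]                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))   # stochastic weights
        # Velocity: inertia + cognitive pull (pbest) + social pull (gbest).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = f(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

best, best_val = pso(lambda x: np.sum(x**2, axis=1))  # 5-D sphere function
```

Most of the modifications listed in the survey (quantum-behaved, bare-bones, chaotic, fuzzy) alter only the velocity or position update inside this loop, which is why the basic skeleton is worth keeping in mind.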
GPU parallelization strategies for metaheuristics: a survey
Metaheuristics have been showing interesting results in solving hard optimization problems. However, they become limited in terms of effectiveness and runtime for high-dimensional problems. Thanks to the independence of metaheuristic components, parallel computing appears as an attractive choice to reduce execution time and improve solution quality. By exploiting the increasing performance and programmability of graphics processing units (GPUs) for this purpose, GPU-based parallel metaheuristics have been implemented using different designs. Recent results in this area show that GPUs tend to be effective co-processors for tackling complex optimization problems. In this survey, the mechanisms involved in GPU programming for implementing parallel metaheuristics are presented and discussed through a study of relevant research papers.
Metaheuristics can obtain satisfying results when solving optimization problems in reasonable time. However, they suffer from a lack of scalability and become limited when facing complex high-dimensional optimization problems. To overcome this limitation, GPU-based parallel computing appears as a strong alternative. Thanks to GPUs, parallel metaheuristics have achieved better results in terms of computation time, and even solution quality.
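The most common GPU design in this literature is the master-slave model: the search loop stays sequential while fitness evaluation of the whole population is offloaded as a data-parallel kernel. The sketch below illustrates that split using NumPy vectorization as a CPU stand-in for GPU data parallelism; the function names are illustrative.

```python
import numpy as np

def rastrigin_scalar(x):
    # One-solution-at-a-time evaluation: the serial baseline.
    return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

def rastrigin_batched(pop):
    # Data-parallel evaluation: every row of `pop` is scored in one vectorized
    # pass. On a GPU, each row (or each coordinate) maps to a thread.
    return 10 * pop.shape[1] + np.sum(pop**2 - 10 * np.cos(2 * np.pi * pop), axis=1)

pop = np.random.default_rng(0).uniform(-5.12, 5.12, size=(1000, 50))
serial = np.array([rastrigin_scalar(ind) for ind in pop])
parallel = rastrigin_batched(pop)
assert np.allclose(serial, parallel)  # same values, batched layout
```

Because evaluation usually dominates the runtime of population-based metaheuristics, parallelizing only this step already captures most of the available speedup, which is why the master-slave design recurs across the surveyed implementations.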
Machine learning into metaheuristics: A survey and taxonomy of data-driven metaheuristics
In recent years, research on applying machine learning (ML) to design efficient, effective, and robust metaheuristics has become increasingly popular. Many of these data-driven metaheuristics have generated high-quality results and represent state-of-the-art optimization algorithms. Although various approaches have been proposed, a comprehensive survey and taxonomy of this research topic has been lacking. In this paper, we investigate different opportunities for using ML within metaheuristics. We define, in a uniform manner, the various synergies that might be achieved. A detailed taxonomy is proposed according to the search component concerned: the target optimization problem, and the low-level and high-level components of metaheuristics. Our goal is also to motivate researchers in optimization to incorporate ideas from ML into metaheuristics. We identify open research issues in this area that need further in-depth investigation.
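One representative synergy in such taxonomies is surrogate-assisted evaluation: a cheap ML model, trained on solutions already evaluated, pre-screens new candidates so that expensive true evaluations are spent only on the most promising ones. The sketch below is a minimal illustration using a k-nearest-neighbor surrogate; the specific model and thresholds are our own assumptions, not from the survey.

```python
import numpy as np

def knn_surrogate(archive_x, archive_y, candidates, k=3):
    # Cheap ML surrogate: predict a candidate's fitness as the mean fitness
    # of its k nearest previously evaluated solutions.
    d = np.linalg.norm(candidates[:, None, :] - archive_x[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]          # indices of k neighbors
    return archive_y[nearest].mean(axis=1)

def sphere(pop):
    return np.sum(pop**2, axis=1)                   # stand-in "expensive" objective

rng = np.random.default_rng(0)
archive_x = rng.normal(size=(200, 8))               # already-evaluated solutions
archive_y = sphere(archive_x)                       # their true fitness values
offspring = rng.normal(size=(50, 8))                # new candidate solutions
pred = knn_surrogate(archive_x, archive_y, offspring)
# Spend true evaluations only on the 10 candidates the surrogate ranks best.
promising = offspring[np.argsort(pred)[:10]]
true_vals = sphere(promising)
```

In a real data-driven metaheuristic the evaluated candidates would be added back to the archive each generation, so the surrogate improves as the search proceeds.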
Global identification of electrical and mechanical parameters in PMSM drive based on dynamic self-learning PSO
A global parameter estimation method for a PMSM drive system is proposed, where the electrical parameters, mechanical parameters, and voltage-source-inverter (VSI) nonlinearity are regarded as a whole and parameter estimation is formulated as a single parameter optimization model. A dynamic learning estimator based on dynamic self-learning particle swarm optimization (DSLPSO) is proposed for tracking the electrical parameters, mechanical parameters, and VSI nonlinearity of the PMSM drive. In DSLPSO, a novel movement modification equation with a dynamic exemplar learning strategy is designed to ensure diversity and achieve a reasonable tradeoff between exploitation and exploration during the search process. Moreover, a nonlinear multi-scale interactive learning operator is introduced to accelerate the convergence of the Pbest particles; meanwhile, a dynamic opposition-based learning (OBL) strategy is designed to help the gBest particle explore potentially better regions. The proposed algorithm is applied to parameter estimation for a PMSM drive system. The results show that the proposed method performs better at tracking the variation of the electrical parameters while simultaneously estimating the immeasurable mechanical parameters and the VSI disturbance voltage.
Nature-inspired algorithms for solving some hard numerical problems
Optimisation is a branch of mathematics developed to find the optimal solution,
among all possible ones, for a given problem. Optimisation techniques
are currently employed in engineering, computing, and industrial problems. Optimisation is therefore a very active research area, leading to the publication of a large number of
methods for solving specific problems to optimality.
This dissertation focuses on the adaptation of two nature-inspired algorithms that, based
on optimisation techniques, are able to compute approximations to the zeros of polynomials
and the roots of non-linear equations and systems of non-linear equations.
Although many iterative methods for finding all the roots of a given function already
exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results
due to accumulated rounding errors; (b) good initial approximations to the
roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives,
which, besides being computationally intensive, is not always possible.
These drawbacks motivated the use of Particle Swarm
Optimisation (PSO) and Artificial Neural Networks (ANNs) for root-finding, since they are
known, respectively, for their ability to explore high-dimensional spaces (not requiring good
initial approximations) and for their capability to model complex problems. Moreover,
neither method needs repeated deflations or derivative information.
The algorithms are described throughout this document and tested using a suite of
hard numerical problems in science and engineering. The results were compared with
several results available in the literature and with the well-known Durand–Kerner method,
showing that both algorithms are effective at solving the numerical problems considered.
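The PSO-for-root-finding idea can be illustrated by recasting a root of a polynomial p as a minimizer of |p(z)|² over the complex plane, searched with a plain swarm: no derivatives, no deflation, and no careful initial guess. The sketch below is our own minimal illustration of that reformulation, not the dissertation's implementation.

```python
import numpy as np

def p(z):
    # Example polynomial: z^3 - 1, whose roots are the three cube roots of unity.
    return z**3 - 1

def root_pso(n_particles=40, iters=400, seed=0):
    """Find one root of p by running PSO on R^2, minimizing |p(x + iy)|^2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2, 2, size=(n_particles, 2))   # (Re z, Im z) per particle
    v = np.zeros_like(x)
    f = lambda x: np.abs(p(x[:, 0] + 1j * x[:, 1])) ** 2
    pbest, pval = x.copy(), f(x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = f(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g[0] + 1j * g[1]

z = root_pso()
```

By the minimum modulus principle, |p| has no local minima other than the roots themselves, so any point the swarm converges to is a genuine root; which of the three roots is found depends on the random initialization.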
Applied (Meta)-Heuristic in Intelligent Systems
Engineering and business problems are becoming increasingly difficult to solve due to the new economics triggered by big data, artificial intelligence, and the internet of things. Exact algorithms and heuristics are insufficient for solving such large and unstructured problems; instead, metaheuristic algorithms have emerged as the prevailing methods. A generic metaheuristic framework guides the course of search trajectories beyond local optimality, thus overcoming the limitations of traditional computation methods. The application of modern metaheuristics ranges from unmanned aerial and ground surface vehicles, unmanned factories, resource-constrained production, and humanoids to green logistics, renewable energy, circular economy, agricultural technology, environmental protection, finance technology, and the entertainment industry. This Special Issue presents high-quality papers proposing modern metaheuristics in intelligent systems
A Metaheuristic Inspired by the Bioluminescence of Fireflies Using Elite Opposition-Based Learning
Stagnation in local optima is a frequent problem of metaheuristic methods, including nature-inspired ones such as the Firefly Algorithm (FA). Although several approaches have been proposed, the problem remains an open question. This work presents a variant of the FA that uses Elite Opposition-Based Learning (EOBL), called FA–EOBL. The variant aims to generate diversity and increase the convergence speed of the original FA. Several experiments were carried out with 12 benchmark functions, which are commonly used to validate and compare new optimization algorithms. Overall, the results show the superiority of FA–EOBL when compared with the original FA.
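The elite opposition-based learning step can be sketched concisely: the best solutions are reflected through the population's current bounding box, and each elite keeps whichever of the pair (original, opposite) is better. The code below is a generic illustration of that operator under our own simplifying assumptions (a single random reflection weight, minimization), not the paper's FA–EOBL implementation.

```python
import numpy as np

def elite_opposition(pop, fitness, f, elite_frac=0.2, rng=None):
    # Elite opposition-based learning: reflect the elite solutions through the
    # center of the population's dynamic bounds [lo, hi], and keep whichever of
    # (elite, opposite) is better. This injects diversity when a swarm such as
    # the Firefly Algorithm stagnates in a local optimum.
    rng = rng or np.random.default_rng()
    n_elite = max(1, int(elite_frac * len(pop)))
    elite_idx = np.argsort(fitness)[:n_elite]       # best solutions (minimization)
    lo, hi = pop.min(axis=0), pop.max(axis=0)       # dynamic per-dimension bounds
    k = rng.random()                                # random reflection weight
    opposite = k * (lo + hi) - pop[elite_idx]       # opposite points of the elites
    opp_fit = f(opposite)
    better = opp_fit < fitness[elite_idx]
    pop[elite_idx[better]] = opposite[better]       # greedy replacement
    fitness[elite_idx[better]] = opp_fit[better]
    return pop, fitness

def sphere(x):
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(30, 4))
fit = sphere(pop)
pop, fit = elite_opposition(pop, fit, sphere, rng=rng)
```

In an FA–EOBL-style algorithm this operator would be invoked once per generation, after the standard firefly attraction moves, so the replacement is purely greedy and can never worsen the population.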