
    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with the genetic algorithm, simulated annealing, tabu search, artificial immune systems, the ant colony algorithm, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. We hope this survey will be useful for researchers studying PSO algorithms.
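    As a point of reference for the variants surveyed above, the canonical global-best PSO update can be sketched as follows. This is only a minimal illustration; the parameter values (inertia weight, acceleration coefficients, bounds) are typical textbook choices, not taken from the survey.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO sketch; parameter values are illustrative only."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()             # global best (fully connected topology)

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimize the 10-dimensional sphere function
best, best_val = pso_minimize(lambda p: float(np.sum(p * p)), dim=10)
```

    The single global best used here corresponds to the fully connected topology mentioned in the abstract; ring, von Neumann, or random topologies would instead restrict each particle to the best solution within its neighbourhood.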

    An Approach Based on Particle Swarm Optimization for Inspection of Spacecraft Hulls by a Swarm of Miniaturized Robots

    The remoteness and hazards inherent to the operating environments of space infrastructure motivate the need for automated robotic inspection. In particular, micrometeoroid and orbital debris impacts and structural fatigue are common sources of damage to spacecraft hulls. Vibration sensing has been used to detect structural damage in spacecraft hulls, as well as in structural health monitoring practice in industry, by deploying static sensors. In this paper, we propose using a swarm of miniaturized vibration-sensing mobile robots that realizes a network of mobile sensors. We present a distributed inspection algorithm, based on bio-inspired particle swarm optimization and evolutionary-algorithm niching techniques, for the task of enumerating and localizing an a priori unknown number of vibration sources on a simplified 2.5D spacecraft surface. The algorithm is deployed on a swarm of simulated cm-scale wheeled robots, which are guided in their inspection task by vibrations arising from failure points on the surface, detected by on-board accelerometers. We study three performance metrics: (1) proximity of the localized sources to the ground-truth locations, (2) time to localize each source, and (3) time to finish the inspection task given a 75% inspection coverage threshold. We find that our swarm is able to successfully localize the present sources.
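    The abstract does not reproduce the inspection algorithm itself. The sketch below only illustrates, under assumed details, how PSO-style movement can be combined with a niching rule so that groups of robots settle on different vibration sources; the `amplitude` callable is a hypothetical stand-in for the on-board accelerometer reading at a robot's position.

```python
import numpy as np

def speciated_step(positions, velocities, amplitude, niche_radius=0.5,
                   w=0.6, c=1.5, rng=np.random.default_rng(1)):
    """One hedged update step: each robot follows the strongest robot in its niche.

    Robots within `niche_radius` of a stronger robot join that robot's species,
    so different species can converge on different vibration sources.
    """
    fitness = np.array([amplitude(p) for p in positions])
    order = np.argsort(-fitness)                     # strongest signal first
    seeds = []                                       # species seeds (candidate sources)
    leader = np.empty_like(positions)
    for i in order:
        for s in seeds:
            if np.linalg.norm(positions[i] - positions[s]) < niche_radius:
                leader[i] = positions[s]             # join an existing species
                break
        else:
            seeds.append(i)                          # start a new species
            leader[i] = positions[i]
    r = rng.random(positions.shape)
    velocities = w * velocities + c * r * (leader - positions)
    return positions + velocities, velocities, [positions[s] for s in seeds]
```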

    Cooperative Particle Swarm Optimization for Combinatorial Problems

    A particularly successful line of research for numerical optimization is the well-known computational paradigm of particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., Rn) seems to strongly predispose the PSO toward continuous optimization. While some attempts have been made to develop PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in the search space and rely on additional local search algorithms. In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than a single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, and each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism that guides them in assessing their individual contribution to the overall solution. Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies for splitting the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of a parsimonious set of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations. Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and the results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and the diversity of the solutions obtained on a protein sequence design problem, a highly multimodal problem. These results provide evidence that extended PSO algorithms are capable of dealing with combinatorial problems without having to hybridize the PSO with other local search techniques or sacrifice the concept of particles moving through a continuous search space.
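    The CCPSO variants themselves are not specified in the abstract. The following is only a rough continuous-space illustration of the cooperative idea it describes: each component of the problem has its own sub-swarm, and a particle is scored by inserting its component value into a jointly maintained context solution (the feedback mechanism); any decoding from continuous positions to discrete components is omitted here.

```python
import numpy as np

def cooperative_pso(f, n_components, swarm_size=10, iters=100,
                    w=0.72, c1=1.49, c2=1.49, rng=np.random.default_rng(0)):
    """Hedged sketch of cooperative PSO: one sub-swarm per component; particles
    are evaluated by their contribution to the collectively built solution."""
    x = rng.uniform(-1, 1, (n_components, swarm_size))   # one row of particles per component
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.full((n_components, swarm_size), np.inf)
    context = x[:, 0].copy()                              # collectively constructed solution

    for _ in range(iters):
        for j in range(n_components):
            for i in range(swarm_size):
                trial = context.copy()
                trial[j] = x[j, i]                        # feedback: contribution to the whole
                val = f(trial)
                if val < pbest_val[j, i]:
                    pbest[j, i], pbest_val[j, i] = x[j, i], val
                if val < f(context):
                    context[j] = x[j, i]                  # accept an improving contribution
            gbest = pbest[j, pbest_val[j].argmin()]
            r1, r2 = rng.random((2, swarm_size))
            v[j] = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest - x[j])
            x[j] = x[j] + v[j]
    return context, f(context)

# Example: assemble a 5-component solution minimizing the sphere function
best, best_val = cooperative_pso(lambda z: float(np.sum(z * z)), n_components=5)
```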

    Nature-inspired algorithms for solving some hard numerical problems

    Optimisation is a branch of mathematics developed to find the optimal solution, among all possible ones, for a given problem. Optimisation techniques are currently employed in engineering, computing, and industrial problems, making optimisation a very active research area and leading to the publication of a large number of methods for solving specific problems to optimality. This dissertation focuses on the adaptation of two nature-inspired algorithms that, based on optimisation techniques, are able to compute approximations to the zeros of polynomials and the roots of non-linear equations and systems of non-linear equations. Although many iterative methods for finding all the roots of a given function already exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results due to the accumulation of rounding errors; (b) good initial approximations to the roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives, which, besides being computationally intensive, is not always possible. These drawbacks motivated the use of Particle Swarm Optimisation (PSO) and Artificial Neural Networks (ANNs) for root-finding, since they are known, respectively, for their ability to explore high-dimensional spaces (not requiring good initial approximations) and for their capability to model complex problems. Moreover, neither method needs repeated deflations or derivative information. The algorithms are described throughout this document and tested on a suite of hard numerical problems in science and engineering. The results were compared with several results available in the literature and with the well-known Durand–Kerner method, showing that both algorithms are effective at solving the numerical problems considered.
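    The dissertation's own formulation is not reproduced in the abstract; the sketch below only illustrates the general idea of PSO-based root-finding it alludes to, namely recasting a root of p(x) as a minimizer of |p(x)|, which needs neither derivatives nor deflation. Repeated runs (or a niching variant) would be needed to collect all roots.

```python
import numpy as np

def pso_root(p, bounds=(-10.0, 10.0), n_particles=40, iters=300,
             w=0.72, c1=1.49, c2=1.49, rng=np.random.default_rng(0)):
    """Hedged sketch: locate one root of p by minimizing |p(x)| with PSO."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)              # candidate roots
    v = np.zeros(n_particles)
    cost = lambda z: abs(p(z))                        # residual as the objective
    pbest = x.copy()
    pbest_val = np.array([cost(z) for z in x])
    g = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(z) for z in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()]
    return g, pbest_val.min()

# Example: a real root of x^3 - 2x - 5 (near 2.0946)
root, residual = pso_root(lambda z: z**3 - 2*z - 5)
```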

    Information Exchange and Conflict Resolution in Particle Swarm Optimization Variants

    Single-population, biologically inspired algorithms such as the Genetic Algorithm and Particle Swarm Optimization are effective tools for solving a variety of optimization problems. Like many such algorithms, however, they fall victim to the curse of dimensionality. They also often suffer from a phenomenon known as hitchhiking, where improved solutions are not unequivocally better for all variables. Insofar as individuals within these populations are deemed to be competitive, one response to both the curse of dimensionality and the problem of hitchhiking has been to introduce more cooperation. These multi-population algorithms cooperate by decomposing a problem into parts and assigning a population to each part. Factored Evolutionary Algorithms (FEA) generalize this decomposition and cooperation to any evolutionary algorithm. A key element of FEA is a global solution that provides missing information to the individual populations and coordinates them. This dissertation extends FEA to the distributed case by having individual populations maintain and coordinate local solutions that preserve consensus. This Distributed FEA (DFEA) is shown to perform well on a variety of problems, sometimes even when consensus is lost. However, DFEA fails to maintain the same semantics as FEA. To address this issue, we develop an alternative framework to the "cooperation versus competition" dichotomy. In this framework, information flows are modeled as a blackboard architecture. Changes in the blackboard are modeled as merge operations that require conflict resolution between existing and candidate values. Conflict resolution is handled using Pareto efficiency, which avoids hitchhiking. We apply this framework to FEA and DFEA and develop a revised DFEA, which performs identically to FEA. We then apply our framework to a single-population algorithm, Particle Swarm Optimization (PSO), to create Pareto Improving PSO (PI-PSO). We demonstrate that PI-PSO outperforms PSO, and sometimes FEA-PSO, often with fewer individuals. Finally, we extend our information-based approach by implementing parallel, distributed versions of FEA and DFEA using the Actor model. The Actor model is based on message passing, which accords well with our information-centric framework. We use validation experiments to verify that we have successfully implemented the semantics of the serial versions of FEA and DFEA.
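    The dissertation's actual merge rules are not given in the abstract. The sketch below only illustrates, under assumed details, the kind of merge-with-conflict-resolution it describes: each factor (a group of variable indices) from a candidate solution is written to the blackboard only if doing so does not worsen the objective, so an apparent improvement cannot hitchhike in on the back of unrelated variables.

```python
import numpy as np

def pareto_merge(blackboard, candidate, factors, f):
    """Hedged sketch of a blackboard merge: accept candidate values factor by
    factor, and only when the change does not worsen the objective f."""
    current = blackboard.copy()
    for idx in factors:
        trial = current.copy()
        trial[idx] = candidate[idx]          # propose only this factor's values
        if f(trial) <= f(current):           # non-worsening changes are merged
            current = trial
    return current

# Example with two factors on a 4-variable sphere objective:
# factor [0, 1] is rejected (it would worsen f), factor [2, 3] is accepted.
f = lambda x: float(np.sum(np.asarray(x) ** 2))
merged = pareto_merge(np.array([1.0, 1.0, 1.0, 1.0]),
                      np.array([0.0, 2.0, 0.0, 0.0]),
                      factors=[[0, 1], [2, 3]], f=f)
```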

    SLIS Student Research Journal, Vol.7, Iss.1


    Adaptive Heterogeneous Multi-Population Cultural Algorithm

    Optimization problems are a class of problems where the goal is to make a system as effective as possible. The goal of this research area is to design algorithms that solve optimization problems effectively and efficiently. Being effective means that the algorithm should be able to find the optimal solution (or near-optimal solutions), while efficiency refers to the computational effort required by the algorithm to find an optimal solution. In other words, an optimization algorithm should be able to find the optimal solution in an acceptable time. The aim of this dissertation is therefore to develop a new algorithm that is both effective and efficient. Various kinds of algorithms have been proposed to deal with optimization problems. Evolutionary Algorithms (EAs) are a subset of population-based methods that have been successfully applied to solve optimization problems. This dissertation investigates evolutionary methods, and especially Cultural Algorithms (CAs). The results of this investigation reveal that there is room for improving the existing EAs. Consequently, a number of EAs are proposed to deal with different optimization problems. The proposed EAs offer better performance than the state-of-the-art methods. The main contribution of this dissertation is a new architecture for optimization algorithms called the Heterogeneous Multi-Population Cultural Algorithm (HMP-CA). The new architecture first incorporates a decomposition technique to divide the given problem into a number of sub-problems, and then assigns the sub-problems to different local CAs to be optimized separately in parallel. To evaluate the proposed architecture, it is applied to numerical optimization problems. The evaluation results reveal that HMP-CA is fully effective, in that it finds the optimal solution in every single run. Furthermore, HMP-CA outperforms the state-of-the-art methods by offering more efficient performance. The proposed HMP-CA is further improved by incorporating an adaptive decomposition technique. The improved version, called Adaptive HMP-CA (A-HMP-CA), is evaluated on large-scale global optimization problems. The results of this evaluation show that A-HMP-CA significantly outperforms the state-of-the-art methods in terms of both effectiveness and efficiency.
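    HMP-CA itself (and its belief-space machinery) is not detailed in the abstract. The sketch below only illustrates the multi-population decomposition idea it describes: the decision variables are split into sub-problems, each optimized separately against a shared best solution (a simple random local search stands in for a local CA), and improving sub-solutions are merged back into that shared solution.

```python
import numpy as np

def hmp_style_step(f, shared_best, partitions, rng=np.random.default_rng(0),
                   trials_per_subproblem=50, step=0.1):
    """Hedged sketch of the decomposition architecture: one local optimizer per
    index partition, varying only its own dimensions while the rest stay frozen
    at the shared best solution."""
    best = np.asarray(shared_best, dtype=float).copy()
    for idx in partitions:                      # one local "population" per sub-problem
        for _ in range(trials_per_subproblem):
            trial = best.copy()
            trial[idx] += rng.normal(0.0, step, size=len(idx))
            if f(trial) < f(best):              # keep only improving sub-solutions
                best = trial
    return best

# Example: two sub-problems over a 4-dimensional sphere function
f = lambda x: float(np.sum(x * x))
best = hmp_style_step(f, shared_best=[1.0, -1.0, 2.0, 0.5], partitions=[[0, 1], [2, 3]])
```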

    Real time tracking using nature-inspired algorithms

    This thesis investigates core difficulties in the tracking field of computer vision. The aim is to develop a suitable tuning-free optimisation strategy so that real-time tracking can be achieved. Population-based and multi-solution-based approaches are first applied to analyse convergence behaviour on evolutionary test cases, with the aim of identifying core misconceptions in the way the search characteristics of particles are defined in the literature. A general perception in the scientific community is that particle-based methods are not suitable for real-time applications. This thesis improves the convergence properties of particles through a novel scale-free correlation approach. By altering the fundamental definition of a particle and by avoiding the nostalgic operations, tracking is expedited to a rate of 250 FPS. There is a reasonable amount of similarity between the tracking landscapes and those generated by three-dimensional evolutionary test cases. Several experimental studies are conducted comparing the performance of the novel optimisation with that observed for the swarming methods. It is concluded that the modified particle behaviour outperforms the traditional approaches by large margins in almost every test scenario.
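    The thesis's scale-free correlation approach is not reproduced in the abstract. In standard PSO terminology the "nostalgic" term usually denotes the personal-best (cognitive) memory component, so, under that assumption only, a stripped-down update of the kind hinted at might look like the sketch below, where each particle reacts solely to the current best estimate of the target location.

```python
import numpy as np

def memoryless_step(x, v, gbest, w=0.7, c=1.5, rng=np.random.default_rng(0)):
    """Hedged sketch: a PSO-style update with the personal-best ("nostalgia")
    term removed; not the thesis's actual tracker."""
    r = rng.random(x.shape)
    v = w * v + c * r * (gbest - x)   # no cognitive/personal-best component
    return x + v, v
```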