464 research outputs found

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) improves collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently motivated hybridizing PSO with GA for performance enhancement. However, existing work superposes the two mechanisms in parallel, while research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework that organically hybridizes PSO with another optimization technique for "learning." This leads to a generalized "learning PSO" paradigm, *L-PSO. The paradigm consists of two cascading layers: the first generates exemplars, and the second updates particles as in a standard PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed. In particular, genetic operators generate the exemplars from which particles learn and, in turn, the particles' historical search information guides the evolution of the exemplars. By performing crossover, mutation, and selection on the particles' historical information, the constructed exemplars are both well diversified and of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
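The cascading structure described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the objective (a sphere function), the crossover/mutation probabilities, and all parameter values are assumptions chosen for the toy example. Each particle's exemplar is bred from its personal best and the global best (crossover), perturbed (mutation), and kept only if it improves (selection); the particle then learns from its exemplar via a PSO-style velocity update.

```python
import random

def sphere(x):  # toy objective for illustration (an assumption, not from the paper)
    return sum(xi * xi for xi in x)

def gl_pso(f, dim=10, n=20, iters=200, w=0.7, c=1.5, pm=0.1, seed=0):
    """Sketch of genetic-learning PSO: exemplars are bred by crossover,
    mutation, and selection from particles' historical best positions,
    then each particle learns from its exemplar."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    pfit = [f(x) for x in X]
    g = min(range(n), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    E = [x[:] for x in pbest]          # one exemplar per particle
    Efit = pfit[:]
    for _ in range(iters):
        for i in range(n):
            # crossover: mix personal best with global best, dimension-wise
            cand = [pbest[i][d] if rng.random() < 0.5 else gbest[d]
                    for d in range(dim)]
            # mutation: random reset of a few dimensions
            cand = [rng.uniform(-5, 5) if rng.random() < pm else cd
                    for cd in cand]
            cf = f(cand)
            # selection: the exemplar survives only if the child is worse
            if cf < Efit[i]:
                E[i], Efit[i] = cand, cf
            # PSO layer: the particle learns from its exemplar
            for d in range(dim):
                V[i][d] = w * V[i][d] + c * rng.random() * (E[i][d] - X[i][d])
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pfit[i]:
                pbest[i], pfit[i] = X[i][:], fx
                if fx < gfit:
                    gbest, gfit = X[i][:], fx
    return gbest, gfit

best, val = gl_pso(sphere)
print(val)  # far below the fitness of a random initial point
```

The two layers are deliberately decoupled: any exemplar-generation rule could replace the genetic one, which is exactly the generality the *L-PSO framework claims.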

    Comparative Analysis Multi-Robot Formation Control Modeling Using Fuzzy Logic Type 2 – Particle Swarm Optimization

    A multi-robot system consists of several interconnected robots that communicate and collaborate to accomplish a goal. The robots are physically similar: each has two driven wheels and one free wheel, and all move at the same speed. The main remaining problem is controlling the movement of the multi-robot formation while searching for a target, because the robots must form dynamic geometric shapes on the way to the target; their movement therefore requires a control system to position them as desired. The formations follow predetermined trajectories that remain relatively constant under varying speeds and accelerations, even during sudden stops, and the robots must nevertheless avoid obstacles and reach the target. This research applied a Fuzzy Logic Type 2 – Particle Swarm Optimization algorithm and compared it with Fuzzy Logic Type 2 – Modified Particle Swarm Optimization and Fuzzy Logic Type 2 – Dynamic Particle Swarm Optimization. Based on experiments carried out in each environment, Fuzzy Logic Type 2 – Modified Particle Swarm Optimization achieved better iteration counts, time, and resource usage, as well as smoother robot movement, than the other two variants.

    Performance Analysis of Genetic Algorithm with PSO for Data Clustering

    Data clustering is widely used in areas such as machine learning, data mining, pattern recognition, image processing, and bioinformatics. Clustering is the process of partitioning a given set of data into disjoint clusters. There are two basic clustering approaches: hierarchical and partitional. K-means is a partitional method, and it suffers from the difficulty of choosing good initial K elements. To overcome this problem, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) techniques have been applied. A Genetic Algorithm is an optimization technique based on the mechanics of natural selection and genetics. Particle Swarm Optimization is a search method whose mechanics are inspired by swarming behavior. The PSO algorithm is simple and can be implemented in a few lines of code, whereas GA struggles to refine a current solution but is good at reaching the global region. Although GA and PSO each have their own strengths, they have weaknesses too, so a hybrid approach (GA-PSO) that combines the advantages of both is proposed to obtain better performance. The hybrid method merges the standard velocity and position update rules of PSO with the selection, crossover, and mutation operators of GA. A comparative study is carried out by analyzing results such as the fitness value and elapsed time of GA-PSO against standard GA and PSO.
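A minimal 1-D sketch of such a hybrid may help fix ideas. This is an illustration under stated assumptions, not the paper's algorithm: each particle encodes a candidate set of k centroids, the fitness is the within-cluster sum of squares, and each PSO velocity step is followed by GA-style crossover with the global best plus random mutation; the toy data and all parameter values are invented.

```python
import random

def wcss(cents, pts):
    """Within-cluster sum of squares: the clustering fitness to minimize."""
    return sum(min((p - c) ** 2 for c in cents) for p in pts)

def ga_pso_cluster(pts, k=2, n=12, iters=120, w=0.6, c1=1.4, c2=1.4,
                   pm=0.2, seed=1):
    """Sketch of a GA-PSO hybrid for clustering: PSO update, then
    crossover/mutation/selection on each particle (a set of k centroids)."""
    rng = random.Random(seed)
    lo, hi = min(pts), max(pts)
    X = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(n)]
    V = [[0.0] * k for _ in range(n)]
    P = [x[:] for x in X]
    Pf = [wcss(x, pts) for x in X]
    g = min(range(n), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]
    for _ in range(iters):
        for i in range(n):
            # PSO layer: standard velocity and position update
            for d in range(k):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            # GA layer: uniform crossover with the global best, then mutation
            child = [X[i][d] if rng.random() < 0.5 else G[d] for d in range(k)]
            child = [rng.uniform(lo, hi) if rng.random() < pm else cd
                     for cd in child]
            # selection: the child replaces the particle if it clusters better
            if wcss(child, pts) < wcss(X[i], pts):
                X[i] = child
            fx = wcss(X[i], pts)
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
            if fx < Gf:
                G, Gf = X[i][:], fx
    return sorted(G), Gf

pts = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]   # two obvious 1-D clusters (toy data)
cents, fit = ga_pso_cluster(pts)
print(cents)  # centroids near the two cluster means
```

The mutation step is what addresses the K-means initialization weakness the abstract mentions: a centroid stuck in the wrong cluster can still jump out, and selection keeps the jump only if it lowers the clustering cost.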

    A novel statistical cerebrovascular segmentation algorithm with particle swarm optimization

    We present an automatic statistical intensity-based approach to extract the 3D cerebrovascular structure from time-of-flight (TOF) magnetic resonance angiography (MRA) data. We use a finite mixture model (FMM) to fit the intensity histogram of the brain image sequence, where the cerebral vascular structure is modeled by a Gaussian distribution and the other, lower-intensity tissues are modeled by Gaussian and Rayleigh distributions. To estimate the parameters of the FMM, we propose an improved particle swarm optimization (PSO) algorithm that adds a disturbance term to the velocity-update formula of PSO to ensure convergence. We also use a ring-shaped topology for the particles' neighborhood to improve the performance of the algorithm. Computational results on 34 test data sets show that the proposed method provides accurate segmentation, especially for blood vessels of small size.
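The two PSO modifications the abstract names, a ring (lbest) neighborhood and a disturbance term in the velocity update, can be sketched generically. The exact form of the paper's disturbance term is not given here, so the small Gaussian perturbation below is a hypothetical stand-in, and the Rastrigin objective is a toy substitute for the mixture-model histogram fit.

```python
import math
import random

def rastrigin(x):  # toy multimodal objective (an assumption, not the FMM fit)
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def ring_pso(f, dim=5, n=20, iters=300, w=0.72, c1=1.49, c2=1.49,
             eps=1e-3, seed=3):
    """Sketch of lbest PSO on a ring topology with a small random
    disturbance term added to the velocity update."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.12, 5.12) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    Pf = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            # ring neighborhood: each particle sees itself and two neighbors
            nbrs = [(i - 1) % n, i, (i + 1) % n]
            b = min(nbrs, key=lambda j: Pf[j])
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (P[b][d] - X[i][d])
                           + eps * rng.gauss(0, 1))  # disturbance term
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
    j = min(range(n), key=lambda i: Pf[i])
    return P[j], Pf[j]

best, val = ring_pso(rastrigin)
print(val)
```

The ring topology slows the spread of the current best through the swarm, which preserves diversity on multimodal landscapes, while the disturbance term keeps velocities from collapsing to zero prematurely.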

    An Algorithmic Framework for Multiobjective Optimization

    Multiobjective (MO) optimization is an emerging field encountered in many domains. Various metaheuristic techniques, such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO), have been used in conjunction with scalarization techniques, such as the weighted-sum approach and the normal-boundary intersection (NBI) method, to solve MO problems. Nevertheless, many challenges arise when dealing with problems with multiple objectives (especially more than two), and hybrid algorithms incur extensive computational overhead. This paper addresses these issues by proposing an alternative framework that exploits algorithmic concepts related to the problem structure to generate efficient and effective algorithms. The framework generates new high-performance algorithms with minimal computational overhead for MO optimization.
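The weighted-sum scalarization mentioned above is the simplest way to turn an MO problem into a family of single-objective ones: sweep a weight w over [0, 1] and minimize w*f1 + (1-w)*f2 for each value. The bi-objective problem below (f1 = x², f2 = (x-2)²) and the grid of candidates are an invented toy example, not from the paper.

```python
def weighted_sum_front(f1, f2, candidates, steps=11):
    """Approximate a bi-objective Pareto front by weighted-sum scalarization:
    for each weight w, keep the candidate minimizing w*f1 + (1-w)*f2."""
    front = []
    for s in range(steps):
        w = s / (steps - 1)
        best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        pt = (f1(best), f2(best))
        if pt not in front:
            front.append(pt)
    return front

# toy problem: minimize f1(x) = x^2 and f2(x) = (x - 2)^2 over a 1-D grid
cands = [i / 50.0 for i in range(-100, 201)]
front = weighted_sum_front(lambda x: x * x, lambda x: (x - 2) ** 2, cands)
print(front)  # trade-off points from (4.0, 0.0) to (0.0, 4.0)
```

Its well-known limitation, which motivates alternatives such as NBI, is that a linear weight sweep can only reach points on the convex hull of the front and tends to distribute them unevenly.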

    A population-based optimization method using Newton fractal

    A metaheuristic is a general procedure for reaching a group consensus from the decision making of individual agents, going beyond a problem-specific heuristic. Over the last decade there have been many attempts to develop metaheuristic methods based on swarm intelligence to solve global optimization problems, such as particle swarm optimization, ant colony optimization, and the firefly algorithm. These methods are mostly stochastic and independent of the specific problem. Since swarm-intelligence metaheuristics require no central coordination (or minimal, if any), they are especially well suited to problems with distributed or parallel structures. Each individual follows a few simple rules, keeping the search cost at a decent level. Despite this simplicity, the methods often yield a fast approximation with good precision compared to conventional methods. Exploration and exploitation are two important features to consider when seeking a global optimum in a high-dimensional domain, especially when no prior information is given. Exploration investigates unknown regions of the space, without using historical information, to find undiscovered optima; exploitation traces the neighborhood of the current best solution to improve it using historical information. Because these two concepts lie at opposite ends of a spectrum, their tradeoff significantly affects performance under a limited search budget. In this work, we develop a chaos-based metaheuristic method, "Newton Particle Optimization" (NPO), to solve global optimization problems. The method is based on the Newton method, a well-established mathematical root-finding procedure, and actively exploits its chaotic nature to place a proper balance between exploration and exploitation. While most current population-based methods adopt stochastic effects to maximize exploration, they often suffer from weak exploitation.
    In addition, stochastic methods generally show poor reproducibility and premature convergence. It has been argued that an alternative approach using chaos may mitigate these disadvantages. The unpredictability of chaos corresponds to the randomness of stochastic methods, but chaos-based methods are deterministic and can therefore reproduce results easily with less memory. It has been shown that chaos avoids local optima better than stochastic methods and alleviates the premature-convergence issue. The Newton method is deterministic but shows chaotic movements near the roots; it is this complexity that enables the particles to search the space for global optimization. We initialize the particles' positions randomly and choose "leading particles" to attract the other particles. We construct a polynomial whose roots are the leading particles, called "a guiding function," and update the positions of the particles by applying the Newton method to it. Since the roots are fixed points of the Newton update, the leading particles survive the update. For more diverse particle movements, we use a modified Newton method with a coefficient m that varies the movement of each particle; efficiency in local search is closely related to the value of m, which determines the convergence rate of the Newton method. We can control the balance between exploration and exploitation through the choice of leading particles. Interestingly, selecting only excellent particles as leaders does not always yield the best result: including mediocre particles among the roots of the guiding function maintains positional diversity, and although this diversity seems inefficient at first, those particles ultimately contribute to global exploration. We study the conditions for the convergence of NPO, which enjoys the well-established analysis of the Newton method. This contrasts with other "nature-inspired" algorithms, which have often been criticized for lacking a rigorous mathematical foundation. We compare the results of NPO with those of two popular metaheuristic methods, particle swarm optimization (PSO) and the firefly optimizer (FO). By the no-free-lunch theorem, no algorithm is superior on all problems, which is why researchers seek global optimizers adaptable to specific problems. NPO shows good performance on the CEC 2013 competition test problems compared to PSO and FO.
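The core update described above can be sketched concretely. The 1-D toy swarm, the leader positions, and the choice m = 1 below are assumptions for illustration; the abstract's modified Newton method varies m per particle. The guiding function is p(x) = Π_j (x - r_j) with the leading particles as roots, and the Newton step uses the logarithmic-derivative identity p'(x)/p(x) = Σ_j 1/(x - r_j).

```python
def newton_step(x, roots, m=1.0, eps=1e-12):
    """One (modified) Newton step x -> x - m * p(x)/p'(x) on the guiding
    function p(x) = prod_j (x - r_j), via p'(x)/p(x) = sum_j 1/(x - r_j)."""
    s = 0.0
    for r in roots:
        d = x - r
        if abs(d) < eps:   # already at a root: leading particles survive
            return x
        s += 1.0 / d
    if s == 0.0:           # p'(x) = 0 (e.g. midpoint of two roots): stay put
        return x
    return x - m / s

# toy 1-D swarm: two leading particles act as roots of the guiding function
leaders = [0.0, 3.0]
swarm = [-2.0, 1.0, 2.2, 5.0]
for _ in range(20):
    swarm = [newton_step(x, leaders) for x in swarm]
print(swarm)  # every follower has converged onto one of the leaders
```

Note that which leader a follower converges to depends sensitively on its starting point (the basins of Newton's method have fractal boundaries), which is exactly the chaotic mixing the method exploits for exploration.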