
    Novel Artificial Human Optimization Field Algorithms - The Beginning

    New Artificial Human Optimization (AHO) Field Algorithms can be created from scratch or by adding the concept of Artificial Humans into other existing Optimization Algorithms. Particle Swarm Optimization (PSO) has been very popular for solving complex optimization problems due to its simplicity. In this work, new Artificial Human Optimization Field Algorithms are created by modifying existing PSO algorithms with AHO Field concepts. These Hybrid PSO Algorithms come under the PSO Field as well as the AHO Field. There are Hybrid PSO research articles based on Human Behavior, Human Cognition, Human Thinking and so on, but there are no Hybrid PSO articles based on concepts such as Human Disease, Human Kindness and Human Relaxation. This paper proposes new AHO Field algorithms that address these research gaps. Some existing Hybrid PSO algorithms are given new names in this work so that future AHO researchers can easily find these novel Artificial Human Optimization Field Algorithms. A total of six Artificial Human Optimization Field algorithms, titled "Human Safety Particle Swarm Optimization (HuSaPSO)", "Human Kindness Particle Swarm Optimization (HKPSO)", "Human Relaxation Particle Swarm Optimization (HRPSO)", "Multiple Strategy Human Particle Swarm Optimization (MSHPSO)", "Human Thinking Particle Swarm Optimization (HTPSO)" and "Human Disease Particle Swarm Optimization (HDPSO)", are tested on the Ackley, Beale, Bohachevsky, Booth and Three-Hump Camel benchmark functions. The results obtained are compared with those of the PSO algorithm.
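
    All six hybrids are modifications of the canonical global-best PSO velocity and position update. As a point of reference only, the sketch below shows that baseline update applied to the Ackley benchmark mentioned in the abstract; the parameter values and function names are illustrative and are not taken from the paper.

    import numpy as np

    def ackley(x):
        # n-D Ackley benchmark; global minimum 0 at the origin.
        a, b, c = 20.0, 0.2, 2.0 * np.pi
        d = x.shape[-1]
        s1 = np.sum(x ** 2, axis=-1)
        s2 = np.sum(np.cos(c * x), axis=-1)
        return -a * np.exp(-b * np.sqrt(s1 / d)) - np.exp(s2 / d) + a + np.e

    def pso(f, dim=2, n=30, iters=200, w=0.729, c1=1.49445, c2=1.49445, bound=5.0):
        rng = np.random.default_rng(0)
        x = rng.uniform(-bound, bound, (n, dim))      # positions
        v = np.zeros((n, dim))                        # velocities
        pbest, pbest_val = x.copy(), f(x)             # personal bests
        g = pbest[np.argmin(pbest_val)].copy()        # global best
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, -bound, bound)
            val = f(x)
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, f(g)

    print(pso(ackley))   # approaches (0, 0) with a value near 0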

    Human Head Tracking Based on Particle Swarm Optimization and Genetic Algorithm

    This paper compares particle swarm optimization and a genetic algorithm for perception by a partner robot. The robot requires visual perception to interact with human beings, and in particular must extract moving objects from its visual input during interaction. To reduce computational cost and time consumption, we used differential extraction. We propose human head tracking for a partner robot using particle swarm optimization and a genetic algorithm. Experiments involving two maximum iteration numbers show that particle swarm optimization is more effective in solving this problem than the genetic algorithm.
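
    The differential extraction mentioned above is plain frame differencing; a candidate head region can then be scored by the moving pixels it encloses. The sketch below is only an assumption about how such a setup might look: the function names, the circular head model and the threshold are hypothetical, not taken from the paper.

    import numpy as np

    def moving_object_mask(prev_gray, curr_gray, threshold=25):
        # Differential extraction: pixels whose grey level changed by more
        # than `threshold` between consecutive frames are marked as moving.
        diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
        return diff > threshold

    def head_fitness(mask, cx, cy, r):
        # Hypothetical fitness of a candidate head (cx, cy, r): the fraction
        # of moving pixels inside the circle; PSO or the GA would maximise it.
        h, w = mask.shape
        ys, xs = np.ogrid[:h, :w]
        inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
        return mask[inside].mean() if inside.any() else 0.0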

    A review of velocity-type PSO variants

    This paper presents a review of particle swarm optimization variants belonging to the velocity-type class. The original particle swarm optimization algorithm was developed as an unconstrained optimization technique and lacks a mechanism for handling constrained optimization problems. This inapplicability to constrained optimization problems is addressed by the dynamic-objective constraint-handling method, which was originally developed for two variants of the basic particle swarm optimization, namely restricted velocity particle swarm optimization and self-adaptive velocity particle swarm optimization. Within the velocity-type class, three further variants are also reviewed: (1) vertical particle swarm optimization; (2) velocity limited particle swarm optimization; and (3) particle swarm optimization with escape velocity. These velocity-type particle swarm optimization variants all have in common a velocity parameter which determines the direction and movement of the particles.
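
    What the velocity-type variants share is explicit control of the velocity term itself. A minimal sketch of component-wise velocity limiting is shown below, with an assumed v_max expressed as a fraction of the search range; the specific rules of each reviewed variant differ from this.

    import numpy as np

    def clamp_velocity(v, v_max):
        # Component-wise velocity limiting: every velocity component is kept
        # inside [-v_max, +v_max] so a particle cannot overshoot arbitrarily
        # far in a single step.
        return np.clip(v, -v_max, v_max)

    # Illustration with an assumed v_max of 20% of a [-5, 5] search range.
    v = np.array([3.7, -12.0, 0.4])
    print(clamp_velocity(v, v_max=0.2 * 10.0))   # -> [ 2.  -2.   0.4]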

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as convergence; the strategy acts on the globally best particle to help it jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it does not incur additional design or implementation complexity.
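
    A rough sketch of the two APSO ingredients described above is given below: an evolutionary factor computed from the population distribution, a sigmoid mapping of that factor to the inertia weight, and a Gaussian elitist-learning perturbation of the global best. The constants and trigger rules are simplified stand-ins and should be checked against the paper.

    import numpy as np

    def evolutionary_factor(positions, best_idx):
        # Mean distance from each particle to all others; the factor compares
        # the globally best particle's mean distance with the extremes.
        d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
        mean_d = d.sum(axis=1) / (len(positions) - 1)
        dg, dmin, dmax = mean_d[best_idx], mean_d.min(), mean_d.max()
        return (dg - dmin) / (dmax - dmin + 1e-12)

    def adaptive_inertia(f):
        # Sigmoid mapping of the evolutionary factor to the inertia weight:
        # a dispersed swarm (large f) keeps a large w, a converging one a small w.
        return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))

    def elitist_learning(gbest, lo, hi, progress, rng):
        # Perturb one randomly chosen dimension of the global best with
        # Gaussian noise whose scale shrinks as the run progresses.
        sigma = 1.0 - 0.9 * progress          # e.g. 1.0 -> 0.1
        j = rng.integers(len(gbest))
        trial = gbest.copy()
        trial[j] += (hi - lo) * rng.normal(0.0, sigma)
        return np.clip(trial, lo, hi)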

    Discrete Particle Swarm Optimization for the minimum labelling Steiner tree problem

    Particle Swarm Optimization is an evolutionary method inspired by the social behaviour of individuals within swarms in nature. Solutions of the problem are modelled as members of the swarm which fly through the solution space. The evolution is obtained from the continuous movement of the particles that constitute the swarm, subject to the effect of inertia and the attraction of the members that lead the swarm. This work focuses on a recent Discrete Particle Swarm Optimization for combinatorial optimization, called Jumping Particle Swarm Optimization. Its effectiveness is illustrated on the minimum labelling Steiner tree problem: given an undirected labelled connected graph, the aim is to find a spanning tree covering a given subset of nodes whose edges have the smallest number of distinct labels.
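
    In jumping particle swarm optimization the continuous velocity is replaced by probabilistic jumps towards attractors. The sketch below encodes a solution of the minimum labelling Steiner tree problem as a set of labels and performs one jump; the encoding, the probabilities and the absence of a feasibility check are simplifying assumptions, not the paper's exact scheme.

    import random

    def jump(solution, pbest, gbest, all_labels, p_random=0.25, p_pbest=0.375):
        # One jump: the particle adds a label chosen either at random, from
        # its own best solution, or from the swarm's best solution.
        r = random.random()
        new = set(solution)
        if r < p_random:
            candidates = list(all_labels - new) or list(all_labels)
        elif r < p_random + p_pbest:
            candidates = list(pbest)
        else:
            candidates = list(gbest)
        new.add(random.choice(candidates))
        return new

    # Labels are integers; whether the chosen labels' edges yield a tree that
    # covers the basic nodes would be checked by a separate feasibility step.
    print(jump({1, 3}, pbest={1, 2}, gbest={2, 4}, all_labels={1, 2, 3, 4, 5}))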

    Triggered memory-based swarm optimization in dynamic environments

    In recent years, there has been increasing concern in the evolutionary computation community with dynamic optimization problems, since many real-world optimization problems are time-varying. In this paper, a triggered memory scheme is introduced into particle swarm optimization to deal with dynamic environments. The triggered memory scheme enhances the traditional memory scheme with a triggered memory generator. An experimental study on a benchmark dynamic problem shows that the triggered memory-based particle swarm optimization algorithm has stronger robustness and adaptability for dynamic optimization problems than traditional particle swarm optimization algorithms, both with and without the traditional memory scheme.
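
    The sketch below gives one possible shape for such a memory: a small archive of good solutions whose stored fitness doubles as a change detector, with the archived points re-injected into the swarm when a change fires. The trigger rule and archive size are simplified assumptions, not the paper's exact triggered memory generator.

    import numpy as np

    class TriggeredMemory:
        def __init__(self, size=5):
            self.size = size
            self.points = []                      # (position, fitness) pairs

        def store(self, position, fitness):
            # Keep the `size` best memorised solutions (minimisation).
            self.points.append((position.copy(), fitness))
            self.points.sort(key=lambda p: p[1])
            self.points = self.points[:self.size]

        def environment_changed(self, objective):
            # Re-evaluating a stored point reveals whether the landscape moved.
            if not self.points:
                return False
            pos, old_fit = self.points[0]
            return abs(objective(pos) - old_fit) > 1e-9

        def inject(self, positions, fitness, objective):
            # Replace the worst particles with the memorised solutions.
            order = np.argsort(fitness)[::-1]
            for (pos, _), idx in zip(self.points, order):
                positions[idx] = pos
                fitness[idx] = objective(pos)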

    Parameter selection and performance comparison of particle swarm optimization in sensor networks localization

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' limited memory, computational capability, and energy, particle swarm optimization has been widely applied to localization in wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen carefully to achieve the best performance. However, there is little guidance on how to choose these variants and parameters, and no comprehensive performance comparison among particle swarm optimization algorithms has been made. The contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Second, it presents parameter selection for nine particle swarm optimization variants and six types of swarm topologies through extensive simulations. Third, it comprehensively compares the performance of these algorithms. The results show that particle swarm optimization with a constriction coefficient using a ring topology outperforms the other variants and swarm topologies, and that it performs better than the second-order cone programming algorithm.
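
    The best-performing configuration reported above combines the Clerc-Kennedy constriction factor with a local-best ring neighbourhood. A minimal sketch of those two pieces follows; the localisation objective itself (for example, squared error between estimated and measured anchor distances) is omitted and would be supplied separately.

    import numpy as np

    def constriction(c1=2.05, c2=2.05):
        # Clerc-Kennedy constriction coefficient; with c1 + c2 = 4.1 it is ~0.7298.
        phi = c1 + c2
        return 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))

    def ring_best(pbest, pbest_val):
        # lbest under a ring topology: each particle looks only at itself and
        # its two immediate neighbours (indices wrap around).
        n = len(pbest_val)
        idx = np.arange(n)
        neigh = np.stack([(idx - 1) % n, idx, (idx + 1) % n], axis=1)
        winner = neigh[np.arange(n), np.argmin(pbest_val[neigh], axis=1)]
        return pbest[winner]

    # One velocity update (x, v, pbest are (n, dim) arrays, r1/r2 uniform in [0, 1]):
    #   chi = constriction()
    #   l = ring_best(pbest, pbest_val)
    #   v = chi * (v + 2.05 * r1 * (pbest - x) + 2.05 * r2 * (l - x))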