Human Head Tracking Based on Particle Swarm Optimization and Genetic Algorithm
This paper compares particle swarm optimization and a genetic algorithm for visual perception by a partner robot. The robot requires visual perception to interact with human beings, and in particular must extract moving objects from its camera input during interaction. To reduce computational cost and time consumption, we use differential extraction between successive frames. We propose human head tracking for a partner robot using particle swarm optimization and a genetic algorithm. Experiments with two different maximum iteration numbers show that particle swarm optimization is more effective than the genetic algorithm in solving this problem.
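The abstract describes the pipeline only at a high level. The snippet below is a minimal, hypothetical Python sketch of the differential-extraction step and of a fitness function against which a PSO particle or GA individual (a candidate window position) could be scored; the window size, the use of summed frame-difference energy as fitness, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def differential_extraction(prev_frame, frame):
    """Frame differencing: keep only pixels that changed between frames,
    which cheaply isolates moving objects such as a person's head."""
    return np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))

def head_fitness(diff_img, x, y, win_w=24, win_h=32):
    """Score a candidate head position (x, y) by the motion energy inside a
    head-sized window; PSO particles or GA individuals encode (x, y)."""
    img_h, img_w = diff_img.shape
    x0 = int(np.clip(x, 0, img_w - win_w))
    y0 = int(np.clip(y, 0, img_h - win_h))
    return float(diff_img[y0:y0 + win_h, x0:x0 + win_w].sum())
```

Either optimizer would then iterate candidate (x, y) positions to maximize this score, with the iteration budget playing the role of the maximum iteration number varied in the experiments.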
Novel Artificial Human Optimization Field Algorithms - The Beginning
New Artificial Human Optimization (AHO) Field Algorithms can be created from scratch or by adding the concept of Artificial Humans to other existing optimization algorithms. Particle Swarm Optimization (PSO) has been very popular for solving complex optimization problems due to its simplicity. In this work, new Artificial Human Optimization Field Algorithms are created by modifying existing PSO algorithms with AHO Field concepts. These hybrid PSO algorithms belong to the PSO field as well as the AHO field. There are hybrid PSO research articles based on human behavior, human cognition, human thinking, etc., but there are no hybrid PSO articles based on concepts such as human disease, human kindness and human relaxation. This paper proposes new AHO Field algorithms based on these research gaps. Some existing hybrid PSO algorithms are given a new name in this work so that it will be easy for future AHO researchers to find these novel Artificial Human Optimization Field Algorithms. A total of six Artificial Human Optimization Field algorithms, titled "Human Safety Particle Swarm Optimization (HuSaPSO)", "Human Kindness Particle Swarm Optimization (HKPSO)", "Human Relaxation Particle Swarm Optimization (HRPSO)", "Multiple Strategy Human Particle Swarm Optimization (MSHPSO)", "Human Thinking Particle Swarm Optimization (HTPSO)" and "Human Disease Particle Swarm Optimization (HDPSO)", are tested by applying them to the Ackley, Beale, Bohachevsky, Booth and Three-Hump Camel benchmark functions. The results obtained are compared with the standard PSO algorithm.
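The abstract names the test functions but not the variants' update rules, so only the benchmarks are sketched here. Below is a minimal Python sketch of two of the named benchmark functions (Ackley and Booth) in the two-dimensional form commonly used for such comparisons; choosing these two and the 2-D setting is an assumption made for illustration.

```python
import numpy as np

def ackley(p):
    """2-D Ackley function; highly multimodal, global minimum f(0, 0) = 0."""
    x = np.asarray(p, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def booth(p):
    """Booth function; unimodal bowl, global minimum f(1, 3) = 0."""
    x, y = p
    return (x + 2.0 * y - 7.0) ** 2 + (2.0 * x + y - 5.0) ** 2
```

Each hybrid algorithm and the baseline PSO would then be run on such functions and compared by the best objective value reached.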
Triggered memory-based swarm optimization in dynamic environments
In recent years, dynamic optimization problems have attracted increasing attention from the evolutionary computation community, since many real-world optimization problems are time-varying. In this paper, a triggered memory scheme is introduced into particle swarm optimization to deal with dynamic environments. The triggered memory scheme enhances the traditional memory scheme with a triggered memory generator. An experimental study on a benchmark dynamic problem shows that the triggered memory-based particle swarm optimization algorithm has stronger robustness and adaptability for dynamic optimization problems than traditional particle swarm optimization algorithms, both with and without the traditional memory scheme.
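The abstract does not spell out the memory mechanics, so the following Python sketch only illustrates the general pattern of a memory scheme for dynamic PSO: a sentry solution is re-evaluated to detect an environment change, stored memory points are then re-evaluated, and the best of them is reinjected into the swarm. The change-detection test, the memory capacity, and the reinjection rule are all assumptions for illustration, not the paper's triggered memory generator.

```python
import numpy as np

class MemoryScheme:
    """Illustrative memory for dynamic PSO (not the paper's exact scheme)."""

    def __init__(self, f, sentry, capacity=5):
        self.f = f                      # current objective function
        self.sentry = np.array(sentry)  # fixed point used to detect changes
        self.sentry_val = f(self.sentry)
        self.capacity = capacity
        self.points = []                # stored good solutions

    def environment_changed(self):
        """Re-evaluate the sentry; a changed value signals a new environment."""
        new_val = self.f(self.sentry)
        changed = not np.isclose(new_val, self.sentry_val)
        self.sentry_val = new_val
        return changed

    def store(self, x):
        """Store a good solution, keeping only the best few under the
        current objective (a simplification of the triggered generator)."""
        self.points.append(np.array(x))
        self.points.sort(key=self.f)
        self.points = self.points[:self.capacity]

    def retrieve_best(self):
        """Re-evaluate memory after a change and return the best stored point,
        to be reinjected in place of the worst particle in the swarm."""
        return min(self.points, key=self.f) if self.points else None
```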
Discrete Particle Swarm Optimization for the minimum labelling Steiner tree problem
Particle Swarm Optimization is an evolutionary method inspired by the social behaviour of individuals inside swarms in nature. Solutions of the problem are modelled as members of the swarm, which fly through the solution space. The evolution is obtained from the continuous movement of the particles that constitute the swarm, subjected to the effect of inertia and the attraction of the members who lead the swarm. This work focuses on a recent Discrete Particle Swarm Optimization method for combinatorial optimization, called Jumping Particle Swarm Optimization. Its effectiveness is illustrated on the minimum labelling Steiner tree problem: given an undirected labelled connected graph, the aim is to find a spanning tree covering a given subset of nodes whose edges have the smallest number of distinct labels.
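As a rough illustration of the discrete "jumping" move, the Python sketch below treats a candidate solution as a set of labels and lets a particle either make a random jump or copy part of an attractor (its own best, its neighbourhood best, or the global best). The probabilities, the set encoding, and the helper names are assumptions for illustration; feasibility checking and the labelling Steiner tree objective itself are omitted.

```python
import random

def jump(solution, attractor, intensity=0.3):
    """Move a label set toward an attractor by probabilistically copying
    labels it contains and dropping labels it lacks (illustrative jump)."""
    new = set(solution)
    for lab in attractor - new:
        if random.random() < intensity:
            new.add(lab)
    for lab in new - attractor:
        if random.random() < intensity:
            new.discard(lab)
    return new

def jpso_step(particle, own_best, local_best, global_best, all_labels):
    """One Jumping-PSO style move: pick a random jump or one of three
    attractors; the selection probabilities are illustrative only."""
    r = random.random()
    if r < 0.25:
        return jump(particle, {random.choice(sorted(all_labels))}, 0.5)
    if r < 0.50:
        return jump(particle, own_best)
    if r < 0.75:
        return jump(particle, local_best)
    return jump(particle, global_best)
```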
Global Optimization by Particle Swarm Method: A Fortran Program
Programs that work very well in optimizing convex functions often perform poorly when the problem has multiple local minima or maxima; they are often caught or trapped in these local optima. Several methods have been developed to escape from such local optima, and the Particle Swarm method of global optimization is one such method. A swarm of birds or insects, or a school of fish, searches for food, protection, etc. in a very typical manner. If one member of the swarm sees a desirable path, the rest of the swarm will follow quickly. Every member of the swarm searches for the best in its locality and learns from its own experience. Additionally, each member learns from the others, typically from the best performer among them. Even human beings show a tendency to learn from their own experience, their immediate neighbours and the ideal performers. The Particle Swarm method of optimization mimics this behaviour. Every individual of the swarm is considered as a particle in a multidimensional space that has a position and a velocity. These particles fly through hyperspace and remember the best position they have seen. Members of a swarm communicate good positions to each other and adjust their own position and velocity based on these good positions. The Particle Swarm method of optimization testifies to the success of bounded rationality and decentralized decision-making in reaching the global optimum. It has been used successfully to optimize extremely difficult multimodal functions. Here we give a FORTRAN program to find the global optimum by the Repulsive Particle Swarm method. The program has been tested on over 90 benchmark functions of varied dimensions, complexities and difficulty levels.
Keywords: Bounded rationality; decentralized decision-making; Jacobian; elliptic functions; Gielis super-formula; supershapes; Repulsive Particle Swarm method of global optimization; nonlinear programming; multiple sub-optima; global and local optima; fit; data; empirical; estimation; parameters; curve fitting
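The textbook velocity and position update the abstract describes can be written compactly. The Python sketch below (rather than the authors' Fortran) implements that canonical global-best update; the inertia and acceleration coefficients are illustrative defaults, and the repulsive variant used by the program adds a repulsion term between particles that is not reproduced here.

```python
import numpy as np

def pso(f, dim, lo, hi, n_particles=40, iters=500, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Canonical global-best PSO: each particle keeps a position, a velocity
    and the best point it has seen, and is pulled toward both its own best
    and the swarm's best. Coefficient values are illustrative only."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()                                   # personal best positions
    pbest_f = np.array([f(p) for p in x])              # personal best values
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]       # swarm best so far
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest_f.argmin()
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

# Example: minimize the multimodal Rastrigin function in 5 dimensions.
rastrigin = lambda p: 10 * p.size + np.sum(p ** 2 - 10 * np.cos(2 * np.pi * p))
print(pso(rastrigin, dim=5, lo=-5.12, hi=5.12))
```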
Fast multi-swarm optimization for dynamic optimization problems
In the real world, many applications are non-stationary optimization problems. This requires optimization algorithms not only to find the global optimal solution but also to track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism that tracks multiple peaks by preventing overcrowding at any single peak, and uses a fast particle swarm optimization algorithm as a local search method to find near-optimal solutions in a promising local region of the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.
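The anti-crowding mechanism is described only at a high level, so the following Python sketch shows one common way such a mechanism is realized in multi-swarm methods: when the best positions of two sub-swarms come within an exclusion radius, the worse sub-swarm is reinitialized so that the swarms spread over different peaks. The radius, the swarm representation and the reinitialization rule are illustrative assumptions, not necessarily the paper's mechanism.

```python
import numpy as np

def prevent_overcrowding(swarms, exclusion_radius, lo, hi, rng):
    """If two sub-swarms converge on the same peak (their best positions are
    closer than the exclusion radius), randomly reinitialize the worse one.
    Each swarm is a dict with 'pos', 'vel', 'best_x', 'best_f' (illustrative)."""
    for i in range(len(swarms)):
        for j in range(i + 1, len(swarms)):
            a, b = swarms[i], swarms[j]
            if np.linalg.norm(a["best_x"] - b["best_x"]) < exclusion_radius:
                worse = a if a["best_f"] > b["best_f"] else b
                shape = worse["pos"].shape
                worse["pos"] = rng.uniform(lo, hi, shape)   # scatter particles
                worse["vel"] = np.zeros(shape)
                worse["best_x"] = worse["pos"][0].copy()    # forget stale best
                worse["best_f"] = np.inf
    return swarms
```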
