Novel Artificial Human Optimization Field Algorithms - The Beginning
New Artificial Human Optimization (AHO) Field Algorithms can be created from
scratch or by adding the concept of Artificial Humans into other existing
Optimization Algorithms. Particle Swarm Optimization (PSO) has been very
popular for solving complex optimization problems due to its simplicity. In
this work, new Artificial Human Optimization Field Algorithms are created by
modifying existing PSO algorithms with AHO Field concepts. These Hybrid PSO
algorithms come under the PSO Field as well as the AHO Field. Hybrid PSO
research articles exist based on Human Behavior, Human Cognition, Human
Thinking, and similar concepts, but none based on concepts like Human
Disease, Human Kindness, and Human Relaxation. This paper proposes new AHO Field
algorithms based on these research gaps. Some existing Hybrid PSO algorithms
are given a new name in this work so that it will be easy for future AHO
researchers to find these novel Artificial Human Optimization Field Algorithms.
A total of 6 Artificial Human Optimization Field algorithms titled "Human
Safety Particle Swarm Optimization (HuSaPSO)", "Human Kindness Particle Swarm
Optimization (HKPSO)", "Human Relaxation Particle Swarm Optimization (HRPSO)",
"Multiple Strategy Human Particle Swarm Optimization (MSHPSO)", "Human Thinking
Particle Swarm Optimization (HTPSO)" and "Human Disease Particle Swarm
Optimization (HDPSO)" are tested by applying these novel algorithms on Ackley,
Beale, Bohachevsky, Booth and Three-Hump Camel Benchmark Functions. Results
obtained are compared with the standard PSO algorithm. Comment: 25 pages, 41 figures
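As context for the hybrids above, the canonical PSO velocity and position update that they modify can be sketched in Python. The Booth benchmark is one of the test functions named above, but the swarm size, inertia, and acceleration coefficients here are illustrative assumptions, not the paper's settings:

```python
import random

def booth(x, y):
    # Booth benchmark: global minimum 0 at (1, 3)
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

def pso(n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [booth(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # velocity = inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = booth(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The AHO hybrids discussed above replace or augment the cognitive and social terms of this update with concepts modelled on human traits.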
Human Head Tracking Based on Particle Swarm Optimization and Genetic Algorithm
This paper compares particle swarm optimization and a genetic algorithm for perception by a partner robot. The robot requires visual perception to interact with human beings, and must extract moving objects from its visual input during that interaction. To reduce computational cost and time consumption, we used differential extraction. We propose human head tracking for a partner robot using particle swarm optimization and a genetic algorithm. Experiments involving two maximum iteration numbers show that particle swarm optimization is more effective in solving this problem than the genetic algorithm
Adaptive particle swarm optimization
An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables the automatic control of inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as the convergence state. The strategy acts on the globally best particle to jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. Because APSO introduces only two new parameters to the PSO paradigm, it adds no significant design or implementation complexity
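The state-driven parameter control described above can be illustrated with a much simpler diversity-based inertia rule. This is a sketch only: the actual APSO classifies evolutionary states with a fuzzy estimation procedure, whereas the linear mapping, bounds, and function names below are assumptions for illustration:

```python
import math

def mean_spread(positions):
    # average distance of particles from the swarm centroid,
    # used here as a crude population-diversity signal
    dim = len(positions[0])
    centroid = [sum(p[d] for p in positions) / len(positions) for d in range(dim)]
    return sum(math.dist(p, centroid) for p in positions) / len(positions)

def adaptive_inertia(positions, span, w_min=0.4, w_max=0.9):
    # map normalized diversity in [0, 1] to an inertia weight:
    # a spread-out swarm (exploration) gets a large w, a clustered
    # swarm (convergence) gets a small w
    f = min(1.0, mean_spread(positions) / span)
    return w_min + (w_max - w_min) * f
```

The same signal could drive the acceleration coefficients; APSO's real contribution is doing this classification and control robustly at run time.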
Discrete Particle Swarm Optimization for the minimum labelling Steiner tree problem
Particle Swarm Optimization is an evolutionary method inspired by the
social behaviour of individuals inside swarms in nature. Solutions of the problem are
modelled as members of the swarm which fly in the solution space. The evolution is
obtained from the continuous movement of the particles that constitute the swarm
subject to the effects of inertia and the attraction of the members who lead the
swarm. This work focuses on a recent Discrete Particle Swarm Optimization for combinatorial optimization, called Jumping Particle Swarm Optimization. Its effectiveness is
illustrated on the minimum labelling Steiner tree problem: given an undirected labelled
connected graph, the aim is to find a spanning tree covering a given subset of nodes,
whose edges have the smallest number of distinct labels
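The objective that any metaheuristic for this problem must evaluate can be sketched as follows: keep only the edges whose label belongs to a candidate label set, then test via union-find whether all required nodes fall into one connected component. The function names and the infeasibility penalty are illustrative assumptions, not the paper's particle encoding:

```python
def covers(edges, chosen_labels, basic_nodes):
    """edges: list of (u, v, label). Return True if the subgraph
    restricted to chosen_labels connects all basic_nodes."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for u, v, lab in edges:
        if lab in chosen_labels:
            union(u, v)
    # all basic nodes must share a single root
    return len({find(n) for n in basic_nodes}) == 1

def label_cost(edges, chosen_labels, basic_nodes):
    # fitness: number of distinct labels used; infeasible sets penalized
    if not covers(edges, chosen_labels, basic_nodes):
        return float("inf")
    return len(chosen_labels)
```

A jumping particle then moves through the space of label subsets, attracted toward its own best and the swarm's best subsets.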
Least Squares Fitting of Chacón-Gielis Curves by the Particle Swarm Method of Optimization
Ricardo Chacón generalized Johan Gielis's superformula by introducing elliptic functions in place of trigonometric functions. In this paper an attempt has been made to fit the Chacón-Gielis curves (modified by various functions) to simulated data by the least-squares principle. Estimation has been done by the Particle Swarm (PS) methods of global optimization; in particular, the Repulsive Particle Swarm optimization algorithm has been used. It has been found that although the curve-fitting exercise may be satisfactory, a lack of uniqueness of Chacón-Gielis parameters with respect to the data from which they are estimated poses an insurmountable difficulty for the interpretation of findings.
Keywords: least squares multimodal nonlinear curve-fitting; Ricardo Chacón; Jacobian elliptic functions; Weierstrass; Gielis superformula; supershapes; Particle Swarm method; Repulsive Particle Swarm method of global optimization; nonlinear programming; multiple sub-optima; global and local optima; fit; empirical; estimation; cellular automata; fractals
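The fitness that each particle minimizes in such a least-squares fit is the sum of squared residuals between data and model. A generic sketch follows; the linear model in the test is a toy stand-in, not a Chacón-Gielis curve:

```python
def sse(params, model, xs, ys):
    # least-squares fitness: sum of squared residuals between the
    # observed ys and the model's predictions at xs
    return sum((y - model(x, params)) ** 2 for x, y in zip(xs, ys))
```

A global optimizer such as the Repulsive Particle Swarm then searches the parameter space for the `params` minimizing this sum; the non-uniqueness reported above means several distinct parameter vectors can achieve nearly the same minimal value.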
Triggered memory-based swarm optimization in dynamic environments
This is a post-print version of this article. Copyright © 2007 Springer-Verlag. In recent years, there has been increasing interest in dynamic optimization problems within the evolutionary computation community, since many real-world optimization problems are time-varying. In this paper, a triggered memory scheme is introduced into particle swarm optimization to deal with dynamic environments. The triggered memory scheme enhances the traditional memory scheme with a triggered memory generator. An experimental study on a benchmark dynamic problem shows that the triggered memory-based particle swarm optimization algorithm has stronger robustness and adaptability than traditional particle swarm optimization algorithms, both with and without the traditional memory scheme, for dynamic optimization problems
Do not be afraid of local minima: affine shaker and particle swarm
Stochastic local search techniques are powerful and flexible methods to optimize difficult functions. While each method is characterized by search trajectories produced through a randomized selection of the next step, a notable difference is caused by the interaction of different searchers, as exemplified by the Particle Swarm methods. In this paper we evaluate two extreme approaches, Particle Swarm Optimization, with interaction between the individual "cognitive" component and the "social" knowledge, and Repeated Affine Shaker, without any interaction between searchers but with an aggressive capability of scouting out local minima. The results, unexpected to the authors, show that Affine Shaker provides remarkably efficient and effective results when compared with PSO, while the advantage of Particle Swarm is visible only for functions with a very regular structure of the local minima leading to the global optimum and only for specific experimental conditions
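A minimal single-searcher sketch of the affine-shaker idea: sample inside an adaptive box around the incumbent, widen the box on success, narrow it on failure. The expansion and shrink factors and the iteration budget are assumptions, and the full method applies a general affine transformation rather than the per-axis scaling shown here:

```python
import random

def affine_shaker(f, x0, box=1.0, expand=2.0, shrink=0.5, iters=500, seed=0):
    # stochastic local search without any inter-searcher interaction:
    # keep a sampling box around the incumbent, enlarge it when a
    # sample improves, contract it when it does not
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    b = [box] * len(x0)
    for _ in range(iters):
        cand = [xi + rng.uniform(-bi, bi) for xi, bi in zip(x, b)]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            b = [bi * expand for bi in b]   # success: be bolder
        else:
            b = [bi * shrink for bi in b]   # failure: focus the search
    return x, fx
```

The "repeated" variant evaluated above simply restarts this searcher from fresh random points, which is how it scouts out many local minima without any swarm-style interaction.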
A review of velocity-type PSO variants
This paper presents a review of the particular variants of particle swarm optimization, based on the velocity-type class.
The original particle swarm optimization algorithm was developed as an unconstrained optimization technique, which
lacks a mechanism for handling constrained optimization problems. This inapplicability of particle swarm
optimization to constrained optimization problems is addressed using the dynamic-objective constraint-handling
method. The dynamic-objective constraint-handling method was originally developed for two variants of the basic
particle swarm optimization, namely restricted velocity particle swarm optimization and self-adaptive velocity
particle swarm optimization. Also within the velocity-type class, a review of three other variants is given,
specifically: (1) vertical particle swarm optimization; (2) velocity limited particle swarm optimization; and
(3) particle swarm optimization with escape velocity. These velocity-type particle swarm optimization variants
all have in common a velocity parameter which determines the direction and movement of the particles
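The shared mechanism these variants manipulate, in particular the cap applied by velocity-limited PSO, can be sketched in one function; the `v_max` value used in the test is an illustrative assumption:

```python
def clamp_velocity(velocity, v_max):
    # velocity-limited PSO: cap each velocity component to [-v_max, v_max]
    # so a particle cannot traverse the whole search space in one step
    return [max(-v_max, min(v_max, v)) for v in velocity]
```

Restricted-velocity and self-adaptive-velocity variants adjust the same quantity, but derive the bound dynamically from the constraints or the search state instead of fixing it in advance.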