Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
Metaheuristic algorithms are often nature-inspired, and they have become
powerful tools for solving global optimization problems. More than a dozen
major metaheuristic algorithms have been developed over the last three decades,
and even more variants and hybrids of metaheuristics exist. This paper
provides an overview of nature-inspired metaheuristic algorithms,
from a brief history to their applications. We analyze the main
components of these algorithms and how and why they work. We then
propose a unified view of metaheuristics in the form of a generalized
evolutionary walk algorithm (GEWA). Finally, we discuss some important
open questions.
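The stochastic-walk component common to many such metaheuristics can be illustrated with a minimal sketch: propose a random perturbation of the current solution and keep it only if the objective improves. This is a generic random-walk hill climber on an assumed test function (the sphere), not the GEWA algorithm proposed in the paper.

```python
import random

def random_walk_optimize(f, x0, step=0.1, iters=2000, seed=0):
    """Minimal stochastic-walk minimizer: propose a Gaussian
    perturbation of the current point and keep it only if it
    improves f. Illustrative sketch only, not the paper's GEWA."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx:          # greedy acceptance: keep only improvements
            x, fx = cand, fc
    return x, fx

# Usage: minimize the sphere function, whose optimum is the origin.
sphere = lambda x: sum(xi * xi for xi in x)
best, val = random_walk_optimize(sphere, [2.0, -3.0])
```

Real metaheuristics differ mainly in how they escape the greediness of this loop, e.g. by occasionally accepting worsening moves or maintaining a population of walkers.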
An Investigation into the Merger of Stochastic Diffusion Search and Particle Swarm Optimisation
This study reports early research aimed at applying the powerful resource allocation mechanism deployed in Stochastic Diffusion Search (SDS) to the Particle Swarm Optimiser (PSO) metaheuristic, effectively merging the two swarm intelligence algorithms. The results reported here suggest that the hybrid algorithm, which exploits information sharing between particles, has the potential to improve the optimisation capability of conventional PSOs.
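For reference, the conventional PSO that the hybrid builds on can be sketched as follows. This is the standard global-best PSO update only; the SDS-based resource allocation mechanism described in the paper is not reproduced here, and the parameter values are common textbook defaults, not those of the study.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal global-best PSO (standard algorithm only; the
    paper's SDS hybridization is not shown)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5    # inertia and acceleration weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:           # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:          # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, val = pso_minimize(sphere, dim=2)
```

The information sharing mentioned in the abstract enters through the `gbest` term, which pulls every particle toward the best solution found by the swarm so far.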
A New Mechanism for Maintaining Diversity of Pareto Archive in Multiobjective Optimization
The article introduces a new mechanism for selecting individuals to a Pareto
archive. It was combined with a micro-genetic algorithm and tested on several
problems. The ability of this approach to produce individuals uniformly
distributed along the Pareto set without negative impact on convergence is
demonstrated on presented results. The new concept was confronted with NSGA-II,
SPEA2, and IBEA algorithms from the PISA package. Another studied effect is the
size of population versus number of generations for small populations.Comment: 51 pages, 28 figure
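A widely used baseline for maintaining diversity in a bounded Pareto archive is NSGA-II-style crowding distance, against which mechanisms like the one above are typically compared. The sketch below shows that baseline, not the paper's new selection mechanism.

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance for a list of objective
    vectors (baseline illustration; not the paper's mechanism)."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        # Boundary points get infinite distance so they are always kept.
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / span
    return dist

def truncate_archive(front, max_size):
    """Keep the max_size most widely spaced points of the front."""
    d = crowding_distance(front)
    keep = sorted(range(len(front)), key=lambda i: -d[i])[:max_size]
    return [front[i] for i in sorted(keep)]

# Usage: the most crowded point ([1.1, 2.9], close to [1.0, 3.0])
# is dropped when the archive is truncated to four entries.
pts = [[0.0, 4.0], [1.0, 3.0], [1.1, 2.9], [2.0, 2.0], [4.0, 0.0]]
kept = truncate_archive(pts, 4)
```

Truncating by crowding distance preserves the extremes of the front and thins out clustered regions, which is the uniform-spread property the abstract evaluates.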
Incremental evolution strategy for function optimization
This paper presents a novel evolutionary approach for function optimization: the Incremental Evolution Strategy (IES). Two strategies are proposed. The first is to evolve the input variables incrementally: the whole evolution consists of several phases, with one more variable considered in each phase, so the number of phases is at most equal to the number of variables. Each phase is composed of two stages. In the single-variable evolution (SVE) stage, evolution operates on one independent variable within a series of cutting planes; in the multi-variable evolution (MVE) stage, the initial population is formed by merging the populations obtained from the SVE and the MVE of the previous phase, and evolution operates on the enlarged variable set. The second strategy is a hybrid of particle swarm optimization (PSO) and evolution strategy (ES): PSO is applied to adjust the cutting planes/hyperplanes (in SVEs/MVEs), while a (1+1)-ES searches for optima within those planes. Experimental results show that IES generally outperforms three other evolutionary algorithms (an improved normal GA, PSO, and SADE_CERAF), in the sense that IES finds solutions closer to the true optima and with better objective values.
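The inner (1+1)-ES used within the cutting planes can be sketched in isolation. This is a textbook (1+1)-ES with the 1/5th success rule for step-size adaptation, applied to an assumed test function; the incremental variable scheduling and PSO plane adjustment of IES are not reproduced.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=500, seed=2):
    """Basic (1+1)-ES with 1/5th-success-rule step-size
    adaptation. Sketch of the inner search stage only; IES's
    incremental phases are not shown."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for t in range(1, iters + 1):
        # One parent produces one mutated child per generation.
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc < fx:                 # child replaces parent if better
            x, fx = child, fc
            successes += 1
        if t % 20 == 0:             # adapt sigma every 20 generations:
            rate = successes / 20   # grow if >1/5 succeed, else shrink
            sigma *= 1.5 if rate > 0.2 else 0.6
            successes = 0
    return x, fx

sphere = lambda x: sum(xi * xi for xi in x)
best, val = one_plus_one_es(sphere, [3.0, 3.0])
```

Restricting such a search to a one-dimensional cutting plane, as the SVE stage does, reduces each phase to a much easier line search.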
Hybrid behavioural-based multi-objective space trajectory optimization
In this chapter we present a hybridization of a stochastic search approach for multi-objective optimization with a deterministic domain decomposition of the solution space. Before presenting the algorithm, we introduce a general formulation of the optimization problem that is suitable for describing both single- and multi-objective problems. The stochastic approach, based on behaviorism, combined with the decomposition of the solution space, was tested on a set of standard multi-objective optimization problems and on a simple but representative case of space trajectory design.
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. The increasing interest in it arises from the fact that a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. it is not known whether they can be solved in polynomial time. In practice, this means that an exact solution often cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error from the true optimum). In the last 20 years, a new kind of algorithm, commonly called metaheuristics, has emerged in this class; these basically try to combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
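The intensification/diversification trade-off the report highlights is visible in compact form in simulated annealing, where a single temperature parameter moves the search from diversification (accepting worsening moves) to intensification (near-greedy refinement). The sketch below is a generic illustration on an assumed continuous test function, not an algorithm from the report.

```python
import math
import random

def simulated_annealing(f, x0, temp=5.0, cooling=0.995,
                        iters=3000, seed=3):
    """Simulated annealing as a one-function illustration of the
    diversification/intensification trade-off: high temperature
    accepts worsening moves (diversify), and geometric cooling
    makes the search increasingly greedy (intensify)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, best_val = x[:], fx
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, 0.3) for xi in x]
        fc = f(cand)
        # Metropolis criterion: always accept improvements; accept
        # worsening moves with probability exp(-(fc - fx) / temp).
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < best_val:       # track the best solution seen
                best, best_val = x[:], fx
        temp *= cooling             # geometric cooling schedule
    return best, best_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = simulated_annealing(sphere, [2.0, 2.0])
```

Population-based metaheuristics realize the same trade-off differently, e.g. through mutation rates or swarm attraction coefficients, but the underlying tension is the one shown here.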