Parallel transfer evolution algorithm
Parallelization of an evolutionary algorithm takes advantage of modular population division and information exchange among multiple processors. However, existing parallel evolutionary algorithms are rather ad hoc and lack the capability to adapt to diverse problems. To accommodate a wider range of problems and to reduce algorithm design costs, this paper develops a parallel transfer evolution algorithm. It is based on the island model of parallel evolutionary algorithms and, to improve performance, adaptively transfers both the connections and the evolutionary operators from one sub-population pair to another. Without requiring an extra upper-level selection strategy, each sub-population autonomously selects evolutionary operators and local search operators as subroutines, according to both its own ranking board and that of its connected neighbor. The parallel transfer evolution is tested on two typical combinatorial optimization problems against six existing ad-hoc evolutionary algorithms, and is also applied to a real-world case study against five typical parallel evolutionary algorithms. The tests show that the proposed scheme and the resultant parallel evolutionary algorithm (PEA) offer high flexibility in dealing with a wider range of combinatorial optimization problems without algorithmic modification or redesign. Both the topological transfer and the algorithmic transfer appear applicable not only to combinatorial optimization problems, but also to non-permutation complex problems.
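The island model underlying this abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: it is a toy genetic algorithm on the OneMax problem with a simple ring-migration topology, and all function names and parameter values here are illustrative assumptions.

```python
import random

def one_max(bits):
    """Toy fitness: number of ones in the bit string."""
    return sum(bits)

def evolve_island(pop, fitness, mutation_rate=0.05):
    """One generation of a simple GA on a single island (sub-population)."""
    pop = sorted(pop, key=fitness, reverse=True)
    elite = pop[: len(pop) // 2]                       # truncation selection
    children = []
    while len(elite) + len(children) < len(pop):
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, len(a))              # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < mutation_rate) for bit in child]
        children.append(child)
    return elite + children

def island_model(n_islands=4, pop_size=20, n_bits=32,
                 generations=50, migration_interval=10):
    """Evolve several islands; periodically migrate each island's best
    individual to its ring neighbor, replacing the neighbor's worst."""
    islands = [[[random.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(generations):
        islands = [evolve_island(p, one_max) for p in islands]
        if gen % migration_interval == 0:              # ring migration step
            migrants = [max(p, key=one_max) for p in islands]
            for i in range(n_islands):
                worst = min(islands[i], key=one_max)
                islands[i].remove(worst)
                islands[i].append(migrants[(i - 1) % n_islands])
    return max((ind for p in islands for ind in p), key=one_max)

random.seed(0)
best = island_model()
```

The paper's contribution goes further by transferring operators between sub-populations, not just individuals; this sketch only shows the baseline island structure that such transfer builds on.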
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. they are not known to be solvable in polynomial time. In practice, this means that one cannot guarantee that an exact solution will be found in reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged; these combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which largely determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
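The interplay of intensification and diversification that the report highlights can be seen in one of the simplest metaheuristics, simulated annealing. The sketch below is an illustrative assumption, not taken from the report: at high temperature the algorithm accepts worse moves (diversification), and as the temperature cools it increasingly keeps only improving moves (intensification).

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.95, steps=500):
    """Minimal simulated annealing: minimize cost(x) starting from x0.
    High temperature t favors diversification (worse moves accepted with
    probability exp(-delta / t)); cooling shifts toward intensification."""
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y                         # accept the move
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling                      # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize x^2 over the integers with a +/-1 neighborhood.
random.seed(1)
best, best_cost = simulated_annealing(
    cost=lambda x: x * x,
    neighbor=lambda x: x + random.choice([-1, 1]),
    x0=40)
```

The same two forces appear, in different guises, in tabu search, ant colony optimization and evolutionary algorithms, which is why the report treats them as the main axis for comparing metaheuristics.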
Denoising Autoencoders for fast Combinatorial Black Box Optimization
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AEs) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and of RBM-EDA, another EDA based on a generative neural network that has proven competitive with BOA. For the considered problem instances, DAE-EDA is considerably faster than BOA and RBM-EDA, sometimes by orders of magnitude. The number of fitness evaluations is higher than for BOA, but competitive with RBM-EDA. These results show that DAEs can be useful tools for problems with low but non-negligible fitness evaluation costs.
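The DAE-EDA idea can be sketched in miniature: select the better half of the population, train a small denoising autoencoder on it, then generate children by corrupting parents and denoising them. This is a toy reconstruction under stated assumptions (NumPy only, a single hidden layer, OneMax as the objective), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyDAE:
    """Minimal denoising autoencoder over binary vectors (illustrative)."""
    def __init__(self, n_vis, n_hid=16, lr=0.1):
        self.W1 = rng.normal(0.0, 0.1, (n_vis, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 0.1, (n_hid, n_vis))
        self.b2 = np.zeros(n_vis)
        self.lr = lr

    def forward(self, x):
        h = sigmoid(x @ self.W1 + self.b1)
        return h, sigmoid(h @ self.W2 + self.b2)

    def train(self, X, epochs=30, noise=0.2):
        for _ in range(epochs):
            mask = rng.random(X.shape) < noise     # flip a fraction of bits
            Xc = np.where(mask, 1.0 - X, X)
            h, out = self.forward(Xc)
            d_out = out - X                        # sigmoid + cross-entropy grad
            d_h = (d_out @ self.W2.T) * h * (1.0 - h)
            self.W2 -= self.lr * h.T @ d_out / len(X)
            self.b2 -= self.lr * d_out.mean(axis=0)
            self.W1 -= self.lr * Xc.T @ d_h / len(X)
            self.b1 -= self.lr * d_h.mean(axis=0)

    def sample(self, X, noise=0.2):
        """New candidates: corrupt parents, denoise, then sample bits."""
        mask = rng.random(X.shape) < noise
        Xc = np.where(mask, 1.0 - X, X)
        _, out = self.forward(Xc)
        return (rng.random(out.shape) < out).astype(float)

def one_max(X):
    return X.sum(axis=1)

def dae_eda(n_bits=20, pop=40, gens=15):
    X = (rng.random((pop, n_bits)) < 0.5).astype(float)
    for _ in range(gens):
        order = np.argsort(-one_max(X))
        parents = X[order[: pop // 2]]             # truncation selection
        dae = TinyDAE(n_bits)
        dae.train(parents)
        children = dae.sample(parents)
        X = np.vstack([parents, children])         # elitist replacement
    return one_max(X).max()

best_fitness = dae_eda()
```

The abstract's point about CPU time shows up even at this scale: the model is retrained each generation, so cheap training and sampling matter more than squeezing out every fitness evaluation.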
Döntéstámogatás fuzzy módszerekkel, optimalizálással = Decision support with fuzzy methods and optimization.
Between 2003 and 2006 my research had two directions: fuzzy classification and optimization with evolutionary algorithms (EAs). In the fuzzy topic I published an EA-based fuzzy classification algorithm. In the optimization topic I developed several algorithms that used different EA structures and memory-based techniques. I published several single-objective algorithms, e.g. for non-linear optimization with parallel EAs and for combinatorial optimization (QAP, 3-SAT, BQP, TSP), as well as multi-objective algorithms: a non-linear optimization method and combinatorial optimization methods (QAP, TSP). Several of my published algorithms achieved results of similar or better quality than those of other EA methods known at the time of publication.