The Sampling-and-Learning Framework: A Statistical View of Evolutionary Algorithms
Evolutionary algorithms (EAs), a large class of general-purpose optimization
algorithms inspired by natural phenomena, are widely used in industrial
optimization and often show excellent performance. This paper presents an
attempt to reveal their general power from a statistical view. By summarizing
a large range of EAs into the sampling-and-learning
framework, we show that the framework directly admits a general analysis on the
probable-absolute-approximate (PAA) query complexity. We particularly focus on
the framework with the learning subroutine restricted to binary
classification, which results in the sampling-and-classification (SAC)
algorithms. With the help of learning theory, we obtain a general upper
bound on the PAA query complexity of SAC algorithms. We further compare SAC
algorithms with uniform search in different situations. Under the
error-target independence condition, we show that SAC algorithms can achieve
a polynomial speedup over uniform search, but not a super-polynomial speedup.
Under the one-side-error condition, we show that super-polynomial speedup can
be achieved. This work only scratches the surface of the framework; its power
under other conditions remains an open question.
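The sampling-and-classification loop described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the function names and parameters are invented here, and the "binary classifier" is a simple axis-aligned box fit to the best samples, whereas real SAC algorithms may plug in any learner.

```python
import random

def sac_minimize(f, dim, budget=2000, batch=50, top_frac=0.2, explore=0.1):
    """Toy sampling-and-classification sketch on [0, 1]^dim.

    Each round: sample a batch (mostly from the learned "positive" region,
    occasionally uniformly), label the best top_frac as positive, and
    "learn" a new positive region as the box enclosing those samples.
    """
    lo, hi = [0.0] * dim, [1.0] * dim          # current positive region
    best_x, best_y = None, float("inf")
    evals = 0
    while evals < budget:
        pts = []
        for _ in range(batch):
            if random.random() < explore:      # uniform exploration
                x = [random.random() for _ in range(dim)]
            else:                              # sample from learned region
                x = [random.uniform(lo[i], hi[i]) for i in range(dim)]
            pts.append((f(x), x))
            evals += 1
        pts.sort(key=lambda p: p[0])
        if pts[0][0] < best_y:
            best_y, best_x = pts[0]
        good = [x for _, x in pts[: max(1, int(top_frac * batch))]]
        # "classifier": box enclosing the positively-labelled samples
        lo = [min(x[i] for x in good) for i in range(dim)]
        hi = [max(x[i] for x in good) for i in range(dim)]
    return best_x, best_y
```

Each evaluation of `f` is one query, so `budget` plays the role of the query complexity being bounded in the analysis.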
Performance Analysis of Evolutionary Algorithms for the Minimum Label Spanning Tree Problem
Some experimental investigations have shown that evolutionary algorithms
(EAs) are efficient for the minimum label spanning tree (MLST) problem.
However, little is known about their performance in theory. As a step towards
this issue, we theoretically analyze the performance of the (1+1) EA, a simple
version of EAs, and a multi-objective evolutionary algorithm called GSEMO on
the MLST problem. We reveal that for the MLST problem the (1+1) EA and GSEMO
achieve a guaranteed approximation ratio in expected time polynomial in the
number of nodes and the number of labels. We also show that GSEMO achieves a
further approximation guarantee for the MLST problem in expected time
polynomial in these two quantities. At the same time, we show that the (1+1) EA and
GSEMO outperform local search algorithms on three instances of the MLST
problem. We also construct an instance on which GSEMO outperforms the (1+1) EA.
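The two algorithms analyzed above are standard and compact enough to sketch. The following is a generic textbook-style rendering on bitstrings (the MLST encoding and fitness functions from the paper are not reproduced here): the (1+1) EA keeps a single solution, while GSEMO keeps an archive of mutually non-dominated solutions.

```python
import random

def mutate(x, p):
    """Bit-wise mutation: flip each bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in x]

def one_plus_one_ea(fitness, n, max_evals=10000):
    """(1+1) EA: single parent, accept an offspring that is at least
    as fit (maximization)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_evals):
        y = mutate(x, 1.0 / n)
        fy = fitness(y)
        if fy >= fx:                 # >= allows neutral moves
            x, fx = y, fy
    return x, fx

def gsemo(objectives, n, max_evals=10000):
    """GSEMO: mutate a random archive member; insert the child unless
    some archived point weakly dominates it, then drop points the
    child weakly dominates (all objectives maximized)."""
    def weakly_dominates(a, b):
        return all(ai >= bi for ai, bi in zip(a, b))

    x = [random.randint(0, 1) for _ in range(n)]
    archive = [(x, objectives(x))]
    for _ in range(max_evals):
        parent, _ = random.choice(archive)
        child = mutate(parent, 1.0 / n)
        fc = objectives(child)
        if any(weakly_dominates(fa, fc) for _, fa in archive):
            continue                 # child is (weakly) dominated: discard
        archive = [(a, fa) for a, fa in archive
                   if not weakly_dominates(fc, fa)]
        archive.append((child, fc))
    return archive
```

On a benchmark such as OneMax, the (1+1) EA finds the optimum in expected O(n log n) evaluations; GSEMO's archive converges to the Pareto front of a bi-objective problem such as LOTZ.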
On the approximation ability of evolutionary optimization with application to minimum set cover
Evolutionary algorithms (EAs) are heuristic algorithms inspired by natural
evolution. They are often used to obtain satisficing solutions in practice. In
this paper, we investigate a largely underexplored issue: the approximation
performance of EAs in terms of how close the solution obtained is to an optimal
solution. We study an EA framework named simple EA with isolated population
(SEIP) that can be implemented as a single- or multi-objective EA. We analyze
the approximation performance of SEIP using the partial ratio, which
characterizes the approximation ratio that can be guaranteed. Specifically, we
analyze SEIP on the set cover problem, which is NP-hard. We find that in a
simple configuration, SEIP efficiently achieves an approximation ratio
matching the asymptotic lower bound for the unbounded set cover problem. We
also find that SEIP efficiently achieves an approximation ratio matching the
currently best-achievable result for the k-set cover problem.
Moreover, for an instance class of the k-set cover problem, we disclose how
SEIP, using either one-bit or bit-wise mutation, can overcome the difficulty
that limits the greedy algorithm.
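For reference, the greedy algorithm that SEIP is compared against is the classic one for set cover: repeatedly pick the set covering the most still-uncovered elements. A minimal sketch (the instance below is made up for illustration):

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover: at each step choose the set that covers
    the largest number of still-uncovered elements.

    Returns the indices of the chosen sets. This rule guarantees an
    H_n-approximation but can be trapped on specially built instances,
    which is the kind of difficulty mutation-based EAs may escape.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

The greedy choice is locally optimal but irrevocable; an EA that mutates whole cover candidates can revisit and repair early choices, which is the intuition behind the instance class discussed in the abstract.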