4,517 research outputs found
Free Lunch for Optimisation under the Universal Distribution
Function optimisation is a major challenge in computer science. The No Free
Lunch theorems state that if all functions with the same histogram are assumed
to be equally probable, then no algorithm outperforms any other in expectation.
We argue against the uniform assumption and suggest that a universal prior
exists for which there is a free lunch, but under which no particular class of
functions is favoured over another. We also prove upper and lower bounds on the
size of the free lunch.
No-Free-Lunch Theorems in the continuum
No-Free-Lunch theorems state, roughly speaking, that the performance of all
search algorithms is the same when averaged over all possible objective
functions. This fact was precisely formulated for the first time in a now
famous paper by Wolpert and Macready, and subsequently refined and extended by
several authors, always in the context of functions with discrete domain and
codomain. Recently, Auger and Teytaud have shown that for continuum domains
there is typically no No-Free-Lunch theorem. In this paper we provide another
approach, which is simpler, requires fewer assumptions, relates the discrete
and continuum cases, and, we believe, clarifies the role of the cardinality
and structure of the domain.
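The averaging claim in the discrete setting can be checked exhaustively on a tiny instance. The following toy demo (not taken from either paper; domain size, search orders, and the performance measure are illustrative assumptions) enumerates every function from a three-point domain to {0, 1} and shows that two different fixed search orders achieve the same average performance:

```python
from itertools import product

X = [0, 1, 2]  # a three-point domain
Y = [0, 1]     # a two-point codomain

def run(order, f, m=2):
    """Best value seen after m evaluations, visiting points in a fixed order."""
    return max(f[x] for x in order[:m])

# Enumerate all |Y|^|X| = 8 objective functions on the domain.
all_functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]

alg_a = [0, 1, 2]  # one deterministic, non-repeating search order
alg_b = [2, 0, 1]  # a different deterministic search order

avg_a = sum(run(alg_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(run(alg_b, f) for f in all_functions) / len(all_functions)
# Averaged over all functions, both orders perform identically (both 0.75),
# as the No-Free-Lunch theorems predict for the uniform distribution.
```

Any other non-repeating search order gives the same average, since the values seen by each algorithm are identically distributed under the uniform prior over functions.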
The Sampling-and-Learning Framework: A Statistical View of Evolutionary Algorithms
Evolutionary algorithms (EAs), a large class of general-purpose optimization
algorithms inspired by natural phenomena, are widely used in industrial
optimization and often show excellent performance. This paper attempts to
reveal their general power from a statistical view of EAs. By summarizing a
large range of EAs into the sampling-and-learning framework, we show that the
framework directly admits a general analysis of the
probable-absolute-approximate (PAA) query complexity. We particularly focus on
the framework with the learning subroutine restricted to binary
classification, which results in the sampling-and-classification (SAC)
algorithms. With the help of learning theory, we obtain a general upper bound
on the PAA query complexity of SAC algorithms. We further compare SAC
algorithms with uniform search in different situations. Under the
error-target independence condition, we show that SAC algorithms can achieve
polynomial, but not super-polynomial, speedup over uniform search. Under the
one-side-error condition, we show that super-polynomial speedup can be
achieved. This work only touches the surface of the framework; its power under
other conditions is still open.
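As an illustrative sketch only (not the paper's algorithm or analysis), a minimal one-dimensional sampling-and-classification loop can use an interval as the binary classifier: label the best fraction of samples as the positive class, fit the interval covering them, and bias the next round of sampling toward that interval while keeping some uniform exploration. The function name, parameters, and their values below are assumptions for the demo:

```python
import random

def sac_minimize(f, lo=-5.0, hi=5.0, n=50, iters=20, alpha=0.2, lam=0.9, seed=0):
    """Sampling-and-classification sketch: the 'classifier' is an interval
    covering the best alpha-fraction of samples; new candidates are drawn
    from that interval with probability lam, and uniformly otherwise."""
    rng = random.Random(seed)
    region = (lo, hi)  # initially, everything is classified as 'good'
    best = None
    for _ in range(iters):
        # Sample: mostly from the learned region, sometimes uniformly.
        pts = [rng.uniform(*region) if rng.random() < lam else rng.uniform(lo, hi)
               for _ in range(n)]
        # Classify: the best alpha-fraction of samples is the positive class.
        pts.sort(key=f)
        good = pts[:max(1, int(alpha * n))]
        # Learn: fit the interval classifier covering the positive class.
        region = (min(good), max(good))
        if best is None or f(pts[0]) < f(best):
            best = pts[0]
    return best

x = sac_minimize(lambda x: x * x)  # minimise a simple quadratic on [-5, 5]
```

The uniform-sampling component (probability 1 - lam) keeps the search from collapsing onto a misclassified region, which loosely mirrors the role uniform search plays in the framework's comparisons.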
- …