Playing Mastermind With Constant-Size Memory
We analyze the classic board game of Mastermind with $n$ holes and a constant
number of colors. A result of Chvátal (Combinatorica 3 (1983), 325-329)
states that the codebreaker can find the secret code with $\Theta(n / \log n)$
questions. We show that this bound remains valid if the codebreaker may only
store a constant number of guesses and answers. In addition to an intrinsic
interest in this question, our result also disproves a conjecture of Droste,
Jansen, and Wegener (Theory of Computing Systems 39 (2006), 525-544) on the
memory-restricted black-box complexity of the OneMax function class.
Comment: 23 pages
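The query model behind this abstract can be made concrete with a small sketch (all names and parameters here are illustrative, not from the paper): each guess is answered with the number of positions that match the secret code, and the codebreaker must extract roughly $\log n$ bits per query to meet the $\Theta(n / \log n)$ bound.

```python
# Sketch of the Mastermind query model with n holes and k colors.
# Names and parameters are illustrative assumptions, not from the paper.
import random

def black_pegs(guess, code):
    """Feedback: number of positions where the guess matches the secret code."""
    return sum(g == c for g, c in zip(guess, code))

n, k = 16, 3                                   # n holes, constant number k of colors
code = [random.randrange(k) for _ in range(n)]  # hidden secret code

# A memory-restricted codebreaker keeps only a constant number of
# (guess, answer) pairs; here we store a single one.
guess = [random.randrange(k) for _ in range(n)]
answer = black_pegs(guess, code)
assert 0 <= answer <= n
```

Each answer is a number between 0 and $n$, i.e. it carries at most $\log_2(n+1)$ bits; this is the information-theoretic reason why $\Omega(n / \log n)$ queries are needed.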
Complexity Theory for Discrete Black-Box Optimization Heuristics
A predominant topic in the theory of evolutionary algorithms and, more
generally, theory of randomized black-box optimization techniques is running
time analysis. Running time analysis aims at understanding the performance of a
given heuristic on a given problem by bounding the number of function
evaluations that are needed by the heuristic to identify a solution of a
desired quality. As in general algorithms theory, this running time perspective
is most useful when it is complemented by a meaningful complexity theory that
studies the limits of algorithmic solutions.
In the context of discrete black-box optimization, several black-box
complexity models have been developed to analyze the best possible performance
that a black-box optimization algorithm can achieve on a given problem. The
models differ in the classes of algorithms to which these lower bounds apply.
This way, black-box complexity contributes to a better understanding of how
certain algorithmic choices (such as the amount of memory used by a heuristic,
its selective pressure, or properties of the strategies that it uses to create
new solution candidates) influence performance.
In this chapter we review the different black-box complexity models that have
been proposed in the literature, survey the bounds that have been obtained for
these models, and discuss how the interplay of running time analysis and
black-box complexity can inspire new algorithmic solutions to well-researched
problems in evolutionary computation. We also discuss in this chapter several
interesting open questions for future work.
Comment: This survey article is to appear (in a slightly modified form) in the
book "Theory of Randomized Search Heuristics in Discrete Search Spaces",
which will be published by Springer in 2018. The book is edited by Benjamin
Doerr and Frank Neumann. Missing numbers of pointers to other chapters of
this book will be added as soon as possible.
OneMax in Black-Box Models with Several Restrictions
Black-box complexity studies lower bounds for the efficiency of
general-purpose black-box optimization algorithms such as evolutionary
algorithms and other search heuristics. Different models exist, each one being
designed to analyze a different aspect of typical heuristics such as the memory
size or the variation operators in use. While most of the previous works focus
on one particular such aspect, we consider in this work how the combination of
several algorithmic restrictions influences the black-box complexity. Our
testbed is the class of so-called OneMax functions, a classical set of test
functions that is intimately related to classic coin-weighing problems and to
the board game Mastermind.
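The OneMax testbed and its Mastermind connection can be illustrated with a minimal sketch (names are illustrative assumptions): a OneMax-style objective counts the positions where a bit string agrees with a hidden target, which is exactly the black-peg answer in Mastermind with 2 colors.

```python
# OneMax with hidden target z: f_z(x) = number of bit positions where x
# agrees with z. This equals Mastermind feedback with 2 colors.
# (Illustrative sketch, not code from the paper.)
def onemax(target, x):
    return sum(t == b for t, b in zip(target, x))

target = [1, 0, 1, 1, 0, 0, 1, 0]
assert onemax(target, target) == len(target)          # the target is optimal
assert onemax(target, [1 - b for b in target]) == 0   # its complement is worst
```

The classic instance with target $1^n$ is the familiar "count the ones" function; the full class contains one such function per possible target.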
We analyze in particular the combined memory-restricted ranking-based
black-box complexity of OneMax for different memory sizes. While its isolated
memory-restricted as well as its ranking-based black-box complexity for bit
strings of length $n$ is only of order $n/\log n$, the combined model does not
allow for algorithms being faster than linear in $n$, as can be seen by
standard information-theoretic considerations. We show that this linear bound
is indeed asymptotically tight. Similar results are obtained for other memory-
and offspring-sizes. Our results also apply to the (Monte Carlo) complexity of
OneMax in the recently introduced elitist model, in which only the best-so-far
solution can be kept in the memory. Finally, we also provide improved lower
bounds for the complexity of OneMax in the regarded models.
Our result enlivens the quest for natural evolutionary algorithms optimizing
OneMax in $o(n \log n)$ iterations.
Comment: This is the full version of a paper accepted to GECCO 2015.
Toward a complexity theory for randomized search heuristics: black-box models
Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. While several strong runtime bounds exist, a powerful complexity theory for such algorithms is yet to be developed. We contribute to this goal in several aspects. In a first step, we analyze existing black-box complexity models. Our results indicate that these models are not restrictive enough. This remains true if we restrict the memory of the algorithms under consideration. These results motivate us to enrich the existing notions of black-box complexity by the additional restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions may be taken into account by the algorithms. Many heuristics belong to this class of algorithms. We show that our ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold. Surprisingly, our results have an interesting game-theoretic aspect as well. We show that analyzing the black-box complexity of the OneMax$_n$ function class (a class often regarded to analyze how heuristics progress in easy parts of the search space) is the same as analyzing optimal winning strategies for the generalized Mastermind game with 2 colors and length-$n$ codewords. This connection was seemingly overlooked so far in the search heuristics community.
Randomized search heuristics are versatile algorithms that, owing to their high flexibility, are widely used, not only in industrial contexts. Despite numerous successful applications, the runtime analysis of such heuristics is still in its infancy. In particular, we lack a good understanding of the situations in which problem-independent heuristics can deliver good solutions within a short runtime.
A complexity theory similar to the one that exists in classical algorithmics would be desirable. With this work we contribute to the development of such a complexity theory for search heuristics. Using several examples, we show that existing models do not always capture the difficulty of a problem satisfactorily. We therefore propose an additional model. In our Ranking-Based Black-Box Model, the algorithms do not learn exact function values, but only the ranking of the search points queried so far. For some problems, this model gives a better estimate of the difficulty. However, we also show that the new model still contains problems whose complexity it rates as too low. Our results also have a game-theoretic aspect. Optimal winning strategies for the guesser in the Mastermind game (also known as SuperHirn) with $n$ positions correspond exactly to optimal algorithms for maximizing OneMax$_n$ functions. This connection apparently went unnoticed until now. This thesis is written in English.
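The ranking-based restriction can be sketched with a toy oracle (interface and names are illustrative assumptions, not from the thesis): the algorithm never sees objective values, only the relative order of the points it has queried so far.

```python
# Sketch of a ranking-based black-box oracle: the algorithm receives only
# the ranking of previously evaluated solutions, never their objective
# values. (Interface and names are illustrative assumptions.)
class RankingOracle:
    def __init__(self, f):
        self._f = f            # hidden objective; never exposed directly
        self._history = []     # queried points, in query order

    def query(self, x):
        self._history.append(x)
        values = [self._f(p) for p in self._history]
        # Rank of each queried point so far (0 = worst); ties share a rank.
        order = sorted(set(values))
        return [order.index(v) for v in values]

oracle = RankingOracle(lambda x: sum(x))  # OneMax with target 1^n
r = oracle.query([1, 0, 1])
r = oracle.query([1, 1, 1])
assert r == [0, 1]                        # the second point ranks higher
```

Comparison-based heuristics such as elitist evolutionary algorithms use exactly this kind of information, which is why the model covers them naturally.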
A Fitness Function Elimination Theory For Blackbox Optimization And Problem Class Learning
The modern view of optimization is that optimization algorithms are not designed in a vacuum, but can make use of information regarding the broad class of objective functions from which a problem instance is drawn. Using this knowledge, we want to design optimization algorithms that execute quickly (efficiency), solve the objective function with minimal samples (performance), and are applicable over a wide range of problems (abstraction). However, we present a new theory for blackbox optimization from which we conclude that of these three desired characteristics, only two can be maximized by any algorithm. We put forward an alternate view of optimization where we use knowledge about the problem class and samples from the problem instance to identify which problem instances from the class are being solved. From this Elimination of Fitness Functions approach, an idealized optimization algorithm that minimizes sample counts over any problem class, given complete knowledge about the class, is designed. This theory allows us to learn more about the difficulty of various problems, and we are able to use it to develop problem complexity bounds. We present general methods to model this algorithm over a particular problem class and gain efficiency at the cost of specifically targeting that class. This is demonstrated over the Generalized Leading-Ones problem and a generalization called LOââ , and efficient algorithms with optimal performance are derived and analyzed. We also tighten existing bounds for LOâââ. Additionally, we present a probabilistic framework based on our Elimination of Fitness Functions approach that clarifies how one can ideally learn about the problem class we face from the objective functions. This problem learning increases the performance of an optimization algorithm at the cost of abstraction.
In the context of this theory, we re-examine the blackbox framework as an algorithm design framework and suggest several improvements to existing methods, including incorporating problem learning, not being restricted to the blackbox framework, and building parametrized algorithms. We feel that this theory and our recommendations will help a practitioner make substantially better use of all that is available in typical practical optimization algorithm design scenarios.
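The Elimination of Fitness Functions idea described above can be sketched in a few lines (a minimal sketch under assumed names, not the dissertation's code): maintain the set of problem instances consistent with all observed samples, and shrink it with each query.

```python
# Sketch of elimination of fitness functions: keep every instance of the
# problem class that agrees with all observed (query, value) pairs.
# (Illustrative sketch; names are assumptions, not the dissertation's code.)
from itertools import product

def consistent(instances, history):
    """Instances whose objective agrees with every recorded evaluation."""
    return [f for f in instances if all(f(x) == v for x, v in history)]

# Toy problem class: one OneMax-style objective per hidden 3-bit target.
def make_onemax(target):
    return lambda x: sum(t == b for t, b in zip(target, x))

instances = [make_onemax(t) for t in product([0, 1], repeat=3)]

history = [((1, 1, 0), 2)]                 # one sample: f((1, 1, 0)) = 2
remaining = consistent(instances, history)
# Only targets at Hamming distance 1 from (1, 1, 0) agree: exactly 3 remain.
assert len(remaining) == 3
```

An idealized optimizer would pick each next query to shrink the consistent set as fast as possible, which is what ties sample counts to problem-class difficulty.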