
    When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument

    Hillclimbing is an essential part of any optimization algorithm. An important benchmark for hillclimbing algorithms on pseudo-Boolean functions f: {0,1}^n → ℝ are the (strictly) monotone functions, on which a surprising number of hillclimbers fail to be efficient. For example, the (1+1) Evolutionary Algorithm is a standard hillclimber which flips each bit independently with probability c/n in each round. Perhaps surprisingly, this algorithm shows a phase transition: it optimizes every monotone pseudo-Boolean function in quasilinear time if c < 1, but there are monotone functions on which the algorithm needs exponential time if c > 2.2. So far, however, it was unclear whether the threshold is at c = 1. In this paper we show how Moser's entropy compression argument can be adapted to this situation; that is, we show that a long runtime would allow us to encode the random steps of the algorithm with fewer bits than their entropy. Thus there exists a c_0 > 1 such that for all 0 < c ≀ c_0 the (1+1) Evolutionary Algorithm with rate c/n finds the optimum in O(n log^2 n) steps in expectation.
    Comment: 14 pages, no figures
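    A minimal Python sketch of the (1+1) Evolutionary Algorithm described in the abstract, run here on OneMax (the number of one-bits) as an illustrative monotone function; the test function, problem size, and parameter choices below are illustrative assumptions, not taken from the paper:

    ```python
    import random

    def one_max(x):
        # OneMax: a simple strictly monotone pseudo-Boolean function
        return sum(x)

    def one_plus_one_ea(f, n, c=1.0, max_steps=100_000, rng=random):
        """(1+1) EA with mutation rate c/n; returns the step at which
        the all-ones optimum of OneMax was first reached."""
        x = [rng.randint(0, 1) for _ in range(n)]
        for step in range(max_steps):
            if sum(x) == n:                      # global optimum of OneMax
                return step
            # offspring: flip each bit independently with probability c/n
            y = [b ^ (rng.random() < c / n) for b in x]
            if f(y) >= f(x):                     # elitist selection: keep if not worse
                x = y
        return max_steps

    # A run in the efficient regime c < 1 (seeded for reproducibility)
    steps = one_plus_one_ea(one_max, n=50, c=0.9, rng=random.Random(1))
    ```
    
    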

    On the Computational Cost and Complexity of Stochastic Inverse Solvers

    The goal of this paper is to provide a starting point for investigations into a largely underdeveloped area of research: the computational cost analysis of complex stochastic strategies for solving parametric inverse problems. This area has two main components: solving global optimization problems and solving forward problems (to evaluate the misfit function that we try to minimize). For the first component, we pay particular attention to genetic algorithms with heuristics and to multi-deme algorithms that can be modeled as ergodic Markov chains. We recall a simple method for evaluating the first hitting time for the single-deme algorithm and we extend it to the case of HGS, a multi-deme hierarchic strategy. We focus on the case in which at least the demes in the leaves are well tuned. Finally, we also express the problems of finding local and global optima in terms of classical complexity theory. We formulate the natural result that finding a local optimum of a function is an NP-complete task, and we argue that finding a global optimum is a much harder, DP-complete, task. Furthermore, we argue that finding all global optima is possibly an even harder (#P-hard) task. Regarding the second component of solving parametric inverse problems (i.e., the forward problem solvers), we discuss the computational cost of hp-adaptive Finite Element solvers and their rates of convergence with respect to the increasing number of degrees of freedom. The presented results provide a useful taxonomy of problems and methods for studying the computational cost and complexity of various strategies for solving inverse parametric problems. We stress, however, that our goal was not to deliver detailed evaluations of particular algorithms applied to particular inverse problems, but rather to identify possible ways of obtaining such results.
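    The first-hitting-time idea mentioned in the abstract can be illustrated on a toy absorbing Markov chain (the chain, its transition probabilities, and the "search state" interpretation below are hypothetical, not from the paper): expected hitting times h satisfy h_i = 1 + ÎŁ_j P[i][j]·h[j] with h = 0 at the target state, which a fixed-point iteration solves.

    ```python
    # Toy 3-state chain: state 2 plays the role of the optimum (absorbing).
    P = [
        [0.5, 0.4, 0.1],   # state 0: far from the optimum
        [0.2, 0.5, 0.3],   # state 1: closer to the optimum
        [0.0, 0.0, 1.0],   # state 2: the optimum, absorbing
    ]
    GOAL = 2

    def expected_hitting_times(P, goal, iters=10_000):
        """Fixed-point iteration on h_i = 1 + sum_j P[i][j] * h[j],
        with h[goal] pinned to 0; converges since the transient part
        of the chain is a contraction."""
        n = len(P)
        h = [0.0] * n
        for _ in range(iters):
            new = [0.0] * n
            for i in range(n):
                if i == goal:
                    continue
                new[i] = 1.0 + sum(P[i][j] * h[j] for j in range(n))
            h = new
        return h

    h = expected_hitting_times(P, GOAL)
    # h[0] and h[1] are the expected numbers of steps to reach the optimum
    ```
    
    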

    Optimizing Monotone Functions Can Be Difficult

    Extending previous analyses of function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotone. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c/n can make a decisive difference. We show that if c < 1, then the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations. For c = 1, we can still prove an upper bound of O(n^{3/2}). However, for c > 33, we present a strictly monotone function such that the (1+1) EA with overwhelming probability does not find the optimum within 2^{Ω(n)} iterations. This is the first time that we observe that a constant-factor change of the mutation probability changes the runtime by more than constant factors.

    Toward a complexity theory for randomized search heuristics: black-box models

    Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. While several strong runtime bounds exist, a powerful complexity theory for such algorithms is yet to be developed. We contribute to this goal in several respects. In a first step, we analyze existing black-box complexity models. Our results indicate that these models are not restrictive enough. This remains true if we restrict the memory of the algorithms under consideration. These results motivate us to enrich the existing notions of black-box complexity by the additional restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions may be taken into account by the algorithms. Many heuristics belong to this class of algorithms. We show that our ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold. Surprisingly, our results have an interesting game-theoretic aspect as well. We show that analyzing the black-box complexity of the OneMax_n function class (a class often used to analyze how heuristics progress in easy parts of the search space) is the same as analyzing optimal winning strategies for the generalized Mastermind game with 2 colors and length-n codewords. This connection was seemingly overlooked so far in the search heuristics community.
    Randomized search heuristics are versatile algorithms which, owing to their high flexibility, are widely used, and not only in industrial contexts. Despite numerous successful applications, the runtime analysis of such heuristics is still in its infancy. In particular, we lack a good understanding of the situations in which problem-independent heuristics can deliver good solutions in a short time. A complexity theory similar to that of classical algorithmics would be desirable. With this work we contribute to the development of such a complexity theory for search heuristics. We show by means of several examples that existing models do not always capture the difficulty of a problem satisfactorily. We therefore propose an additional model. In our ranking-based black-box model, the algorithms do not learn exact function values, but only the ranking of the search points queried so far. For some problems, this model gives a better estimate of the difficulty. However, we also show that even in the new model there are problems whose complexity is rated too low. Our results also have a game-theoretic aspect. Optimal winning strategies for the guesser in the game Mastermind (also known as SuperHirn) with n positions correspond exactly to optimal algorithms for maximizing OneMax_n functions. This connection has seemingly been overlooked until now. This thesis is written in English.
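    The stated Mastermind/OneMax correspondence can be checked directly in a few lines: with 2 colors, the black-peg answer to a query q against a secret code z is the number of agreeing positions, i.e. exactly the value of the generalized OneMax function with optimum z evaluated at q (the helper names below are illustrative):

    ```python
    import random

    def black_pegs(secret, query):
        # Mastermind black-peg answer: positions where query matches the secret
        return sum(s == q for s, q in zip(secret, query))

    def onemax_z(z, x):
        # Generalized OneMax with optimum z: n minus the Hamming distance to z
        return len(z) - sum(a != b for a, b in zip(z, x))

    # The two quantities coincide on random 2-color codes of length n
    rng = random.Random(0)
    n = 12
    z = [rng.randint(0, 1) for _ in range(n)]
    q = [rng.randint(0, 1) for _ in range(n)]
    assert black_pegs(z, q) == onemax_z(z, q)
    ```
    
    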

    Theoretical foundations of artificial immune systems

    Artificial immune systems (AIS) are a special class of biologically inspired algorithms, based on the immune system of vertebrates. The field constitutes a relatively new and emerging area of research in Computational Intelligence that has achieved various promising results in different areas of application, e.g., learning, classification, anomaly detection, and (function) optimization. An increasingly and often stated problem of the field is the lack of a theoretical basis for AIS, as most work so far has concentrated only on the direct application of immune principles. In this thesis, we concentrate on optimization applications of AIS. With respect to this application area, previous work mainly covers convergence analysis. To the best of our knowledge, this thesis constitutes the first rigorous runtime analyses of immune-inspired operators and thus adds substantially to the much-needed theoretical foundation of AIS. We consider two very common aspects of AIS. On the one hand, we provide a theoretical analysis of different hypermutation operators frequently employed in AIS. On the other hand, we examine a popular diversity mechanism named aging. We compare our findings with corresponding results from the analysis of other nature-inspired randomized search heuristics, in particular evolutionary algorithms. Moreover, we focus on the practical implications of our theoretical results in order to bridge the gap between theory and practice. To this end, we derive guidelines for parameter settings and point out typical situations in which certain concepts seem promising. These analyses contribute to the understanding of how AIS actually work and in which applications they outperform other randomized search heuristics.
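    One common immune-inspired operator of the kind analyzed in such work is somatic contiguous hypermutation, which flips a random contiguous (possibly wrapping) block of bits; the sketch below is a generic illustration of that idea, with names and parameterization of our own choosing rather than the thesis's exact operator:

    ```python
    import random

    def contiguous_hypermutation(x, rng=random):
        """Flip a random contiguous block of bits in the bit string x.
        The block has a random start position and a random length in
        [0, n], and may wrap around the end of the string."""
        n = len(x)
        start = rng.randrange(n)           # random start position
        length = rng.randrange(n + 1)      # random block length
        y = x[:]
        for i in range(length):
            y[(start + i) % n] ^= 1        # flip every bit in the block
        return y

    # Applying the operator to the all-zeros string: the one-bits in the
    # result mark exactly the mutated (flipped) block.
    x = [0] * 8
    y = contiguous_hypermutation(x, rng=random.Random(0))
    ```
    
    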