7 research outputs found

    Evolutionary Algorithms for Quantum Computers

    In this article, we formulate and study quantum analogues of randomized search heuristics that use Grover search (Grover, in Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212-219, ACM, New York, 1996) to accelerate the search for improved offspring. We then specialize this formulation to two specific search heuristics: Random Local Search and the (1+1) Evolutionary Algorithm. We call the resulting quantum versions of these search heuristics Quantum Local Search and the (1+1) Quantum Evolutionary Algorithm. We conduct a rigorous runtime analysis of these quantum search heuristics in the computation model of quantum algorithms, which, besides classical computation steps, also permits steps unique to quantum computing devices. To this end, we study the six elementary pseudo-Boolean optimization problems OneMax, LeadingOnes, Discrepancy, Needle, Jump, and TinyTrap. It turns out that the advantage of the respective quantum search heuristic over its classical counterpart varies with the problem structure and ranges from no speedup at all for the problem Discrepancy to exponential speedup for the problem TinyTrap. We show that these runtime behaviors are closely linked to the probabilities of performing successful mutations in the classical algorithms.
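    To make the classical baseline concrete, here is a minimal Python sketch of the (1+1) Evolutionary Algorithm on OneMax, assuming the standard bit-flip mutation rate 1/n and elitist acceptance; the function names are illustrative. The paper's (1+1) Quantum Evolutionary Algorithm keeps this structure but uses Grover search to find an improving offspring instead of repeated classical sampling.

    ```python
    import random

    def one_max(x):
        """OneMax fitness: the number of 1-bits in the bit string."""
        return sum(x)

    def one_plus_one_ea(n, max_iters=100_000, rng=random.Random(0)):
        """Classical (1+1) EA on OneMax: flip each bit independently with
        probability 1/n and keep the offspring if it is at least as good."""
        parent = [rng.randint(0, 1) for _ in range(n)]
        f_parent = one_max(parent)
        for _ in range(max_iters):
            if f_parent == n:  # global optimum of OneMax reached
                break
            offspring = [1 - b if rng.random() < 1.0 / n else b for b in parent]
            f_off = one_max(offspring)
            if f_off >= f_parent:  # elitist acceptance
                parent, f_parent = offspring, f_off
        return parent, f_parent

    # The quantum variant studied in the article accelerates the inner search
    # for an improving offspring via Grover search, which is where the
    # problem-dependent speedups come from.
    if __name__ == "__main__":
        print(one_plus_one_ea(50)[1])
    ```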

    New upper and lower bounds for randomized and quantum local search

    The Local Search problem, which asks for a local minimum of a black-box function on a given graph, is of both practical and theoretical importance to combinatorial optimization, complexity theory, and many other areas in theoretical computer science. In this paper, we study the problem in both the randomized and the quantum query models and give new lower and upper bound techniques in both models. The lower bound technique works for any graph that contains a product graph as a subgraph. Applying it to the Boolean hypercube {0,1}^n and the constant-dimensional grids [n]^d, two particular product graphs that recently drew much attention, we get the following tight results: RLS({0,1}^n) = Θ(2^{n/2} n^{1/2}), QLS({0,1}^n) = Θ(2^{n/3} n^{1/6}), RLS([n]^d) = Θ(n^{d/2}) for d ≄ 4, and QLS([n]^d) = Θ(n^{d/3}) for d ≄ 6. Here RLS(G) and QLS(G) are the randomized and quantum query complexities of Local Search on G, respectively. These results improve the previous results by Aaronson [2], Ambainis (unpublished), and Santha and Szegedy [20]. Our new algorithms work well when the underlying graph expands slowly. As an application to [n]^2, a new quantum algorithm using O(√n (log log n)^{1.5}) queries is given. This improves the previous best known upper bound of O(n^{2/3}) (Aaronson [2]), and implies that Local Search on grids exhibits different properties in low dimensions.
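    As a rough illustration of the query model (not of the paper's query-optimal algorithms), the following Python sketch runs a naive local search on the Boolean hypercube {0,1}^n and counts black-box queries; the function names and the example objective are illustrative.

    ```python
    import random

    def hypercube_neighbors(x):
        """All vertices of {0,1}^n at Hamming distance 1 from x."""
        return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

    def naive_local_search(f, n, rng=random.Random(0)):
        """Walk to strictly better neighbors until none exists.
        Returns a local minimum of the black box f and the number of queries used."""
        queries = 0
        def query(v):
            nonlocal queries
            queries += 1
            return f(v)
        x = tuple(rng.randint(0, 1) for _ in range(n))
        fx = query(x)
        while True:
            improved = False
            for y in hypercube_neighbors(x):
                fy = query(y)
                if fy < fx:  # strictly better neighbor: move there
                    x, fx, improved = y, fy, True
                    break
            if not improved:  # no better neighbor: x is a local minimum
                return x, fx, queries

    # Example black box: the number of 1-bits, whose only local minimum is the all-zeros vertex.
    if __name__ == "__main__":
        x, fx, used = naive_local_search(lambda v: sum(v), 12)
        print(fx, used)
    ```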

    Toward a complexity theory for randomized search heuristics: black-box models

    Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. While several strong runtime bounds exist, a powerful complexity theory for such algorithms is yet to be developed. We contribute to this goal in several aspects. In a first step, we analyze existing black-box complexity models. Our results indicate that these models are not restrictive enough. This remains true if we restrict the memory of the algorithms under consideration. These results motivate us to enrich the existing notions of black-box complexity by the additional restriction that the algorithms may take into account not the actual objective values, but only the relative quality of the previously evaluated solutions. Many heuristics belong to this class of algorithms. We show that our ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold. Surprisingly, our results have an interesting game-theoretic aspect as well. We show that analyzing the black-box complexity of the OneMax_n function class, a class often regarded to analyze how heuristics progress in easy parts of the search space, is the same as analyzing optimal winning strategies for the generalized Mastermind game with 2 colors and length-n codewords. This connection was seemingly overlooked so far in the search heuristics community.

    Randomized search heuristics are versatile algorithms that, owing to their high flexibility, are widely used, not only in industrial contexts. Despite numerous successful applications, the runtime analysis of such heuristics is still in its infancy. In particular, we lack a good understanding of the situations in which problem-independent heuristics can deliver good solutions within a short runtime. A complexity theory similar to the one that exists in classical algorithmics would be desirable. With this thesis we contribute to the development of such a complexity theory for search heuristics. Using several examples, we show that existing models do not always capture the difficulty of a problem satisfactorily. We therefore propose an additional model. In our ranking-based black-box model, the algorithms learn not the exact function values but only the ranking of the search points queried so far. For some problems, this model gives a better estimate of the difficulty. However, we also show that even in the new model there are problems whose complexity is assessed as too low. Our results also have a game-theoretic aspect. Optimal winning strategies for the codebreaker in the Mastermind game (also known as SuperHirn) with n positions correspond exactly to optimal algorithms for maximizing OneMax_n functions. This connection has apparently been overlooked so far. This thesis is written in English.
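    A minimal Python sketch of the stated OneMax/Mastermind correspondence, assuming a hidden bit string as the secret and 2-color Mastermind feedback that counts exact matches; the helper names are illustrative.

    ```python
    def onemax_oracle(z):
        """Black box for OneMax_z: the value of a query x is the number of
        positions where x agrees with the hidden bit string z
        (the classical OneMax is the special case z = 1^n)."""
        return lambda x: sum(int(a == b) for a, b in zip(x, z))

    def mastermind_feedback(secret, guess):
        """2-color Mastermind with length-n codewords: the feedback is the
        number of exact matches, i.e. exactly the OneMax_secret value of the guess."""
        return sum(int(a == b) for a, b in zip(secret, guess))

    # The two oracles coincide, which is the correspondence highlighted in the thesis:
    # an optimal guessing strategy for 2-color Mastermind is an optimal black-box
    # algorithm for the OneMax_n function class, and vice versa.
    if __name__ == "__main__":
        secret = [1, 0, 1, 1, 0, 0, 1, 0]
        guess = [1, 1, 1, 0, 0, 0, 0, 0]
        assert onemax_oracle(secret)(guess) == mastermind_feedback(secret, guess)
        print(mastermind_feedback(secret, guess))
    ```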