
    Optimal Algorithms for 2 x n AB Games -- A Graph-Partition Approach

    This paper presents new and systematic methodologies to analyze deductive games and obtain optimal algorithms for $2 \times n$ AB games, where $n \geq 2$. We have invented a graphic model to represent the game-guessing process. With this novel approach, we find some symmetric and recursive structures in the process. This not only reduces the size of the search space, but also helps us to derive the optimal strategies more efficiently. By using this technique, we develop optimal strategies for $2 \times n$ AB games in the expected and worst cases, and are able to derive the following new results: (1) $\lceil n/2 \rceil + 1$ guesses are necessary and sufficient for $2 \times n$ AB games in the worst case; (2) the minimum expected number of guesses required for $2 \times n$ AB games is $(4n^3 + 21n^2 - 76n + 72)/(12n(n-1))$ if $n$ is even, and $(4n^3 + 21n^2 - 82n + 105)/(12n(n-1))$ if $n$ is odd. The optimization of this problem bears resemblance to other computational problems, such as circuit testing, differential cryptanalysis, on-line models with equivalent queries, and additive search problems. Conclusions about this kind of deductive game may therefore be applied, although probably not directly, to these problems, as well as to other combinatorial optimization problems.
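    The closed-form bounds above are easy to evaluate. The following small Python sketch (illustrative only, not from the paper; the function names are hypothetical) tabulates the worst-case and expected-case guess counts for a few board sizes n.

        # Bounds for the 2 x n AB game, taken directly from the abstract (n >= 2).
        def worst_case_guesses(n: int) -> int:
            # ceil(n/2) + 1 guesses are necessary and sufficient in the worst case.
            return -(-n // 2) + 1

        def expected_guesses(n: int) -> float:
            # Minimum expected number of guesses under optimal play.
            if n % 2 == 0:
                return (4*n**3 + 21*n**2 - 76*n + 72) / (12*n*(n - 1))
            return (4*n**3 + 21*n**2 - 82*n + 105) / (12*n*(n - 1))

        for n in (2, 3, 10, 11):
            print(n, worst_case_guesses(n), round(expected_guesses(n), 4))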

    Reducing the Arity in Unbiased Black-Box Complexity

    We show that for all $1 < k \leq \log n$ the $k$-ary unbiased black-box complexity of the $n$-dimensional OneMax function class is $O(n/k)$. This indicates that the power of higher arity operators is much stronger than what the previous $O(n/\log k)$ bound by Doerr et al. (Faster black-box algorithms through higher arity operators, Proc. of FOGA 2011, pp. 163--172, ACM, 2011) suggests. The key to this result is an encoding strategy, which might be of independent interest. We show that, using $k$-ary unbiased variation operators only, we may simulate an unrestricted memory of size $O(2^k)$ bits. Comment: An extended abstract of this paper has been accepted for inclusion in the proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2012).
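    To make the gap between the two bounds concrete, here is a small illustrative Python sketch (not from the paper; all constant factors are dropped) comparing the earlier $O(n/\log k)$ bound with the new $O(n/k)$ bound for a few arities k.

        import math

        # Asymptotic bounds on the k-ary unbiased black-box complexity of OneMax,
        # with constant factors ignored (illustrative comparison only).
        def old_bound(n: int, k: int) -> float:
            return n / math.log2(k)     # Doerr et al., FOGA 2011

        def new_bound(n: int, k: int) -> float:
            return n / k                # the bound shown in this paper

        n = 10**6
        for k in (2, 4, 16, 20):
            print(k, round(old_bound(n, k)), new_bound(n, k))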

    Learning Character Strings via Mastermind Queries, with a Case Study Involving mtDNA

    We study the degree to which a character string, $Q$, leaks details about itself any time it engages in comparison protocols with strings provided by a querier, Bob, even if those protocols are cryptographically guaranteed to produce no additional information other than the scores that assess the degree to which $Q$ matches the strings offered by Bob. We show that such scenarios allow Bob to play variants of the game of Mastermind with $Q$ so as to learn the complete identity of $Q$. We show that there are a number of efficient implementations for Bob to employ in these Mastermind attacks, depending on the knowledge he has about the structure of $Q$; these show how quickly he can determine $Q$. Indeed, we show that Bob can discover $Q$ using a number of rounds of test comparisons that is much smaller than the length of $Q$, under reasonable assumptions about the types of scores returned by the cryptographic protocols and about whether he can use knowledge of the distribution that $Q$ comes from. We also provide the results of a case study we performed on a database of mitochondrial DNA, showing the vulnerability of existing real-world DNA data to the Mastermind attack. Comment: Full version of a related paper appearing in IEEE Symposium on Security and Privacy 2009, "The Mastermind Attack on Genomic Data." This version corrects the proofs of what are now Theorems 2 and 4.
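    The flavor of such an attack can be seen in the following deliberately naive Python sketch (illustrative only, not one of the paper's efficient algorithms): the only feedback is a comparison score counting exactly matching positions, yet the hidden string is fully recovered with at most (|alphabet| - 1) * |Q| + 1 queries.

        ALPHABET = "ACGT"   # illustrative alphabet, e.g. DNA bases

        def score(secret: str, guess: str) -> int:
            # Oracle: the comparison score, here the number of exactly matching positions.
            return sum(s == g for s, g in zip(secret, guess))

        def recover(oracle, length: int) -> str:
            base = ALPHABET[0] * length
            base_score = oracle(base)
            recovered = []
            for i in range(length):
                found = ALPHABET[0]
                for c in ALPHABET[1:]:
                    probe = base[:i] + c + base[i + 1:]
                    if oracle(probe) == base_score + 1:   # c is the symbol at position i
                        found = c
                        break
                recovered.append(found)
            return "".join(recovered)

        secret = "GATTACA"
        print(recover(lambda g: score(secret, g), len(secret)))   # prints GATTACA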

    OneMax in Black-Box Models with Several Restrictions

    Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each one designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, we consider in this work how the combination of several algorithmic restrictions influences the black-box complexity. Our testbed is the class of so-called OneMax functions, a classical set of test functions that is intimately related to classic coin-weighing problems and to the board game Mastermind. We analyze in particular the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While both its isolated memory-restricted and its ranking-based black-box complexity for bit strings of length $n$ are only of order $n/\log n$, the combined model does not allow for algorithms that are faster than linear in $n$, as can be seen by standard information-theoretic considerations. We show that this linear bound is indeed asymptotically tight. Similar results are obtained for other memory and offspring sizes. Our results also apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model, in which only the best-so-far solution can be kept in the memory. Finally, we also provide improved lower bounds for the complexity of OneMax in the regarded models. Our result enlivens the quest for natural evolutionary algorithms optimizing OneMax in $o(n \log n)$ iterations. Comment: This is the full version of a paper accepted to GECCO 201
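    For contrast with the black-box lower bounds discussed above, the following minimal Python sketch (illustrative, not from the paper) shows the OneMax function together with the classical (1+1) evolutionary algorithm, a natural elitist heuristic that needs on the order of $n \log n$ iterations on OneMax in expectation.

        import random

        def onemax(x):
            # Number of one-bits; the optimum is the all-ones string.
            return sum(x)

        def one_plus_one_ea(n: int, seed: int = 0) -> int:
            rng = random.Random(seed)
            x = [rng.randint(0, 1) for _ in range(n)]
            iterations = 0
            while onemax(x) < n:
                # Standard bit mutation: flip each bit independently with probability 1/n.
                y = [b ^ (rng.random() < 1.0 / n) for b in x]
                if onemax(y) >= onemax(x):   # elitist acceptance: keep the better of parent and offspring
                    x = y
                iterations += 1
            return iterations

        print(one_plus_one_ea(100))   # typically on the order of a thousand iterations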