7 research outputs found

    Minimum and maximum against k lies

    A neat 1972 result of Pohl asserts that $\lceil 3n/2 \rceil - 2$ comparisons are sufficient, and also necessary in the worst case, for finding both the minimum and the maximum of an n-element totally ordered set. The set is accessed via an oracle for pairwise comparisons. More recently, the problem has been studied in the context of the Rényi-Ulam liar games, where the oracle may give up to k false answers. For large k, an upper bound due to Aigner shows that $(k+O(\sqrt{k}))n$ comparisons suffice. We improve on this by providing an algorithm with at most $(k+1+C)n+O(k^3)$ comparisons for some constant C. The known lower bounds are of the form $(k+1+c_k)n-D$, for some constant D, where $c_0=0.5$, $c_1=23/32=0.71875$, and $c_k=\Omega(2^{-5k/4})$ as k goes to infinity. Comment: 11 pages, 3 figures
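    As an illustration of the classical bound mentioned above, here is a minimal Python sketch of the lie-free pairwise strategy behind Pohl's result (it is not the k-lie algorithm of the paper): elements are consumed in pairs, costing roughly three comparisons per pair instead of four, for about ⌈3n/2⌉ - 2 comparisons in total.

```python
def min_max_pairwise(items):
    """Find both the minimum and maximum of a sequence using roughly
    ceil(3n/2) - 2 comparisons (the classical lie-free strategy):
    compare elements in pairs, then compare the smaller of each pair
    against the running minimum and the larger against the running maximum."""
    items = list(items)
    n = len(items)
    if n == 0:
        raise ValueError("empty input")
    # Seed min/max from the first pair if n is even, from the first element otherwise.
    if n % 2 == 0:
        lo, hi = (items[0], items[1]) if items[0] <= items[1] else (items[1], items[0])
        start = 2
    else:
        lo = hi = items[0]
        start = 1
    # Process the remaining elements two at a time: 3 comparisons per pair instead of 4.
    for i in range(start, n - 1, 2):
        a, b = items[i], items[i + 1]
        small, large = (a, b) if a <= b else (b, a)
        if small < lo:
            lo = small
        if large > hi:
            hi = large
    return lo, hi
```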

    Error-Correcting Tournaments

    We present a family of pairwise tournaments reducing k-class classification to binary classification. These reductions are provably robust against a constant fraction of binary errors. The results improve on the PECOC construction \cite{SECOC} with an exponential improvement in computation, from $O(k)$ to $O(\log_2 k)$, and the removal of a square root in the regret dependence, matching the best possible computation and regret up to a constant. Comment: Minor wording improvement
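    To convey the flavour of a pairwise tournament reduction, here is a hedged toy sketch (not the paper's error-correcting construction): a single-elimination tournament over the k labels, where each match queries a possibly noisy binary classifier several times and takes a majority vote. The `binary_oracle` callable, the `repeats` parameter, and the noisy example oracle are illustrative assumptions.

```python
import random

def tournament_predict(classes, binary_oracle, repeats=3):
    """Toy single-elimination tournament: pick a winner among `classes`
    by asking a (possibly noisy) binary oracle which of two labels it
    prefers, repeating each match and taking a majority vote.
    This only illustrates the flavour of pairwise tournaments; the
    paper's error-correcting construction is more sophisticated."""
    labels = list(classes)
    while len(labels) > 1:
        next_round = []
        for i in range(0, len(labels) - 1, 2):
            a, b = labels[i], labels[i + 1]
            votes = sum(1 for _ in range(repeats) if binary_oracle(a, b) == a)
            next_round.append(a if votes * 2 > repeats else b)
        if len(labels) % 2 == 1:      # an odd label gets a bye into the next round
            next_round.append(labels[-1])
        labels = next_round
    return labels[0]

# Hypothetical usage: an oracle that prefers the larger label but errs 20% of the time.
def noisy_oracle(a, b):
    truth = max(a, b)
    return truth if random.random() > 0.2 else min(a, b)

print(tournament_predict(range(8), noisy_oracle, repeats=5))
```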

    Microscopy as a statistical, Rényi-Ulam, half-lie game: a new heuristic search strategy to accelerate imaging

    Finding a fluorescent target in a biological environment is a common and pressing microscopy problem. This task is formally analogous to the canonical search problem. In ideal (noise-free, truthful) search problems, the well-known binary search is optimal. The case of half-lies, where one of two responses to a search query may be deceptive, introduces a richer, Rényi-Ulam problem and is particularly relevant to practical microscopy. We analyse microscopy in the contexts of Rényi-Ulam games and half-lies, developing a new family of heuristics. We show the cost of insisting on verification by positive result in search algorithms; in the zero-half-lie case, bisectioning with verification incurs a 50% penalty in the average number of queries required. The optimal partitioning of search spaces directly following verification in the presence of random half-lies is determined. Trisectioning with verification is shown to be the most efficient heuristic of the family in a majority of cases.
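    The following is a small, hypothetical sketch of "verification by positive result" under the half-lie model described above (truthful "yes" answers, occasionally false "no" answers). It shows bisection with verification rather than the paper's trisectioning heuristic, and the oracle, target, and error rate are made-up parameters.

```python
import random

def bisect_with_verification(n, oracle):
    """Locate a target index in range(n) using an oracle that answers
    'is the target in this set?'. In the half-lie model, 'yes' answers
    are always truthful but 'no' answers may occasionally be false, so
    the search only descends into a sub-interval after a positive
    answer on it -- verification by positive result."""
    lo, hi = 0, n
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(range(lo, mid)):        # positive answers are trusted
            hi = mid
        else:
            queries += 1
            if oracle(range(mid, hi)):    # verify the other half positively
                lo = mid
            # if both halves answer 'no', at least one answer was a half-lie: retry
    return lo, queries

# Hypothetical half-lying oracle: truthful 'yes', but a 10% chance of a false 'no'.
target = 37
def oracle(interval):
    truth = target in interval
    return truth and (random.random() > 0.1)

print(bisect_with_verification(128, oracle))
```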

    Error Detection in Number-Theoretic and Algebraic Algorithms

    CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this, this thesis defines a statistical measure called robustness and measures the robustness of several number-theoretic and algebraic algorithms. Consider an algorithm A that implements a function f, such that f has range O and A has range O', where O ⊆ O'. That is, the algorithm may produce results which are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories: 1. Correct and feasible -- the algorithm computes the correct result; 2. Incorrect and feasible -- the algorithm computes an incorrect result, but the output is in O; 3. Incorrect and infeasible -- the algorithm computes an incorrect result and the output is in O'\O. Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pepin tests), group order and quadratic residues. Moreover, we show that there is typically an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
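    As a toy illustration of this three-way classification (not the thesis's actual algorithms or robustness measure), the sketch below injects random bit flips into a modular squaring computation and uses Euler's criterion to decide whether an incorrect output is still feasible, i.e. still a quadratic residue modulo p. The fault model, parameters, and function names are assumptions.

```python
import random

def noisy_square_mod(x, p, flip_prob=1e-3):
    """Compute x*x mod p, but flip each bit of the intermediate product
    independently with probability flip_prob, modelling a faulty CPU."""
    prod = x * x
    for bit in range(prod.bit_length() + 1):
        if random.random() < flip_prob:
            prod ^= 1 << bit
    return prod % p

def classify(x, p, trials=100_000, flip_prob=1e-3):
    """Monte Carlo estimate of how often the noisy squaring is
    (1) correct, (2) incorrect but feasible -- the wrong answer is still
    a quadratic residue mod p, so it lies in the function's range -- or
    (3) incorrect and infeasible -- a non-residue, detectable by
    Euler's criterion."""
    truth = (x * x) % p
    counts = {"correct": 0, "incorrect_feasible": 0, "incorrect_infeasible": 0}
    for _ in range(trials):
        out = noisy_square_mod(x, p, flip_prob)
        if out == truth:
            counts["correct"] += 1
        elif pow(out, (p - 1) // 2, p) == 1:   # Euler's criterion: out is a residue
            counts["incorrect_feasible"] += 1
        else:
            counts["incorrect_infeasible"] += 1
    return counts

# Hypothetical parameters: a Mersenne prime modulus and an arbitrary input.
print(classify(x=123456789, p=2**61 - 1))
```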