
    Minimum and maximum against k lies

    A neat 1972 result of Pohl asserts that $\lceil 3n/2 \rceil - 2$ comparisons are sufficient, and also necessary in the worst case, for finding both the minimum and the maximum of an $n$-element totally ordered set. The set is accessed via an oracle for pairwise comparisons. More recently, the problem has been studied in the context of the Rényi-Ulam liar games, where the oracle may give up to $k$ false answers. For large $k$, an upper bound due to Aigner shows that $(k+O(\sqrt{k}))n$ comparisons suffice. We improve on this by providing an algorithm with at most $(k+1+C)n+O(k^3)$ comparisons for some constant $C$. The known lower bounds are of the form $(k+1+c_k)n-D$ for some constant $D$, where $c_0=0.5$, $c_1=23/32=0.71875$, and $c_k=\Omega(2^{-5k/4})$ as $k$ goes to infinity. Comment: 11 pages, 3 figures
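    The lie-free bound comes from the classic pairwise strategy: compare the two elements of each pair against each other, then compare the smaller one only against the current minimum and the larger one only against the current maximum. A minimal sketch of that strategy follows (it uses exactly $\lceil 3n/2 \rceil - 2$ comparisons but does not handle the lying oracle considered in the paper):

        def min_and_max(a):
            """Find both the minimum and the maximum of a non-empty sequence.

            Classic pairwise strategy behind Pohl's ceil(3n/2) - 2 bound; the
            lying-oracle variants studied in the paper are not handled here.
            """
            n = len(a)
            if n % 2 == 0:
                # One comparison to initialise from the first pair.
                lo, hi = (a[0], a[1]) if a[0] <= a[1] else (a[1], a[0])
                start = 2
            else:
                lo = hi = a[0]
                start = 1
            # Each remaining pair costs 3 comparisons: one inside the pair,
            # one against the current minimum, one against the current maximum.
            for i in range(start, n - 1, 2):
                small, big = (a[i], a[i + 1]) if a[i] <= a[i + 1] else (a[i + 1], a[i])
                if small < lo:
                    lo = small
                if big > hi:
                    hi = big
            return lo, hi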

    On smoothed analysis of quicksort and Hoare's find

    We provide a smoothed analysis of Hoare's find algorithm, and we revisit the smoothed analysis of quicksort. Hoare's find algorithm, often called quickselect or one-sided quicksort, is an easy-to-implement algorithm for finding the k-th smallest element of a sequence. While the worst-case number of comparisons that Hoare's find needs is Theta(n^2), the average-case number is Theta(n). We analyze what happens between these two extremes by providing a smoothed analysis. In the first perturbation model, an adversary specifies a sequence of n numbers in [0,1], and then, to each number of the sequence, we add a random number drawn independently from the interval [0,d]. We prove that Hoare's find needs Theta((n/(d+1)) sqrt(n/d) + n) comparisons in expectation if the adversary may also specify the target element (even after seeing the perturbed sequence), and slightly fewer comparisons for finding the median. In the second perturbation model, each element is marked with probability p, and then a random permutation is applied to the marked elements. We prove that the expected number of comparisons to find the median is Omega(((1-p)n/p) log n). Finally, we provide lower bounds for the smoothed number of comparisons of quicksort and Hoare's find for the median-of-three pivot rule, which usually yields faster algorithms than always selecting the first element: the pivot is the median of the first, middle, and last element of the sequence. We show that median-of-three does not yield a significant improvement over the classic rule.
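    For reference, the algorithm under analysis is only a few lines long. Below is an illustrative sketch of Hoare's find with the median-of-three pivot rule mentioned above (pivot = median of the first, middle, and last element of the current range); the perturbation models of the smoothed analysis are not modeled, and the name hoare_find is ours.

        def hoare_find(a, k):
            """Return the k-th smallest element of a (0-indexed), quickselect style."""
            a = list(a)  # work on a copy
            lo, hi = 0, len(a) - 1
            while True:
                if lo == hi:
                    return a[lo]
                # Median-of-three pivot rule: first, middle, and last element of the range.
                mid = (lo + hi) // 2
                pivot = sorted((a[lo], a[mid], a[hi]))[1]
                # Hoare-style partition around the pivot value.
                i, j = lo, hi
                while i <= j:
                    while a[i] < pivot:
                        i += 1
                    while a[j] > pivot:
                        j -= 1
                    if i <= j:
                        a[i], a[j] = a[j], a[i]
                        i += 1
                        j -= 1
                # Continue (iteratively) in the part containing position k.
                if k <= j:
                    hi = j
                elif k >= i:
                    lo = i
                else:
                    return a[k]  # everything strictly between j and i equals the pivot

    For the median, one would call hoare_find(a, len(a) // 2).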

    Optimal lower bounds for universal relation, and for samplers and finding duplicates in streams

    In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x, y \in \{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ for failure probability $\delta$. Our lower bound holds even if promised $\mathop{support}(y) \subset \mathop{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0 \le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if promised $x \in \{0,1\}^n$ at all points in the stream. We give two different proofs of our main result. The first proof demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but done carefully so as to not violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our second proof is via a novel randomized reduction from Augmented Indexing [MNSW98] which needs to interact with $\mathcal{A}$ adaptively. To handle the adaptivity we identify certain likely interaction patterns and union bound over them to guarantee correct interaction on all of them. To guarantee correctness, it is important that the interaction hides some of its randomness from $\mathcal{A}$ in the reduction. Comment: merge of arXiv:1703.08139 and of work of Kapralov, Woodruff, and Yahyazade
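    To make the communication problem concrete, the trivial deterministic one-way protocol simply has Alice send all n bits of x. The sketch below only illustrates the problem statement and the naive n-bit upper bound against which the $\Theta(\min\{n,\log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ result should be compared; it is not the paper's randomized protocol or its lower-bound machinery.

        def alice_message(x):
            """Trivial protocol for UR: Alice sends her entire input (n bits)."""
            return x

        def bob_output(message, y):
            """Bob must output an index i with x_i != y_i, promised that x != y."""
            x = message
            return next(i for i in range(len(y)) if x[i] != y[i])

        # Example: x and y differ (only) in position 2.
        x = [0, 1, 1, 0]
        y = [0, 1, 0, 0]
        assert bob_output(alice_message(x), y) == 2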

    Memory-Adjustable Navigation Piles with Applications to Sorting and Convex Hulls

    We consider space-bounded computations on a random-access machine (RAM) where the input is given on a read-only random-access medium, the output is to be produced to a write-only sequential-access medium, and the available workspace allows random reads and writes but is of limited capacity. The length of the input is $N$ elements, the length of the output is limited by the computation, and the capacity of the workspace is $O(S)$ bits for some predetermined parameter $S$. We present a state-of-the-art priority queue, called an adjustable navigation pile, for this restricted RAM model. Under some reasonable assumptions, our priority queue supports $\mathit{minimum}$ and $\mathit{insert}$ in $O(1)$ worst-case time and $\mathit{extract}$ in $O(N/S + \lg S)$ worst-case time for any $S \geq \lg N$. We show how to use this data structure to sort $N$ elements and to compute the convex hull of $N$ points in the two-dimensional Euclidean space in $O(N^2/S + N \lg S)$ worst-case time for any $S \geq \lg N$. Following a known lower bound for the space-time product of any branching program for finding unique elements, both our sorting and convex-hull algorithms are optimal. The adjustable navigation pile has turned out to be useful when designing other space-efficient algorithms, and we expect that it will find its way to yet other applications. Comment: 21 pages
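    The flavor of this restricted RAM model can be conveyed by a much simpler (and slightly weaker) multi-pass sort: keep a heap of at most s elements in workspace, scan the read-only input once per pass, and emit the next s smallest elements to the sequential output. This naive sketch, which is ours and not the navigation pile, runs in $O((N^2/S)\lg S)$ time, a logarithmic factor away from the bound above, and assumes distinct keys for simplicity.

        import heapq

        def space_bounded_sort(read_input, n, s):
            """Sort n elements from a read-only input keeping only O(s) elements in workspace."""
            last = None      # largest value emitted so far
            emitted = 0
            output = []      # stands in for the write-only sequential-access output
            while emitted < n:
                # One pass: collect the s smallest elements strictly greater than `last`.
                batch = []   # max-heap (via negation) holding the current best candidates
                for i in range(n):
                    x = read_input(i)
                    if last is not None and x <= last:
                        continue
                    if len(batch) < s:
                        heapq.heappush(batch, -x)
                    elif x < -batch[0]:
                        heapq.heapreplace(batch, -x)
                chunk = sorted(-v for v in batch)
                output.extend(chunk)
                emitted += len(chunk)
                last = chunk[-1]
            return output

        # Example: the "read-only medium" is modeled by indexed access into a list.
        data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
        assert space_bounded_sort(lambda i: data[i], len(data), 3) == sorted(data)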

    New results about multi-band uncertainty in Robust Optimization

    "The Price of Robustness" by Bertsimas and Sim represented a breakthrough in the development of a tractable robust counterpart of Linear Programming Problems. However, the central modeling assumption that the deviation band of each uncertain parameter is single may be too limitative in practice: experience indeed suggests that the deviations distribute also internally to the single band, so that getting a higher resolution by partitioning the band into multiple sub-bands seems advisable. The critical aim of our work is to close the knowledge gap about the adoption of a multi-band uncertainty set in Robust Optimization: a general definition and intensive theoretical study of a multi-band model are actually still missing. Our new developments have been also strongly inspired and encouraged by our industrial partners, which have been interested in getting a better modeling of arbitrary distributions, built on historical data of the uncertainty affecting the considered real-world problems. In this paper, we study the robust counterpart of a Linear Programming Problem with uncertain coefficient matrix, when a multi-band uncertainty set is considered. We first show that the robust counterpart corresponds to a compact LP formulation. Then we investigate the problem of separating cuts imposing robustness and we show that the separation can be efficiently operated by solving a min-cost flow problem. Finally, we test the performance of our new approach to Robust Optimization on realistic instances of a Wireless Network Design Problem subject to uncertainty.Comment: 15 pages. The present paper is a revised version of the one appeared in the Proceedings of SEA 201