
    QuickHeapsort: Modifications and improved analysis

    We present a new analysis for QuickHeapsort, splitting it into the analysis of the partition phases and the analysis of the heap phases. This enables us to consider samples of non-constant size for the pivot selection and leads to better theoretical bounds for the algorithm. Furthermore, we introduce some modifications of QuickHeapsort, both in-place and using n extra bits. We show that on every input the expected number of comparisons is n lg n - 0.03n + o(n) (in-place) and n lg n - 0.997n + o(n) (with n extra bits), respectively. Both estimates improve the previously known best results. (It is conjectured in Wegener93 that the in-place algorithm Bottom-Up-Heapsort uses at most n lg n + 0.4n comparisons on average, and for Weak-Heapsort, which uses n extra bits, the average number of comparisons is shown to be at most n lg n - 0.42n in EdelkampS02.) Moreover, our non-in-place variant can even compete with index-based Heapsort variants (e.g. Rank-Heapsort in WangW07) and Relaxed-Weak-Heapsort (n lg n - 0.9n + o(n) comparisons in the worst case), for which no O(n) bound on the number of extra bits is known.
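
    The improved bounds hinge on drawing the pivot from a sample whose size grows with n rather than staying constant. As a minimal sketch of that ingredient (the helper name and the choice of a sqrt(n)-sized sample are ours; the paper analyzes general sample sizes), one could take the median of a random sample:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Hypothetical helper: choose a pivot as the median of a random sample
// of roughly sqrt(n) elements. Sampled with replacement for brevity.
int pivotFromSample(const std::vector<int>& a, std::mt19937_64& rng) {
    assert(!a.empty());
    std::size_t s = std::max<std::size_t>(
        1, static_cast<std::size_t>(std::sqrt(static_cast<double>(a.size()))));
    std::uniform_int_distribution<std::size_t> pick(0, a.size() - 1);
    std::vector<int> sample(s);
    for (int& x : sample) x = a[pick(rng)];
    // Median of the sample in O(s) expected time.
    std::nth_element(sample.begin(), sample.begin() + s / 2, sample.end());
    return sample[s / 2];
}
```

    A larger sample makes the pivot concentrate around the true median, which is what tightens the bounds on the subsequent phases in the analysis.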

    An In-Place Sorting with O(n log n) Comparisons and O(n) Moves

    We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, e.g., in [J. I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374-393, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
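
    The in-place construction in the paper is intricate; as a hedged point of contrast, the classical non-in-place route to O(n log n) comparisons with O(n) element moves sorts an index array first (only indices move during the sort) and then applies the resulting permutation with cycle rotations. A sketch (function name ours):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Illustrative only, not the paper's algorithm: it uses O(n) extra
// words of index storage. Comparisons touch elements, but the sort
// itself only moves indices; elements move O(n) times at the end.
void sortWithFewMoves(std::vector<int>& a) {
    const std::size_t n = a.size();
    std::vector<std::size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    // O(n log n) element comparisons, zero element moves.
    std::sort(idx.begin(), idx.end(),
              [&a](std::size_t i, std::size_t j) { return a[i] < a[j]; });
    // Apply the permutation with cycle rotations: each array slot is
    // written once, plus one saved value per cycle, i.e. O(n) moves.
    std::vector<bool> done(n, false);
    for (std::size_t s = 0; s < n; ++s) {
        if (done[s] || idx[s] == s) continue;
        int saved = a[s];
        std::size_t i = s;
        while (idx[i] != s) {
            a[i] = a[idx[i]];
            done[i] = true;
            i = idx[i];
        }
        a[i] = saved;
        done[i] = true;
    }
}
```

    The open problem was precisely to achieve both bounds without the O(n) words of index storage, i.e. with O(1) extra space.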

    Worst-Case Efficient Sorting with QuickMergesort

    The two most prominent solutions for the sorting problem are Quicksort and Mergesort. While Quicksort is very fast on average, Mergesort additionally gives worst-case guarantees, but needs extra space for a linear number of elements. Worst-case efficient in-place sorting, however, remains a challenge: the standard solution, Heapsort, suffers from bad cache behavior and is also not overly fast for in-cache instances. In this work we present median-of-medians QuickMergesort (MoMQuickMergesort), a new variant of QuickMergesort, which combines Quicksort with Mergesort, allowing the latter to be implemented in place. Our new variant applies the median-of-medians algorithm for selecting pivots in order to circumvent the quadratic worst case. Indeed, we show that it uses at most n log n + 1.6n comparisons for n large enough. We experimentally confirm the theoretical estimates and show that the new algorithm outperforms Heapsort by far and is only around 10% slower than Introsort (the std::sort implementation of libstdc++), which has a rather poor guarantee for the worst case. We also simulate the worst case, which is only around 10% slower than the average case. In particular, the new algorithm is a natural candidate to replace Heapsort as a worst-case stopper in Introsort.
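
    The trick that lets Mergesort run in place here is merging by swaps: elements from a disjoint buffer region are exchanged, never overwritten, so the buffer's contents survive in permuted order. The sketch below illustrates that primitive under our own layout assumptions (the indices p, l1, l2, b are illustrative; the paper's actual procedure differs in detail):

```cpp
#include <algorithm>
#include <vector>

// Merge the sorted runs a[p, p+l1) and a[p+l1, p+l1+l2) using the
// disjoint region a[b, b+l1) as a swap buffer. Elements are only ever
// exchanged, never overwritten, so the buffer's contents survive in
// permuted order -- the property that makes the merge usable in place.
void swapMerge(std::vector<int>& a, std::size_t p,
               std::size_t l1, std::size_t l2, std::size_t b) {
    for (std::size_t k = 0; k < l1; ++k)   // park run 1 in the buffer
        std::swap(a[p + k], a[b + k]);
    std::size_t i = b, j = p + l1, o = p;  // read run 1, read run 2, write
    while (i < b + l1 && j < p + l1 + l2) {
        if (a[i] <= a[j]) std::swap(a[o++], a[i++]);
        else              std::swap(a[o++], a[j++]);
    }
    while (i < b + l1) std::swap(a[o++], a[i++]);
    // Any leftover of run 2 is already in its final position.
}
```

    In QuickMergesort the buffer is simply a not-yet-sorted part of the array itself, which is why no extra memory is needed.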

    The Cost of Address Translation

    Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and virtual memory. In this paper, we address the computational cost of address translation in virtual memory. The starting point for our work is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms. (An extended abstract of this paper was published in the proceedings of ALENEX 2013, New Orleans, US.)
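
    The mispredicted growth rates are easy to reproduce. Below is a minimal sketch of the kind of measurement that motivates the model (array size and constants are arbitrary choices of ours): time a sequential scan against a random scan of the same array and watch the gap widen as the array outgrows TLB reach.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Sums the same array twice: once in address order, once in a random
// order. Both are Theta(n) on a RAM, yet the random scan slows down
// disproportionately once the array exceeds what the TLB can map --
// the effect the VAT model charges for explicitly.
int main() {
    const std::size_t n = std::size_t{1} << 25;  // ~32M ints; adjust to taste
    std::vector<int> data(n, 1);
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});

    auto timed = [](auto&& body) {
        auto t0 = std::chrono::steady_clock::now();
        body();
        return std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - t0).count();
    };

    long long sum = 0;
    double seq = timed([&] { for (std::size_t i = 0; i < n; ++i) sum += data[i]; });
    double rnd = timed([&] { for (std::size_t i = 0; i < n; ++i) sum += data[order[i]]; });
    std::printf("sequential %.3f s, random %.3f s (checksum %lld)\n", seq, rnd, sum);
    return 0;
}
```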

    Strengthened Lazy Heaps: Surpassing the Lower Bounds for Binary Heaps

    Let n denote the number of elements currently in a data structure. An in-place heap is stored in the first n locations of an array, uses O(1) extra space, and supports the operations minimum, insert, and extract-min. We introduce an in-place heap for which minimum and insert take O(1) worst-case time, and extract-min takes O(lg n) worst-case time and involves at most lg n + O(1) element comparisons. The achieved bounds are optimal to within additive constant terms for the number of element comparisons. In particular, the comparison bounds for both insert and extract-min, as well as the time bound for insert, surpass the corresponding lower bounds known for binary heaps, though our data structure is similar. In a binary heap, when viewed as a nearly complete binary tree, every node other than the root obeys the heap property, i.e., the element at a node is not smaller than that at its parent. To surpass the lower bound for extract-min, we enforce a stronger property at the bottom levels of the heap: the element at any right child is not smaller than that at its left sibling. To surpass the lower bound for insert, we buffer insertions and allow O(lg^2 n) nodes to violate heap order in relation to their parents.
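
    For context, a classical way to cut comparisons in extract-min is Wegener's bottom-up sift-down (the Bottom-Up-Heapsort mentioned in the first abstract above): descend along the smaller-child path to a leaf, then climb back up to the insertion point. This attains roughly lg n + O(1) comparisons on average, whereas the strengthened sibling invariant of this paper achieves lg n + O(1) in the worst case. The sketch below shows only the classical technique, not the paper's data structure:

```cpp
#include <cstddef>
#include <vector>

// Bottom-up sift-down for a binary min-heap in a[0, n): descend along
// the smaller-child path to a leaf (one comparison per level), then
// climb back up to where x belongs. About lg n + O(1) comparisons on
// average. The path is stored explicitly for clarity (O(lg n) words).
void siftDownBottomUp(std::vector<int>& a, std::size_t n, int x) {
    std::vector<std::size_t> path{0};
    std::size_t i = 0;
    while (2 * i + 2 < n) {                        // both children exist
        i = (a[2 * i + 1] <= a[2 * i + 2]) ? 2 * i + 1 : 2 * i + 2;
        path.push_back(i);
    }
    if (2 * i + 1 < n) path.push_back(2 * i + 1);  // lone left child
    std::size_t k = path.size() - 1;               // climb toward the root
    while (k > 0 && a[path[k]] > x) --k;
    for (std::size_t j = 1; j <= k; ++j)           // shift path up one level
        a[path[j - 1]] = a[path[j]];
    a[path[k]] = x;                                // x lands in the gap
}

int extractMin(std::vector<int>& a) {
    int min = a[0];
    int last = a.back();
    a.pop_back();
    if (!a.empty()) siftDownBottomUp(a, a.size(), last);
    return min;
}
```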