310 research outputs found

    An In-Place Sorting with O(n log n) Comparisons and O(n) Moves

    We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, e.g., in [J.I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374-93, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
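
    The trade-off this result eliminates can be seen in a toy Java sketch (below; this is not the paper's algorithm): selection sort already achieves O(n) element moves, since it performs at most n - 1 swaps, but pays Θ(n²) comparisons for that, while Mergesort or Heapsort achieve O(n log n) comparisons at the cost of Θ(n log n) moves. The paper's algorithm is the first in-place method meeting both bounds simultaneously.

```java
/** Toy illustration (NOT the paper's algorithm): selection sort already makes only
 *  O(n) element moves (at most n - 1 swaps) but needs Theta(n^2) comparisons.
 *  The paper's contribution is achieving O(n log n) comparisons and O(n) moves at once. */
final class MoveComparisonDemo {
    /** Sorts a in place and returns {comparisons, element moves}. */
    static long[] selectionSortCounts(int[] a) {
        long comparisons = 0, moves = 0;
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                comparisons++;                       // one comparison per probed element
                if (a[j] < a[min]) min = j;
            }
            if (min != i) {                          // at most one swap per outer iteration
                int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
                moves += 3;                          // count individual element transports
            }
        }
        return new long[] { comparisons, moves };
    }
}
```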

    Evaluating Lossy Collections for Java Applications

    We propose to remove live objects from near-full heaps in order to reduce memory pressure. We modify the Java Collections classes to enable this lossy behavior, and we find that some DaCapo benchmarks tolerate a certain amount of live-data loss.
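
    As a rough illustration only (the abstract does not describe the paper's actual mechanism or eviction policy), a "lossy" collection can be pictured as a wrapper that discards some entries when the JVM heap is nearly full. The class name LossyList, the pressure threshold, and the drop-the-oldest-10% policy below are assumptions made for this sketch.

```java
import java.util.ArrayList;

/** Hypothetical sketch of a lossy collection: drops its oldest entries
 *  when the JVM heap usage crosses a configurable threshold. */
final class LossyList<E> {
    private final ArrayList<E> data = new ArrayList<>();
    private final double pressureThreshold;   // fraction of max heap considered "near full"

    LossyList(double pressureThreshold) { this.pressureThreshold = pressureThreshold; }

    void add(E e) {
        if (heapIsNearFull()) {
            // Lossy behavior: discard the oldest ~10% of entries instead of growing further.
            int drop = Math.max(1, data.size() / 10);
            data.subList(0, Math.min(drop, data.size())).clear();
        }
        data.add(e);
    }

    E get(int i) { return data.get(i); }
    int size()   { return data.size(); }

    private boolean heapIsNearFull() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory() > pressureThreshold;
    }
}
```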

    Worst-Case Efficient Sorting with QuickMergesort

    The two most prominent solutions for the sorting problem are Quicksort and Mergesort. While Quicksort is very fast on average, Mergesort additionally gives worst-case guarantees, but needs extra space for a linear number of elements. Worst-case efficient in-place sorting, however, remains a challenge: the standard solution, Heapsort, suffers from bad cache behavior and is also not overly fast for in-cache instances. In this work we present median-of-medians QuickMergesort (MoMQuickMergesort), a new variant of QuickMergesort, which combines Quicksort with Mergesort, allowing the latter to be implemented in place. Our new variant applies the median-of-medians algorithm for selecting pivots in order to circumvent the quadratic worst case. Indeed, we show that it uses at most n log n + 1.6n comparisons for n large enough. We experimentally confirm the theoretical estimates and show that the new algorithm outperforms Heapsort by far and is only around 10% slower than Introsort (the std::sort implementation of libstdc++), which has a rather poor guarantee for the worst case. We also simulate the worst case, which is only around 10% slower than the average case. In particular, the new algorithm is a natural candidate to replace Heapsort as a worst-case stopper in Introsort.
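
    To make the pivot-selection step concrete, here is a textbook median-of-medians selection routine in Java (a sketch, not the paper's implementation; MoMQuickMergesort applies this idea to pivot selection inside the sort). It returns the element that would occupy index k if a[lo..hi] were sorted, in worst-case linear time, which is what rules out the quadratic worst case.

```java
import java.util.Arrays;

/** Textbook deterministic selection by median of medians (groups of five). */
final class MedianOfMedians {
    /** Returns the element that would be at index k if a[lo..hi] were sorted
     *  (lo <= k <= hi); rearranges that range as a side effect. */
    static int select(int[] a, int lo, int hi, int k) {
        while (lo < hi) {
            int pivot = pivotValue(a, lo, hi);
            int p = partition(a, lo, hi, pivot);
            if (k == p) return a[p];
            if (k < p) hi = p - 1; else lo = p + 1;
        }
        return a[lo];
    }

    /** Sort groups of five, gather each group's median at the front of the range,
     *  then recursively select the median of those medians as the pivot value. */
    private static int pivotValue(int[] a, int lo, int hi) {
        int n = hi - lo + 1;
        if (n <= 5) {
            Arrays.sort(a, lo, hi + 1);
            return a[lo + n / 2];
        }
        int medians = 0;
        for (int i = lo; i <= hi; i += 5) {
            int end = Math.min(i + 4, hi);
            Arrays.sort(a, i, end + 1);
            swap(a, lo + medians, i + (end - i) / 2);   // move group median to the front
            medians++;
        }
        return select(a, lo, lo + medians - 1, lo + medians / 2);
    }

    /** Lomuto-style partition around the supplied pivot value; returns its final index. */
    private static int partition(int[] a, int lo, int hi, int pivot) {
        for (int i = lo; i <= hi; i++) {                // move one copy of the pivot to the end
            if (a[i] == pivot) { swap(a, i, hi); break; }
        }
        int store = lo;
        for (int i = lo; i < hi; i++) {
            if (a[i] < pivot) swap(a, i, store++);
        }
        swap(a, store, hi);
        return store;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```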

    Optimum Partition Parameter of Divide-and-Conquer Algorithm for Solving Closest-Pair Problem

    Divide and conquer is a well-known algorithmic technique for solving many kinds of problems. In this technique, the problem is repeatedly partitioned into two parts until it becomes trivially solvable. Finding the distance between the closest pair of points is a classic problem in computer science, and it can be solved with a divide-and-conquer algorithm that likewise splits the problem space into two parts until the subproblems are trivially solvable. However, it has been observed both theoretically and in practice that partitioning the problem space into more than two parts can sometimes give better performance. In this paper we propose that dividing the problem space into n parts can give better results when a divide-and-conquer algorithm is used to solve the closest-pair-of-points problem. Comment: arXiv admin note: substantial text overlap with arXiv:1010.590
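
    For reference, the standard two-way version that the paper generalizes works as sketched below (a compact Java sketch of the classic algorithm in its simpler O(n log² n) form; the multi-way partitioning studied in the paper is not shown): sort the points by x, split at the median, solve each half recursively, then check only the points lying within the current best distance of the dividing line, scanned in y-order.

```java
import java.util.Arrays;
import java.util.Comparator;

/** Classic two-way divide-and-conquer closest pair; points are {x, y} arrays. */
final class ClosestPair {
    static double closest(double[][] points) {
        double[][] byX = points.clone();
        Arrays.sort(byX, Comparator.comparingDouble((double[] p) -> p[0]));
        return solve(byX, 0, byX.length - 1);
    }

    private static double solve(double[][] p, int lo, int hi) {
        if (hi - lo + 1 <= 3) return bruteForce(p, lo, hi);   // trivially solvable base case
        int mid = (lo + hi) / 2;
        double midX = p[mid][0];
        double d = Math.min(solve(p, lo, mid), solve(p, mid + 1, hi));
        final double bound = d;

        // Points within the current best distance of the dividing line, sorted by y.
        double[][] strip = Arrays.stream(p, lo, hi + 1)
                .filter(q -> Math.abs(q[0] - midX) < bound)
                .sorted(Comparator.comparingDouble((double[] q) -> q[1]))
                .toArray(double[][]::new);

        // Each strip point only needs to be checked against the few following
        // points whose y-distance is still below the current best distance.
        for (int i = 0; i < strip.length; i++)
            for (int j = i + 1; j < strip.length && strip[j][1] - strip[i][1] < d; j++)
                d = Math.min(d, dist(strip[i], strip[j]));
        return d;
    }

    private static double bruteForce(double[][] p, int lo, int hi) {
        double best = Double.POSITIVE_INFINITY;
        for (int i = lo; i <= hi; i++)
            for (int j = i + 1; j <= hi; j++)
                best = Math.min(best, dist(p[i], p[j]));
        return best;
    }

    private static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```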

    Heaps and heapsort on secondary storage

    A heap structure designed for secondary storage is suggested that tries to make the best use of the available buffer space in primary memory. The heap is a complete multi-way tree, with multi-page blocks of records as nodes, satisfying a generalized heap property. A special feature of the tree is that the nodes may be partially filled, as in B-trees. The structure is complemented with the priority-queue operations insert and delete-max. When handling a sequence of S operations, the number of page transfers performed is shown to be O(∑_{i=1}^{S} (1/P) log_{M/P}(N_i/P)), where P denotes the number of records fitting into a page, M the capacity of the buffer space in records, and N_i the number of records in the heap prior to the i-th operation (assuming P ⩾ 1 and S > M ⩾ c·P, where c is a small positive constant). The number of comparisons required when handling the sequence is O(∑_{i=1}^{S} log_2 N_i). Using the suggested data structure we obtain an optimal external heapsort that performs O((N/P) log_{M/P}(N/P)) page transfers and O(N log_2 N) comparisons in the worst case when sorting N records.
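
    The "complete multi-way tree satisfying a generalized heap property" can be illustrated in memory by an ordinary d-ary max-heap, as in the Java sketch below (an illustration only; the paper's structure additionally packs multi-page blocks of records into each node, allows partially filled nodes as in B-trees, and accounts for page transfers rather than in-memory operations).

```java
import java.util.ArrayList;
import java.util.NoSuchElementException;

/** Minimal d-ary max-heap stored in an array: node i has children d*i+1 .. d*i+d. */
final class DaryMaxHeap {
    private final int d;                              // branching factor
    private final ArrayList<Integer> a = new ArrayList<>();

    DaryMaxHeap(int d) { this.d = d; }

    void insert(int x) {
        a.add(x);
        int i = a.size() - 1;
        while (i > 0) {                               // sift up while larger than the parent
            int parent = (i - 1) / d;
            if (a.get(parent) >= a.get(i)) break;
            swap(i, parent);
            i = parent;
        }
    }

    int deleteMax() {
        if (a.isEmpty()) throw new NoSuchElementException();
        int max = a.get(0);
        int last = a.remove(a.size() - 1);
        if (!a.isEmpty()) {
            a.set(0, last);
            int i = 0;
            while (true) {                            // sift down toward the largest child
                int best = i;
                for (int c = d * i + 1; c <= d * i + d && c < a.size(); c++) {
                    if (a.get(c) > a.get(best)) best = c;
                }
                if (best == i) break;
                swap(i, best);
                i = best;
            }
        }
        return max;
    }

    private void swap(int i, int j) {
        int t = a.get(i); a.set(i, a.get(j)); a.set(j, t);
    }
}
```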

    Weak heaps and friends: recent developments
