12 research outputs found

    Insertion Sort is O(n log n)

    Traditional Insertion Sort runs in O(n^2) time because each insertion takes O(n) time. When people run Insertion Sort in the physical world, they leave gaps between items to accelerate insertions. Gaps help in computers as well. This paper shows that Gapped Insertion Sort has insertion times of O(log n) with high probability, yielding a total running time of O(n log n) with high probability. Comment: 6 pages, LaTeX. In Proceedings of the Third International Conference on Fun With Algorithms, FUN 2004.
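    The idea is easy to sketch in code. Below is a minimal Python illustration, assuming gaps are modeled as None slots and rebalancing happens whenever the element count reaches a power of two; the linear scans keep the sketch short, whereas the paper binary-searches the gapped array to get its O(log n) high-probability insertion bound. This is an illustration of the technique, not the paper's implementation.

    ```python
    def library_sort(items, gap_factor=2):
        """Gapped (Library) Insertion Sort, sketched for clarity.

        Elements live in a sparse array with None gaps, so an insertion
        only shifts elements until it reaches the nearest gap."""
        arr = []   # sparse array; None marks a gap
        n = 0      # number of stored elements

        def rebalance():
            # Spread elements out, leaving gap_factor - 1 gaps after each.
            nonlocal arr
            spread = []
            for v in arr:
                if v is not None:
                    spread.append(v)
                    spread.extend([None] * (gap_factor - 1))
            arr = spread

        for item in items:
            if n and n & (n - 1) == 0:   # rebalance when n is a power of two
                rebalance()
            # First slot whose stored value is >= item (linear scan here;
            # the paper binary-searches the gapped array).
            i = 0
            while i < len(arr) and (arr[i] is None or arr[i] < item):
                i += 1
            # Find the nearest gap at or after i, then shift arr[i:j] into it.
            j = i
            while j < len(arr) and arr[j] is not None:
                j += 1
            if j == len(arr):
                arr.append(None)
            arr[i + 1 : j + 1] = arr[i:j]
            arr[i] = item
            n += 1
        return [v for v in arr if v is not None]

    # library_sort([5, 2, 9, 1, 7]) -> [1, 2, 5, 7, 9]
    ```

    Each insertion only shifts elements up to the nearest gap rather than the whole suffix of the array, which is what makes insertions cheap when gaps are well distributed.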

    Enhancements In Sorting Algorithms: A Review

    One of the important issues in designing algorithms is arranging a list of items in a particular order. Although there are a large number of sorting algorithms, the sorting problem has attracted a great deal of research, because efficient sorting is important for optimizing the use of other algorithms. In many applications, sorting plays an important role in making data easy to handle by arranging it in ascending or descending order. [2] In this paper, we present enhancements to various sorting algorithms such as bubble sort, insertion sort, selection sort, and merge sort. A sorting algorithm consists of comparison, swap, and assignment operations. Bubble sort, selection sort, and insertion sort are easy to understand but have a worst-case time complexity of O(n^2). The new algorithms are discussed, analyzed, tested, and executed for reference. Enhanced selection sort makes the algorithm slightly faster and stable. Modified bubble sort is a modification of both the bubble sort and selection sort algorithms, with O(n log n) complexity instead of the O(n^2) of bubble sort and selection sort. [3] [1]
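    The abstract does not spell out the paper's exact modifications, so as a flavor of this kind of enhancement, here is the classic early-exit refinement of bubble sort in Python (an illustrative example, not the paper's algorithm): each pass remembers the position of its last swap and shrinks the next scan to it, and the sort stops as soon as a pass makes no swaps.

    ```python
    def bubble_sort_enhanced(a):
        """Early-exit bubble sort: everything after the last swap of a
        pass is already in place, so the next pass can stop there."""
        n = len(a)
        while n > 1:
            last_swap = 0
            for i in range(1, n):
                if a[i - 1] > a[i]:
                    a[i - 1], a[i] = a[i], a[i - 1]
                    last_swap = i
            n = last_swap          # 0 when no swaps: list already sorted
        return a
    ```

    On nearly-sorted input this variant approaches O(n) time, although its worst case remains O(n^2).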

    In search of the fastest sorting algorithm

    This paper explores in a chronological way the concepts, structures, and algorithms that programmers and computer scientists have tried out in their attempt to produce and improve the process of sorting. The measure of ‘fastness’ in this paper is mainly given in terms of the Big O notation.

    Online List Labeling with Predictions

    A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, n items arrive over time and must be stored in sorted order in an array of size Theta(n). The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typically done in a practical use case.
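    To make the cost model concrete, here is a small Python sketch of the problem being solved, assuming the naive dense baseline with no gaps and no predictions (the function name and the relabel counter are illustrative): every arrival shifts all later elements, so it pays Theta(n) relabels per insertion in the worst case, which is exactly the cost the paper's data structure minimizes.

    ```python
    import bisect

    def naive_list_labeling(stream):
        """Items arrive online and must sit in sorted order in an array;
        an element's slot is its label, and the cost is the total number
        of elements moved (relabeled)."""
        arr, relabels = [], 0
        for x in stream:
            i = bisect.bisect_left(arr, x)
            relabels += len(arr) - i   # every later element shifts one slot
            arr.insert(i, x)
        return arr, relabels

    # naive_list_labeling([3, 1, 2]) -> ([1, 2, 3], 2)
    ```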

    Packed Memory Arrays – Rewired

    The physical memory layout of a tree-based index structure deteriorates as it sustains updates, to the point that sequential scans at the physical level become non-sequential and therefore slower. Packed Memory Arrays (PMAs) prevent this by managing all data in a single sequential sparse array. PMAs have been studied mostly theoretically but suffer from practical problems, as we show in this paper. We study and fix these problems, resulting in an improved data structure: the Rewired Memory Array (RMA). We compare RMA with the main previous PMA implementations as well as state-of-the-art tree index structures and show, on a wide variety of data and query distributions, that RMA reaches competitive update and point-lookup performance while always providing superior scan performance, close to that of dense column scans.
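    The core PMA idea can be sketched compactly. The Python below is a simplified illustration rather than the paper's RMA: it keeps sorted elements in one contiguous sparse array (gaps as None) and redistributes them globally when a density bound is exceeded, whereas real PMAs, and RMA, use calibrated per-segment density thresholds and windowed rebalances. The class and parameter names are illustrative.

    ```python
    class SimplePMA:
        """Simplified Packed Memory Array: sorted elements in one
        contiguous sparse array, so scans stay sequential in memory."""

        def __init__(self, capacity=8, max_density=0.7):
            self.slots = [None] * capacity   # None marks a gap
            self.n = 0
            self.max_density = max_density

        def insert(self, x):
            if self.n + 1 > self.max_density * len(self.slots):
                self._spread(2 * len(self.slots))    # grow and redistribute
            # First slot holding a value >= x (linear scan for brevity).
            i = 0
            while i < len(self.slots) and (
                self.slots[i] is None or self.slots[i] < x
            ):
                i += 1
            # Shift a short run of elements into the nearest gap.
            j = i
            while j < len(self.slots) and self.slots[j] is not None:
                j += 1
            if j < len(self.slots):                  # gap to the right of i
                self.slots[i + 1 : j + 1] = self.slots[i:j]
                self.slots[i] = x
            else:                                    # all gaps lie left of i
                g = i - 1
                while self.slots[g] is not None:
                    g -= 1
                self.slots[g : i - 1] = self.slots[g + 1 : i]
                self.slots[i - 1] = x
            self.n += 1

        def _spread(self, new_cap):
            # Redistribute elements evenly across a larger array.
            vals = [v for v in self.slots if v is not None]
            self.slots = [None] * new_cap
            step = new_cap / max(1, len(vals))
            for k, v in enumerate(vals):
                self.slots[int(k * step)] = v

        def scan(self):
            # The whole structure is one array: a fully sequential pass.
            return [v for v in self.slots if v is not None]
    ```

    For example, inserting 5, 2, 9 into a fresh SimplePMA and calling scan() yields [2, 5, 9], and the scan touches one contiguous memory region regardless of the update history.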

    A Packed Memory Array to Keep Moving Particles Sorted

    Neighbor identification is the most computationally intensive step in particle-based simulations. To contain its cost, a common approach is to use a regular grid to sort particles according to the cell they belong to; neighbor search then only needs to test the particles contained in a constant number of cells. During the simulation, usually only a small fraction of the particles move between consecutive steps. Exploiting this temporal coherence to reduce the maintenance cost of the acceleration data structure is difficult, as it usually triggers costly dynamic memory allocations or data moves. In this paper we propose to rely on a Packed Memory Array (PMA) to efficiently keep particles sorted according to their cell index. The PMA maintains gaps in the particle array that make it possible to keep particles sorted with O(log^2 n) amortized data moves. We further improve the original PMA data structure to support efficient batch data moves. Experiments show that the PMA can outperform a compact sorted array even when up to 50% of the elements move.
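    As a concrete illustration of the cell-sorted layout, here is a Python sketch of grid-based neighbor search over particles kept sorted by cell index. A plain sorted list of (cell_index, position) pairs stands in for the PMA, and the names (cell_of, neighbor_candidates, cell_size, grid_dim) are illustrative, not the paper's API.

    ```python
    import bisect

    def cell_of(p, cell_size, grid_dim):
        # Row-major index of the grid cell containing 2-D position p.
        return int(p[1] // cell_size) * grid_dim + int(p[0] // cell_size)

    def neighbor_candidates(particles, p, cell_size, grid_dim):
        """Grid-based neighbor search over `particles`, a list of
        (cell_index, position) pairs kept in sorted order (here a plain
        list; a PMA in the paper). Only the 3x3 block of cells around
        p's cell needs to be tested."""
        cx, cy = int(p[0] // cell_size), int(p[1] // cell_size)
        out = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if not (0 <= cx + dx < grid_dim and 0 <= cy + dy < grid_dim):
                    continue                      # skip cells off the grid
                c = (cy + dy) * grid_dim + (cx + dx)
                lo = bisect.bisect_left(particles, (c,))
                hi = bisect.bisect_left(particles, (c + 1,))
                out.extend(pos for _, pos in particles[lo:hi])
        return out
    ```

    Because the pairs are sorted by cell index, each of the nine cells is a contiguous run found by binary search; preserving that contiguity cheaply as particles move between cells is exactly what the PMA's gaps provide.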