4 research outputs found

    An Efficient Multiway Mergesort for GPU Architectures

    Sorting is a primitive operation that is a building block for countless algorithms. As such, it is important to design sorting algorithms that approach peak performance on a range of hardware architectures. Graphics Processing Units (GPUs) are particularly attractive architectures as they provide massive parallelism and computing power. However, the intricacies of their compute and memory hierarchies make designing GPU-efficient algorithms challenging. In this work we present GPU Multiway Mergesort (MMS), a new GPU-efficient multiway mergesort algorithm. MMS employs a new partitioning technique that exposes the parallelism needed by modern GPU architectures. To the best of our knowledge, MMS is the first sorting algorithm for the GPU that is asymptotically optimal in terms of global memory accesses and that is completely free of shared memory bank conflicts. We realize an initial implementation of MMS, evaluate its performance on three modern GPU architectures, and compare it to competitive implementations available in state-of-the-art GPU libraries. Despite these implementations being highly optimized, MMS compares favorably, achieving performance improvements for most random inputs. Furthermore, unlike MMS, state-of-the-art algorithms are susceptible to bank conflicts. We find that for certain inputs that cause these algorithms to incur large numbers of bank conflicts, MMS can achieve up to a 37.6% speedup over its fastest competitor. Overall, even though its current implementation is not fully optimized, due to its efficient use of the memory hierarchy MMS outperforms the fastest comparison-based sorting implementations available to date.
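    The abstract contains no code; purely as a rough illustration, the Java sketch below shows the sequential k-way merge primitive on which any multiway mergesort is built. The class and method names are invented for this example, and none of the GPU-specific machinery the paper describes (the partitioning technique, bank-conflict-free shared-memory access, global-memory optimality) is reproduced here.

    import java.util.PriorityQueue;

    public class KWayMerge {
        // Merge k sorted runs into one sorted array using a min-heap of run heads.
        // This is only the sequential idea behind multiway merging, not MMS itself.
        public static int[] merge(int[][] runs) {
            int total = 0;
            for (int[] r : runs) total += r.length;
            int[] out = new int[total];

            // Heap entries: {value, run index, position within run}
            PriorityQueue<int[]> heap =
                new PriorityQueue<>((x, y) -> Integer.compare(x[0], y[0]));
            for (int i = 0; i < runs.length; i++)
                if (runs[i].length > 0) heap.add(new int[]{runs[i][0], i, 0});

            int n = 0;
            while (!heap.isEmpty()) {
                int[] top = heap.poll();
                out[n++] = top[0];                       // emit the smallest head
                int run = top[1], pos = top[2] + 1;
                if (pos < runs[run].length)              // refill from the same run
                    heap.add(new int[]{runs[run][pos], run, pos});
            }
            return out;
        }
    }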

    A full parallel Quicksort algorithm for multicore processors

    The problem addressed in this paper is that we want to sort an integer array a[] of length n in parallel on a multicore machine with p cores using Quicksort. Amdahl’s law tells us that the inherent sequential part of any algorithm will in the end dominate and limit the speedup we get from parallelisation. This paper introduces ParaQuick, a fully parallel Quicksort algorithm for use on an ordinary shared-memory multicore machine that has just a few simple statements in its sequential part. It can be seen as an improvement over the traditional parallelisation of Quicksort, where one follows the sequential algorithm and substitutes the recursive calls at the top of the recursion tree with the creation of parallel threads. The ParaQuick algorithm starts with k parallel threads, where k is a multiple of p (here k = 8*p), in a k-way partition of the original array with the same pivot value, so we get 2k partitioned areas in the first pass. We then calculate the pivot index, i.e. where the division between the small and large elements would have been if this had been an ordinary sequential Quicksort partition. In full parallel we then swap all small elements to the right of this pivot index with the large elements to the left of it; these two ‘displaced’ sets are by definition of equal size. We can then recursively sort the left part with half of the threads and the right part with the other half (more details and synchronization considerations are given in the paper). Finally, when only one thread is left working on such an area, sequential Quicksort and Insertionsort are used, as in the traditional way of doing parallel Quicksort. In the last part of the paper, this new algorithm is empirically tested against two other algorithms and Arrays.sort from the Java library. Five different distributions of the numbers to be sorted and three different machines with p = 2 (4 hyper-threaded), 4 (8) and 32 (64) are tested. Finally, conclusions are presented and an explanation is given of why this ParaQuick algorithm is so much faster than a traditional parallelisation for large values of n and some distributions.
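    As a hedged illustration of the central swap step described above, the Java sketch below computes the global pivot index m (the number of ‘small’ elements) and then exchanges large elements found to the left of m with small elements found to the right of m, sequentially and on a plain array. The class and method names are invented; the k-way partition into 2k areas, the thread start-up, and the synchronization described in the paper are omitted.

    public class ParaQuickStep {
        // Given a pivot value, compute the global pivot index m and swap every
        // "large" element left of m with a "small" element right of m.
        // The two displaced sets have equal size by a simple counting argument.
        static int swapDisplaced(int[] a, int pivot) {
            int m = 0;                                  // number of elements <= pivot
            for (int x : a) if (x <= pivot) m++;

            int left = 0, right = m;
            while (true) {
                while (left < m && a[left] <= pivot) left++;           // large element misplaced on the left
                while (right < a.length && a[right] > pivot) right++;  // small element misplaced on the right
                if (left >= m || right >= a.length) break;
                int t = a[left]; a[left] = a[right]; a[right] = t;     // swap the displaced pair
                left++; right++;
            }
            return m;   // now a[0..m) <= pivot and a[m..n) > pivot; recurse on each half
        }
    }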

    A faster all parallel Mergesort algorithm for multicore processors

    The problem addressed in this paper is that we want to sort an integer array a[] of length n in parallel on a multicore machine with p cores using mergesort. Amdahl’s law tells us that the inherent sequential part of any algorithm will in the end dominate and limit the speedup we get from parallelisation. This paper introduces ParaMerge, an all-parallel mergesort algorithm for use on an ordinary shared-memory multicore machine that has just a few simple statements in its sequential part. The new algorithm is all-parallel in the sense that, by recursive descent, it is two-parallel in the top node, four-parallel on the next level of the recursion, then eight-parallel, and so on, until we have started at least one thread for each of the p cores. After parallelisation, each thread uses sequential recursive mergesort with a variant of insertion sort for sorting short subsections at the end. ParaMerge can be seen as an improvement over the traditional parallelisation of mergesort, where one follows the sequential algorithm and substitutes the recursive calls at the top of the recursion tree with the creation of parallel threads. This traditional parallel mergesort finally does a sequential merging of the two sorted halves of a[]; only at the next level does it go two-parallel, then four-parallel on the level below, and so on. After parallelisation, my implementation of this traditional algorithm also uses the same sequential mergesort and insertion sort algorithms as ParaMerge in each thread. There are two main improvements in ParaMerge. The first is the observation that merging can be done both from the start, left to right, picking the smallest elements of the two sections to be merged, and at the same time from the end of the same sections, right to left, picking the largest elements. The second improvement is that the contract between a node and its two sub-nodes is changed. In a traditional parallelisation, a node is given a section of a[], sorts it by merging the two sorted halves it recursively receives from its own two sub-nodes, and returns it to its mother node. In ParaMerge, each of the two sub-nodes receives from its own two sub-nodes a full sorting of the section it got from its mother node (so this problem is already solved). Every node has a twin node. In parallel, these two twin nodes then merge their two sorted sections, one from the left and the other from the right as described above. The two twin sub-nodes have then sorted the whole section given to their common mother node; this also holds for the top node. We have thus raised the level of parallelisation by a factor of two at each level at the top of the recursion tree. The ParaMerge algorithm also contains other improvements, such as a controlled sorting back and forth between a[] and a scratch area b[] of the same size, such that the sorted result always ends up in a[] without any copying, and a special insertion sort that is central for achieving this copy-free feature. ParaMerge is compared with other published algorithms, and in only one case is one of the ‘new’ features of ParaMerge found; this other algorithm is described and compared in some detail. Finally, ParaMerge is empirically compared with three other algorithms, sorting arrays of length n = 10, 20, …, 50 million and up to 1000 million when p = 32, demonstrating that it is significantly faster than two other merge algorithms, the sequential and the traditional parallel algorithm, and Arrays.sort(), a sequential Quicksort algorithm from the Java library.
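    As a hedged illustration of the first improvement described above (two-ended merging), the Java sketch below merges two sorted sections into a scratch array by filling the output from both ends: one pass takes the smallest elements from the left, the other takes the largest from the right. In the paper these two passes are run by twin threads in parallel; here they are shown sequentially, the names are invented, and the copy-free back-and-forth between a[] and b[] and the special insertion sort are not reproduced.

    public class TwoEndedMerge {
        // Merge sorted a[lo..mid) and a[mid..hi) into b[lo..hi), filling b from
        // both ends: the left half with the smallest elements in increasing order,
        // the right half with the largest elements in decreasing order.
        static void merge(int[] a, int lo, int mid, int hi, int[] b) {
            int half = (hi - lo) / 2;

            // "Left" worker: pick the half smallest elements, left to right.
            int i = lo, j = mid;
            for (int k = lo; k < lo + half; k++)
                b[k] = (j >= hi || (i < mid && a[i] <= a[j])) ? a[i++] : a[j++];

            // "Right" twin: pick the remaining largest elements, right to left.
            int p = mid - 1, q = hi - 1;
            for (int k = hi - 1; k >= lo + half; k--)
                b[k] = (p < lo || (q >= mid && a[q] >= a[p])) ? a[q--] : a[p--];
        }
    }

    Ties are resolved so that the left pass prefers the first section and the right pass prefers the second, which keeps the two output halves complementary; together they consume each input element exactly once.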

    Sorting networks using k-comparators

    Bibliography: leaves 160-167