
    On the average running time of odd-even merge sort

    This paper is concerned with the average running time of Batcher's odd-even merge sort when implemented on a collection of processors. We consider the case where $n$, the size of the input, is an arbitrary multiple of the number $p$ of processors used. We show that Batcher's odd-even merge (for two sorted lists of length $n$ each) can be implemented to run in time $O((n/p)\log(2+p^2/n))$ on the average, and that odd-even merge sort can be implemented to run in time $O((n/p)(\log n+\log p\log(2+p^2/n)))$ on the average. In the case of merging (sorting), the average is taken over all possible outcomes of the merging (all possible permutations of $n$ elements). This means that odd-even merge and odd-even merge sort have an optimal average running time if $n\geq p^2$. The constants involved are also quite small.
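    As a point of reference (not taken from the paper), the following is a minimal sequential sketch of Batcher's odd-even merge sort expressed as compare-exchange operations. It assumes the input length is a power of two; on a parallel machine, the compare-exchanges of each stage would be distributed over the $p$ processors.

        def compare_exchange(a, i, j):
            # Order the pair (a[i], a[j]); this is the only data-dependent step.
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]

        def odd_even_merge(a, lo, n, r):
            # Merge the two sorted halves of a[lo:lo+n]; r is the stride of the
            # subsequence being merged (1 for the top-level call).
            step = 2 * r
            if step < n:
                odd_even_merge(a, lo, n, step)        # merge even subsequence
                odd_even_merge(a, lo + r, n, step)    # merge odd subsequence
                for i in range(lo + r, lo + n - r, step):
                    compare_exchange(a, i, i + r)     # fix up neighbouring pairs
            else:
                compare_exchange(a, lo, lo + r)

        def odd_even_merge_sort(a, lo=0, n=None):
            # Recursively sort both halves, then combine them with odd-even merge.
            # Assumes the length n is a power of two.
            if n is None:
                n = len(a)
            if n > 1:
                m = n // 2
                odd_even_merge_sort(a, lo, m)
                odd_even_merge_sort(a, lo + m, m)
                odd_even_merge(a, lo, n, 1)

    For example, odd_even_merge_sort(a) on a list of length 8 performs the same fixed sequence of compare-exchanges regardless of the data, which is what makes the network suitable for parallel implementation.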

    A Parallel Distributed Strategy for Arraying a Scattered Robot Swarm

    We consider the problem of organizing a scattered group of $n$ robots in two-dimensional space, with geometric maximum distance $D$ between robots. The communication graph of the swarm is connected, but there is no central authority for organizing it. We want to arrange the robots into a sorted and equally spaced array between the robots with lowest and highest label, while maintaining a connected communication network. In this paper, we describe a distributed method to accomplish these goals without using central control, while also keeping time, travel distance, and communication cost at a minimum. We proceed in a number of stages (leader election, initial path construction, subtree contraction, geometric straightening, and distributed sorting), none of which requires a central authority, yet the method still achieves the best possible parallelization. The overall arraying is performed in $O(n)$ time, with $O(n^2)$ individual messages and $O(nD)$ travel distance. The implementations of sorting and navigation use communication messages of fixed size and are a practical solution for large populations of low-cost robots.
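    The target configuration can be illustrated with a small, hypothetical computation (not the paper's distributed protocol): once the robots are sorted by label between the lowest- and highest-labelled robots, each robot's goal position is a linear interpolation between the two endpoints.

        def equally_spaced_targets(p_low, p_high, k):
            # Return k target positions, equally spaced on the segment from
            # p_low (lowest-labelled robot) to p_high (highest-labelled robot),
            # endpoints included. Positions are 2D points given as (x, y) tuples.
            (x0, y0), (x1, y1) = p_low, p_high
            if k == 1:
                return [p_low]
            return [(x0 + (x1 - x0) * i / (k - 1),
                     y0 + (y1 - y0) * i / (k - 1)) for i in range(k)]

        # Example: five robots arranged between (0, 0) and (8, 0)
        # -> (0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (6.0, 0.0), (8.0, 0.0)
        print(equally_spaced_targets((0.0, 0.0), (8.0, 0.0), 5))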

    An Enhanced Multiway Sorting Network Based on n-Sorters

    Merging-based sorting networks are an important family of sorting networks. Most merge sorting networks are based on 2-way or multiway merging algorithms that use 2-sorters as basic building blocks. An alternative is to use n-sorters, instead of 2-sorters, as the basic building blocks so as to greatly reduce the number of sorters as well as the latency. Based on a modified version of Leighton's columnsort algorithm, an n-way merging algorithm referred to as SS-Mk, which uses n-sorters as basic building blocks, was previously proposed. In this work, we first propose a new multiway merging algorithm with n-sorters as basic building blocks that merges n sorted lists of m values each in 1 + ceil(m/2) stages (n <= m). Based on our merging algorithm, we also propose a sorting algorithm, which requires $O(N \log^2 N)$ basic sorters to sort N inputs. While the asymptotic complexity (in terms of the required number of sorters) of our sorting algorithm is the same as that of the SS-Mk, for wide ranges of N our algorithm requires fewer sorters than the SS-Mk. Finally, we consider a binary sorting network, where the basic sorter is implemented in threshold logic and scales linearly with the number of inputs, and compare the complexity in terms of the required number of gates. For wide ranges of N, our algorithm requires fewer gates than the SS-Mk.
    Comment: 13 pages, 14 figures.
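    As a software analogue only (the paper's construction is a fixed, data-oblivious network of n-sorters, which is not reproduced here), a multiway merge of n sorted lists can be sketched with a min-heap:

        import heapq

        def multiway_merge(sorted_lists):
            # Merge n sorted lists into one sorted list using a min-heap.
            # A sorting network would instead use a fixed, data-independent
            # pattern of n-sorters, which is the point of the paper above.
            heap = [(lst[0], i, 0) for i, lst in enumerate(sorted_lists) if lst]
            heapq.heapify(heap)
            out = []
            while heap:
                value, i, j = heapq.heappop(heap)
                out.append(value)
                if j + 1 < len(sorted_lists[i]):
                    heapq.heappush(heap, (sorted_lists[i][j + 1], i, j + 1))
            return out

        # Example: merging n = 3 sorted lists of m = 4 values each
        print(multiway_merge([[1, 5, 9, 13], [2, 6, 10, 14], [3, 7, 11, 15]]))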

    Efficient Implementations of Molecular Dynamics Simulations for Lennard-Jones Systems

    Efficient implementations of the classical molecular dynamics (MD) method for Lennard-Jones particle systems are considered. Not only general algorithms but also techniques that are efficient for specific CPU architectures are explained. A simple spatial-decomposition-based strategy is adopted for parallelization. Using the developed code, benchmark simulations are performed on a HITACHI SR16000/J2 system consisting of 4.7 GHz IBM POWER6 processors at the National Institute for Fusion Science (NIFS) and on an SGI Altix ICE 8400EX system consisting of 2.93 GHz Intel Xeon processors at the Institute for Solid State Physics (ISSP), the University of Tokyo. The parallelization efficiency of the largest run, consisting of 4.1 billion particles with 8192 MPI processes, is about 73% relative to that of the smallest run with 128 MPI processes at NIFS, and about 66% relative to that of the smallest run with 4 MPI processes at ISSP. The factors causing the parallel overhead are investigated, and it is found that fluctuations of the execution time of each process degrade the parallel efficiency. These fluctuations may be due to interference from the operating system, known as OS jitter.
    Comment: 33 pages, 19 figures; references added and figures revised.
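    For orientation (a minimal sketch, not the optimized parallel code described in the paper), the Lennard-Jones pair interaction that such a simulation evaluates for every nearby pair of particles is:

        def lj_energy_force(r2, epsilon=1.0, sigma=1.0):
            # Lennard-Jones pair interaction evaluated from the squared distance r2.
            # Returns (potential energy, force magnitude divided by distance), the
            # form commonly used so that the force vector on particle i from j is
            # (fr * dx, fr * dy, fr * dz) with (dx, dy, dz) = r_i - r_j.
            s2 = sigma * sigma / r2
            s6 = s2 * s2 * s2
            s12 = s6 * s6
            energy = 4.0 * epsilon * (s12 - s6)
            fr = 24.0 * epsilon * (2.0 * s12 - s6) / r2
            return energy, fr

        # At the potential minimum r = 2**(1/6) * sigma the force vanishes
        # and the energy equals -epsilon.
        e, fr = lj_energy_force((2 ** (1 / 6)) ** 2)
        print(e, fr)   # approximately -1.0 and 0.0

    In a spatial-decomposition parallelization such as the one described above, each MPI process would evaluate this kernel only for pairs inside (or near the boundary of) its own subdomain.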

    Improving the performance of bubble sort using a modified diminishing increment sorting

    Sorting involves rearranging information into either ascending or descending order. There are many sorting algorithms, among which is Bubble Sort. Bubble Sort is not known to be a very good sorting algorithm because it is beset with redundant comparisons. However, efforts have been made to improve its performance. With Bidirectional Bubble Sort, the average number of comparisons is slightly reduced, and Batcher's sort, similar to Shellsort, performs significantly better than Bidirectional Bubble Sort by carrying out comparisons in a novel way so that no propagation of exchanges is necessary. Bitonic Sort was also presented by Batcher; the strong point of this sorting procedure is that it is very suitable for a hard-wired implementation using a sorting network. This paper presents a meta-algorithm called Oyelami's Sort that combines the technique of Bidirectional Bubble Sort with a modified diminishing increment sorting. The results from the implementation of the algorithm, compared with Batcher's Odd-Even Sort and Batcher's Bitonic Sort, showed that the algorithm performed better than the two in the worst-case scenario. The implication is that the algorithm is faster.
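    For reference, a minimal sketch of Bidirectional Bubble Sort (also known as cocktail shaker sort), the component the paper combines with a modified diminishing increment sorting; the diminishing-increment part is not reproduced here.

        def bidirectional_bubble_sort(a):
            # Alternate left-to-right and right-to-left bubbling passes,
            # shrinking the unsorted window from both ends after each pass.
            left, right = 0, len(a) - 1
            while left < right:
                for i in range(left, right):            # forward pass
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
                right -= 1
                for i in range(right, left, -1):        # backward pass
                    if a[i - 1] > a[i]:
                        a[i - 1], a[i] = a[i], a[i - 1]
                left += 1
            return a

        print(bidirectional_bubble_sort([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]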

    Spin-the-bottle Sort and Annealing Sort: Oblivious Sorting via Round-robin Random Comparisons

    We study sorting algorithms based on randomized round-robin comparisons. Specifically, we study Spin-the-bottle sort, where comparisons are unrestricted, and Annealing sort, where comparisons are restricted to a distance bounded by a \emph{temperature} parameter. Both algorithms are simple, randomized, data-oblivious sorting algorithms, which are useful in privacy-preserving computations, but, as we show, Annealing sort is much more efficient. We show that there is an input permutation that causes Spin-the-bottle sort to require $\Omega(n^2\log n)$ expected time in order to succeed, and that in $O(n^2\log n)$ time this algorithm succeeds with high probability for any input. We also show there is an implementation of Annealing sort that runs in $O(n\log n)$ time and succeeds with very high probability.
    Comment: Full version of a paper appearing in ANALCO 2011, in conjunction with SODA 2011.
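    A minimal sketch of the round-robin random comparison idea as described above; the exact definitions and round counts are in the paper, and the number of rounds used here is an assumption chosen to match the $O(n^2\log n)$ bound.

        import math
        import random

        def spin_the_bottle_sort(a, c=4):
            # Data-oblivious in the sense that which pairs are compared never
            # depends on the data: in each round, every position i is paired
            # with a uniformly random other position j and the pair is ordered.
            n = len(a)
            rounds = int(c * n * math.log(n + 1))   # assumed round count
            for _ in range(rounds):
                for i in range(n):                  # round-robin over positions
                    j = random.randrange(n)
                    if j != i:
                        lo, hi = min(i, j), max(i, j)
                        if a[lo] > a[hi]:
                            a[lo], a[hi] = a[hi], a[lo]
            return a

        # With this many rounds the list is sorted with high probability.
        print(spin_the_bottle_sort([9, 3, 7, 1, 5, 8, 2, 6, 4, 0]))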