
    Priority Queues on Parallel Machines

    We present time- and work-optimal priority queues for the CREW PRAM, supporting FindMin in constant time with one processor and MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time with O(log n) processors. A priority queue can be built in time O(log n) with O(n / log n) processors. A pipelined version of the priority queues adapts to a processor array of size O(log n), supporting the operations MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time. By applying the k-bandwidth technique we get a data structure for the CREW PRAM which supports MultiInsert_k operations in O(log k) time and MultiExtractMin_k in O(log log k) time. Key words: parallel priority queues, constant time operations, binomial trees, pipelined operations. 1 Introduction The construction of priority queues is a classical topic in data structures. Some references are [1,3,5,12-16,19,29,31-33]. A historical overview of implementations has been given by M..
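The operation set above is easiest to pin down against a sequential baseline. The sketch below uses Python's `heapq` to implement the same interface; the O(log n) per-operation and O(n) Meld costs shown here are exactly what the paper's parallel structure improves to constant time. This is an illustrative baseline, not the CREW PRAM data structure itself.

```python
import heapq

# Sequential baseline for the priority-queue interface in the abstract.
# Costs are the usual binary-heap bounds, not the paper's parallel bounds.

def make_queue():
    return []

def insert(q, e):
    heapq.heappush(q, e)      # O(log n) sequentially; O(1) with O(log n) processors in the paper

def find_min(q):
    return q[0]               # O(1), matching the paper's one-processor FindMin

def extract_min(q):
    return heapq.heappop(q)   # O(log n) sequentially

def meld(q1, q2):
    merged = q1 + q2          # sequential Meld is linear; the paper does it in O(1) time
    heapq.heapify(merged)
    return merged

q = make_queue()
for x in [5, 1, 3]:
    insert(q, x)
print(find_min(q))   # 1
```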

    Priority Queues on Parallel Machines

    We present time- and work-optimal priority queues for the CREW PRAM, supporting FindMin in constant time with one processor and MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time with O(log n) processors. A priority queue can be built in time O(log n) with O(n / log n) processors, and k elements can be inserted into a priority queue in time O(log k) with O((log n + k) / log k) processors. With a slowdown of O(log log n) in time, the priority queues adapt to the EREW PRAM while increasing the required work by only a constant factor. A pipelined version of the priority queues adapts to a processor array of size O(log n), supporting the operations MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time. 1 Introduction The construction of priority queues is a classical topic in data structures. Some references are [1, 2, 6, 7, 8, 9, 19, 20]. A historical overview of implementations can be found in [13]. Recently several papers ..

    Worst-Case Efficient Priority Queues

    An implementation of priority queues is presented that supports the operations MakeQueue, FindMin, Insert, Meld and DecreaseKey in worst-case time O(1) and DeleteMin and Delete in worst-case time O(log n). The space requirement is linear. The data structure presented is the first achieving this worst-case performance. 1 Introduction We consider the problem of implementing priority queues which are efficient in the worst-case sense. The operations we want to support are the following commonly needed priority queue operations [11]. MakeQueue creates and returns an empty priority queue. FindMin(Q) returns the minimum element contained in priority queue Q. Insert(Q, e) inserts an element e into priority queue Q. Meld(Q1, Q2) melds priority queues Q1 and Q2 into a new priority queue and returns the resulting priority queue. DecreaseKey(Q, e, e') replaces element e by e' in priority queue Q, provided e' ≤ e and it is known where e is stored in Q. DeleteMin(Q) deletes and..
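The DecreaseKey contract above (replace e by e' ≤ e, given e's location) is worth seeing concretely. The sketch below implements it with the common "lazy invalidation" trick on a binary heap: the old entry is flagged dead and a fresh one is pushed. This gives amortized O(log n) DecreaseKey rather than the paper's worst-case O(1), so it illustrates only the semantics, not the paper's construction; the class name and fields are assumptions of this sketch.

```python
import heapq

# Hedged sketch: DecreaseKey via lazy invalidation in a binary heap.
# Heap entries are [key, elem, valid]; decreasing a key marks the old
# entry invalid and pushes a replacement.

class LazyPQ:
    def __init__(self):
        self.heap = []     # entries: [key, elem, valid]
        self.entry = {}    # elem -> its current (valid) entry

    def insert(self, elem, key):
        e = [key, elem, True]
        self.entry[elem] = e
        heapq.heappush(self.heap, e)

    def decrease_key(self, elem, new_key):
        old = self.entry[elem]
        assert new_key <= old[0], "new key must not exceed the old key"
        old[2] = False             # invalidate the stale entry
        self.insert(elem, new_key)

    def find_min(self):
        while not self.heap[0][2]:   # discard invalidated entries lazily
            heapq.heappop(self.heap)
        return self.heap[0][1]

    def delete_min(self):
        m = self.find_min()          # ensures heap[0] is valid
        heapq.heappop(self.heap)
        del self.entry[m]
        return m
```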

    Worst-Case Efficient External-Memory Priority Queues

    A priority queue Q is a data structure that maintains a collection of elements, each element having an associated priority drawn from a totally ordered universe, under the operations Insert, which inserts an element into Q, and DeleteMin, which deletes an element with the minimum priority from Q. In this paper a priority-queue implementation is given which is efficient with respect to the number of block transfers, or I/Os, performed between the internal and external memories of a computer. Let B and M denote the respective capacities of a block and the internal memory, measured in elements. The developed data structure handles any intermixed sequence of Insert and DeleteMin operations such that in every disjoint interval of B consecutive priority-queue operations at most c log_{M/B}(N/M) I/Os are performed, for some positive constant c. These I/Os are divided evenly among the operations: if B ≥ c log_{M/B}(N/M), one I/O is necessary for every (B / (c log_{M/B}(N/M)))th operation ..
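To get a feel for the bound c log_{M/B}(N/M) per B consecutive operations, the snippet below evaluates it for some illustrative parameter values (the choices of B, M, N are assumptions of this sketch, not taken from the paper; c is set to 1).

```python
import math

# Numeric illustration of the amortized I/O bound from the abstract:
# at most c * log_{M/B}(N/M) I/Os per B consecutive operations.

def ios_per_B_ops(N, M, B, c=1.0):
    """I/Os charged to each block of B consecutive queue operations."""
    return c * math.log(N / M, M / B)   # log base M/B of N/M

B = 1024       # block capacity in elements (assumed)
M = 1 << 20    # internal-memory capacity in elements (assumed)
N = 1 << 30    # number of stored elements (assumed)

bound = ios_per_B_ops(N, M, B)
print(f"~{bound:.2f} I/Os per {B} operations, "
      f"i.e. ~{bound / B:.6f} I/Os per operation")
```

With these values M/B = N/M = 2^10, so the bound is exactly one I/O per 1024 operations, far below one I/O per operation.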

    The Randomized Complexity of Maintaining the Minimum

    The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e·2^{2t}) − 1 comparisons for FindMin. If FindMin is replaced by a weaker operation, FindAny, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given. 1 Introduction We consider the complexity of maintaining a set S of elements from a totally ordered universe under the following operations: Insert(e) inserts the element e into S; Delete(e) removes from S the element e, provided it is known where e is stored; and FindMin returns the minimum element in S without removing it. We refer to this problem as the Insert-Delete-FindMi..
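One extreme of this tradeoff is easy to exhibit: a hash set gives O(1) expected Insert and Delete with zero comparisons, and then, consistent with the lower bound above, FindMin degenerates to a linear scan while FindAny stays comparison-free. The class below is an illustrative sketch of that extreme, not a construction from the paper.

```python
# Comparison-free updates force an expensive FindMin: with a hash set,
# Insert and Delete are O(1) expected with no comparisons, FindMin is a
# linear scan (n - 1 comparisons), and FindAny is constant time.

class ScanSet:
    def __init__(self):
        self.items = set()

    def insert(self, e):
        self.items.add(e)              # O(1) expected, no comparisons

    def delete(self, e):
        self.items.remove(e)           # O(1) expected, no comparisons

    def find_min(self):
        return min(self.items)         # n - 1 comparisons

    def find_any(self):
        return next(iter(self.items))  # O(1), no comparisons

s = ScanSet()
for x in [7, 2, 9]:
    s.insert(x)
print(s.find_min())   # 2
```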

    Approximate Dictionary Queries

    Given a set of n binary strings of length m each, we consider the problem of answering d-queries: given a binary query string α of length m, a d-query is to report whether there exists a string in the set within Hamming distance d of α. We present a data structure of size O(nm) supporting 1-queries in time O(m) and the reporting of all strings within Hamming distance 1 of α in time O(m). The data structure can be constructed in time O(nm). A slightly modified version of the data structure supports the insertion of new strings in amortized time O(m). 1 Introduction Let W = {w_1, ..., w_n} be a set of n binary strings of length m each, i.e. w_i ∈ {0, 1}^m. The set W is called the dictionary. We are interested in answering d-queries, i.e. for any query string α ∈ {0, 1}^m to decide if there is a string w_i in W within Hamming distance d of α. Minsky and Papert originally raised this problem in [12]. Recently a sequence of papers have considered how to solve thi..
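A naive baseline makes the 1-query problem concrete: store the dictionary in a hash set and probe the query string plus each of its m single-bit flips. Each probe hashes an m-bit string, so this costs O(m^2) time per query, versus the O(m) the abstract achieves; the sketch below is this baseline, not the paper's data structure.

```python
# Naive 1-query baseline: probe alpha and all m single-bit flips of it
# against a hash set of the dictionary strings. O(m^2) per query, versus
# the O(m) structure of the abstract.

def build(dictionary):
    return set(dictionary)

def one_query(words, alpha):
    """Is some dictionary string within Hamming distance 1 of alpha?"""
    if alpha in words:                    # distance 0
        return True
    for i in range(len(alpha)):           # distance exactly 1
        flipped = alpha[:i] + ("1" if alpha[i] == "0" else "0") + alpha[i + 1:]
        if flipped in words:
            return True
    return False

W = build(["0101", "1100"])
print(one_query(W, "0111"))   # True: flipping bit 2 gives "0101"
print(one_query(W, "1010"))   # False: distance >= 2 from every string
```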

    A Parallel Priority Data Structure with Applications

    We present a parallel priority data structure that improves the running time of certain algorithms for problems that lack a fast and work-efficient parallel solution. As a main application, we give a parallel implementation of Dijkstra's algorithm which runs in O(n) time while performing O(m log n) work on a CREW PRAM. This is a logarithmic-factor improvement in running time compared with previous approaches. The main feature of our data structure is that the operations needed in each iteration of Dijkstra's algorithm can be supported in O(1) time. 1 Introduction Developing work-efficient parallel algorithms for graph and network optimization problems continues to be an important area of research in parallel computing. Despite much effort, a number of basic problems have tenaciously resisted a very fast (i.e., NC) parallel solution that is simultaneously work-efficient. A notorious example is the single-source shortest path problem. The best sequential algorithm for the single-s..
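The per-iteration queue operations the abstract refers to are those of standard Dijkstra: extract the closest unsettled vertex, then relax its edges (a decrease-key per improved neighbor). The sketch below is the sequential baseline with lazy decrease-key on a binary heap; the paper parallelizes exactly these steps. The dict-of-adjacency-lists graph representation is an assumption of this sketch.

```python
import heapq

# Sequential Dijkstra baseline, O((n + m) log n) with a binary heap.
# DecreaseKey is done lazily: improved distances push a new heap entry
# and stale entries are skipped on extraction. Graph: vertex -> list of
# (neighbor, weight) pairs.

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue                    # stale entry from a lazy DecreaseKey
        done.add(u)
        for v, w in graph.get(u, []):   # relax outgoing edges of u
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)]}
print(dijkstra(g, "s"))   # {'s': 0, 'a': 2, 'b': 3}
```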