
    Fast Parallel Algorithms for Basic Problems

    Parallel processing is one of the most active research areas today. We are interested in one aspect of it: the design and analysis of parallel algorithms. Here, we focus on non-numerical parallel algorithms for basic combinatorial problems, such as data structures, selection, searching, merging and sorting. The purpose of studying these problems is to obtain basic building blocks that are useful in solving complex problems, and to develop fundamental algorithmic techniques. In this thesis, we study the following problems: priority queues, multiple search and multiple selection, and the reconstruction of a binary tree from its traversals. The research on priority queues was motivated by their various applications. The purpose of studying multiple search and multiple selection is to explore the relationships between four of the most fundamental problems in algorithm design: selection, searching, merging and sorting; our parallel solutions can also be used as subroutines in algorithms for other problems. The research on the last problem, the reconstruction of a binary tree from its traversals, was stimulated by a challenge posed in a recent paper by Berkman et al. (Highly Parallelizable Problems, STOC 89) to design doubly logarithmic time optimal parallel algorithms, since remarkably few such parallel algorithms exist.
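
    To make the third problem concrete, here is a minimal sequential sketch of reconstructing a binary tree from its preorder and inorder traversals, assuming distinct labels. The thesis is concerned with doubly logarithmic parallel algorithms; this is only the straightforward O(n) sequential baseline, not the parallel method.

        # Rebuild a binary tree from its preorder + inorder traversals.
        # Sequential O(n) sketch; node labels are assumed distinct.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Node:
            label: int
            left: Optional["Node"] = None
            right: Optional["Node"] = None

        def rebuild(preorder, inorder):
            pos = {label: i for i, label in enumerate(inorder)}  # label -> inorder index
            it = iter(preorder)

            def build(lo, hi):
                # Reconstruct the subtree whose inorder labels occupy inorder[lo:hi].
                if lo >= hi:
                    return None
                root = Node(next(it))       # next preorder label is this subtree's root
                mid = pos[root.label]       # splits inorder range into left and right
                root.left = build(lo, mid)
                root.right = build(mid + 1, hi)
                return root

            return build(0, len(inorder))

        # preorder [2, 1, 3] with inorder [1, 2, 3] yields the tree rooted at 2.
        t = rebuild([2, 1, 3], [1, 2, 3])
        assert (t.label, t.left.label, t.right.label) == (2, 1, 3)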

    Implementation of operations in double-ended heaps

    There are several approaches to creating double-ended heaps from single-ended heaps. We build on one of them, the leaf correspondence heap, to create a generic double-ended heap scheme called the L-correspondence heap. This broadens the class of eligible base single-ended heaps (e.g. to Fibonacci heaps and Rank-pairing heaps) and makes the operations Decrease and Increase possible. We show this approach on specific examples for three different single-ended base heaps and give time complexity bounds for all operations. Another result is that for these three examples, the expected amortized time for the Decrease and Increase operations in the L-correspondence heap is bounded by a constant. Department of Theoretical Computer Science and Mathematical Logic, Faculty of Mathematics and Physics.
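
    For orientation, here is a minimal sketch of the general two-heap idea behind double-ended heaps: keep every item in both a min-heap and a max-heap and delete lazily. This is not the leaf-correspondence or L-correspondence scheme of the thesis, only a simple illustration of one way to obtain a double-ended queue from single-ended heaps.

        # Double-ended priority queue from two single-ended heaps with lazy
        # deletion. Illustrative only; NOT the L-correspondence scheme.
        import heapq
        import itertools

        class DoubleEndedHeap:
            def __init__(self):
                self._min, self._max = [], []
                self._dead = set()            # ids of lazily deleted items
                self._ids = itertools.count()

            def insert(self, key):
                i = next(self._ids)
                heapq.heappush(self._min, (key, i))
                heapq.heappush(self._max, (-key, i))

            def _top(self, heap, sign):
                # Purge lazily deleted entries from the top of one heap.
                while heap and heap[0][1] in self._dead:
                    self._dead.discard(heapq.heappop(heap)[1])
                return sign * heap[0][0] if heap else None

            def find_min(self):
                return self._top(self._min, 1)

            def find_max(self):
                return self._top(self._max, -1)

            def delete_min(self):
                self._top(self._min, 1)       # assumes a non-empty heap
                key, i = heapq.heappop(self._min)
                self._dead.add(i)             # twin entry in _max dies lazily
                return key

            def delete_max(self):
                self._top(self._max, -1)
                negkey, i = heapq.heappop(self._max)
                self._dead.add(i)
                return -negkey

        h = DoubleEndedHeap()
        for k in (5, 1, 9, 3):
            h.insert(k)
        assert h.delete_min() == 1 and h.delete_max() == 9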

    Efficient Algorithms and Data Structures for Massive Data Sets

    For many algorithmic problems, traditional algorithms that optimise the number of instructions executed prove expensive in I/Os. Novel and very different design techniques, when applied to these problems, can produce algorithms that are I/O efficient. This thesis adds to the growing body of such results. The computational models we use are the external memory model and the W-Stream model. On the external memory model, we obtain the following results. (1) An I/O efficient algorithm for computing minimum spanning trees of graphs that improves on the performance of the best known algorithm. (2) The first external memory version of soft heap, an approximate meldable priority queue. (3) Hard heap, the first meldable external memory priority queue that matches the amortised I/O performance of the known external memory priority queues while allowing a meld operation at the same amortised cost. (4) I/O efficient exact, approximate and randomised algorithms for the minimum cut problem, which had not been explored before on the external memory model. (5) Some lower and upper bounds on I/Os for interval graphs. On the W-Stream model, we obtain the following results. (1) Algorithms for various tree problems and for list ranking that match the performance of the best known algorithms and are easier to implement. (2) Pass-efficient algorithms for sorting and for the maximal independent set problem that improve on the best known algorithms. (3) Pass-efficient algorithms for the graph problems of vertex colouring, approximate single-source shortest paths, maximal matching, and approximate weighted vertex cover. (4) Lower bounds on passes for list ranking and maximal matching. We propose two variants of the W-Stream model, and design algorithms for the maximal independent set, vertex colouring, and planar graph single-source shortest paths problems on those models. Comment: PhD Thesis (144 pages)
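
    The external memory results above rest on I/O-efficient primitives. Below is a toy sketch of the most basic one, k-way merging of sorted runs, which underlies external merge sort and buffer-based external priority queues generally; it is background illustration, not code from the thesis. Real implementations stream blocks from disk, whereas here the runs are plain in-memory lists.

        # k-way merge of sorted runs with a k-entry heap: O(n log k)
        # comparisons; streamed block-by-block over the passes of an
        # external merge sort this gives the standard sorting I/O bound.
        import heapq

        def kway_merge(runs):
            heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
            heapq.heapify(heap)
            out = []
            while heap:
                val, i, j = heapq.heappop(heap)
                out.append(val)
                if j + 1 < len(runs[i]):          # refill from the same run
                    heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
            return out

        assert kway_merge([[1, 4], [2, 3], [0, 5]]) == [0, 1, 2, 3, 4, 5]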

    Automated Expected Amortised Cost Analysis of Probabilistic Data Structures

    In this paper, we present the first fully-automated expected amortised cost analysis of self-adjusting data structures, that is, of randomised splay trees, randomised splay heaps and randomised meldable heaps, which so far have only been analysed (semi-)manually in the literature. Our analysis is stated as a type-and-effect system for a first-order functional programming language with support for sampling over discrete distributions, non-deterministic choice and a ticking operator. The latter allows for the specification of fine-grained cost models. We state two soundness theorems based on two different -- but strongly related -- typing rules for ticking, which account differently for the cost of non-terminating computations. Finally, we provide a prototype implementation able to fully automatically analyse the aforementioned case studies. Comment: 39 pages
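
    One of the analysed structures, the randomised meldable heap, is simple enough to sketch in a few lines: meld two heap-ordered binary trees by keeping the smaller root and recursing into a uniformly random child, so that all operations reduce to meld and run in O(log n) expected time. The sketch below shows the structure itself, not the paper's type-and-effect analysis of it.

        # Randomised meldable heap: coin-flip meld of heap-ordered trees.
        import random
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Node:
            key: int
            left: Optional["Node"] = None
            right: Optional["Node"] = None

        def meld(a, b):
            if a is None: return b
            if b is None: return a
            if b.key < a.key:
                a, b = b, a                  # keep the smaller root on top
            if random.random() < 0.5:        # coin flip picks the melding side
                a.left = meld(a.left, b)
            else:
                a.right = meld(a.right, b)
            return a

        def insert(root, key):
            return meld(root, Node(key))

        def delete_min(root):
            return root.key, meld(root.left, root.right)

        root = None
        for k in (4, 2, 7):
            root = insert(root, k)
        m, root = delete_min(root)
        assert m == 2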

    Hollow Heaps

    We introduce the hollow heap, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap. All heap operations except delete and delete-min take O(1) time, worst case as well as amortized; delete and delete-min take O(log n) amortized time on a heap of n items. Hollow heaps are by far the simplest structure to achieve this. Hollow heaps combine two novel ideas: the use of lazy deletion and re-insertion to do decrease-key operations, and the use of a dag (directed acyclic graph) instead of a tree or set of trees to represent a heap. Lazy deletion produces hollow nodes (nodes without items), giving the data structure its name. Comment: 27 pages, 7 figures, preliminary version appeared in ICALP 2015
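
    The lazy decrease-key idea can be sketched in isolation: instead of cutting and moving subtrees, move the item to a fresh node carrying the smaller key and leave the old node behind, hollow. A real hollow heap also links hollow nodes into the dag and prunes them during delete-min; the fragment below, a simplified illustration rather than the paper's full structure, shows only the item/node separation.

        # Lazy decrease-key via hollow nodes (simplified illustration).
        class Item:
            def __init__(self, value):
                self.value = value
                self.node = None        # the node currently holding this item

        class HNode:
            def __init__(self, item, key):
                self.item, self.key = item, key
                item.node = self
                self.children = []

        def decrease_key(item, new_key):
            old = item.node
            fresh = HNode(item, new_key)   # the item moves to a fresh node...
            old.item = None                # ...and the old node becomes hollow
            fresh.children.append(old)     # old node stays reachable below it
            return fresh

        it = Item("x")
        n = HNode(it, 10)
        top = decrease_key(it, 3)
        assert top.key == 3 and n.item is None   # old node is now hollow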

    10091 Abstracts Collection -- Data Structures

    From February 28th to March 5th, 2010, the Dagstuhl Seminar 10091 "Data Structures" was held in Schloss Dagstuhl -- Leibniz Center for Informatics. It brought together 45 international researchers to discuss recent developments concerning data structures, both in terms of research and in terms of new technologies that affect how data can be stored, updated, and retrieved. During the seminar, a fair number of participants presented their current research, and open problems were discussed. This document first briefly describes the seminar topics and then gives the abstracts of the presentations given during the seminar.