
    GPU LSM: A Dynamic Dictionary Data Structure for the GPU

    We develop a dynamic dictionary data structure for the GPU, supporting fast insertions and deletions, based on the Log-Structured Merge tree (LSM). Our implementation on an NVIDIA K40c GPU has an average update (insertion or deletion) rate of 225 M elements/s, 13.5x faster than merging items into a sorted array. The GPU LSM supports lookup, count, and range query operations at average rates of 75 M, 32 M, and 23 M queries/s, respectively. The trade-off for the dynamic updates is that the sorted array is almost twice as fast on retrievals. We believe that our GPU LSM is the first dynamic general-purpose dictionary data structure for the GPU. Comment: 11 pages, accepted to appear in the Proceedings of the IEEE International Parallel and Distributed Processing Symposium (IPDPS'18).
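    The mechanism behind these numbers is the classic LSM cascade: each update batch becomes a sorted run, and full levels of geometrically growing size are merged upward. Below is a minimal single-threaded Python sketch of that cascade (illustrative names only, assuming distinct keys per batch; the paper runs the sort and merge steps as parallel GPU kernels):

```python
import bisect

TOMBSTONE = object()  # deletions are inserted as tombstone entries

class TinyLSM:
    """Sorted (key, value) runs of geometrically growing size; level 0 is newest."""
    def __init__(self):
        self.levels = []  # levels[i]: a sorted run, or [] if the level is empty

    def insert_batch(self, items):
        run = sorted(items)  # the GPU version does a parallel batch sort here
        i = 0
        while i < len(self.levels) and self.levels[i]:
            # cascading merge: everything accumulated so far is newer than levels[i]
            run = self._merge(run, self.levels[i])
            self.levels[i] = []
            i += 1
        if i == len(self.levels):
            self.levels.append(run)
        else:
            self.levels[i] = run

    @staticmethod
    def _merge(newer, older):
        out, i, j = [], 0, 0
        while i < len(newer) and j < len(older):
            if newer[i][0] < older[j][0]:
                out.append(newer[i]); i += 1
            elif older[j][0] < newer[i][0]:
                out.append(older[j]); j += 1
            else:                           # same key: the newer entry wins
                out.append(newer[i]); i += 1; j += 1
        return out + newer[i:] + older[j:]

    def lookup(self, key):
        for run in self.levels:             # newest level first
            j = bisect.bisect_left(run, (key,))
            if j < len(run) and run[j][0] == key:
                return None if run[j][1] is TOMBSTONE else run[j][1]
        return None
```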

    A Bulk-Parallel Priority Queue in External Memory with STXXL

    We propose the design and an implementation of a bulk-parallel external-memory priority queue that takes advantage of both shared-memory parallelism and the high transfer speeds of parallel external disks. To achieve higher performance by decoupling item insertions and extractions, we offer two parallelization interfaces: one using "bulk" sequences, the other defining "limit" items. In the design, we discuss how to parallelize insertions using multiple heaps, and how to calculate a dynamic prediction sequence to prefetch blocks and apply parallel multiway merging for extraction. Our experimental results show that in the selected benchmarks the priority queue reaches 75% of the full parallel I/O bandwidth of rotational disks and 65% of SSDs, or the speed of external-memory sorting when bounded by computation. Comment: extended version of the SEA'15 conference paper.
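    The decoupling the abstract describes is easy to picture with a toy: insertions are spread across independent heaps (one per worker), and a "limit"-style extraction merges across them. A minimal sequential Python sketch with hypothetical names, not the STXXL API (the real structure prefetches disk blocks along a prediction sequence and uses a parallel multiway merge):

```python
import heapq

class BulkPQ:
    def __init__(self, num_heaps=4):
        # one insertion heap per worker; in the real design each worker
        # thread owns a heap, so bulk insertion needs no locking
        self.heaps = [[] for _ in range(num_heaps)]

    def insert_bulk(self, items):
        # round-robin stands in for per-thread distribution of the bulk
        for i, x in enumerate(items):
            heapq.heappush(self.heaps[i % len(self.heaps)], x)

    def extract_bulk(self, limit):
        """Pop every item <= limit, mirroring the paper's 'limit' interface."""
        out = []
        while True:
            tops = [(h[0], i) for i, h in enumerate(self.heaps) if h]
            if not tops:
                return out
            value, i = min(tops)            # a k-way merge step across heaps
            if value > limit:
                return out
            out.append(heapq.heappop(self.heaps[i]))

pq = BulkPQ()
pq.insert_bulk([5, 1, 9, 3, 7, 2])
print(pq.extract_bulk(limit=5))             # -> [1, 2, 3, 5]
```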

    Parallel Batch-Dynamic Graph Connectivity

    In this paper, we study batch-parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The most well-known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg and Thorup (HDT), which achieves $O(\log^2 n)$ amortized time per edge insertion or deletion, and $O(\log n / \log\log n)$ time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes, and is asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where $\Delta$ is the average batch size of all deletions, our algorithm achieves $O(\log n \log(1 + n/\Delta))$ expected amortized work per edge insertion and deletion and $O(\log^3 n)$ depth w.h.p. Our algorithm answers a batch of $k$ connectivity queries in $O(k \log(1 + n/k))$ expected work and $O(\log n)$ depth w.h.p. To the best of our knowledge, our algorithm is the first parallel batch-dynamic algorithm for connectivity. Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2019.
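    For intuition, the easy half of the problem (batched insertions and queries, with no deletions) already falls to a plain union-find; it is batched deletions that require the HDT-style level structure this paper parallelizes. A small sequential Python sketch of that easy half, with illustrative names:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_batch(self, edges):
        # processing a whole batch at once is what a parallel version
        # would distribute across workers
        for u, v in edges:
            self.parent[self.find(u)] = self.find(v)

    def connected_batch(self, queries):
        return [self.find(u) == self.find(v) for u, v in queries]
```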

    Efficient Management of Short-Lived Data

    Motivated by the increasing prominence of loosely-coupled systems, such as mobile and sensor networks, which are characterised by intermittent connectivity and volatile data, we study the tagging of data with so-called expiration times. More specifically, when data are inserted into a database, they may be tagged with time values indicating when they expire, i.e., when they are regarded as stale or invalid and thus are no longer considered part of the database. In a number of applications, expiration times are known and can be assigned at insertion time. We present data structures and algorithms for online management of data tagged with expiration times. The algorithms are based on fully functional, persistent treaps, which combine binary search trees with respect to a primary attribute and heaps with respect to a secondary attribute. The primary attribute implements primary keys, and the secondary attribute stores expiration times in a minimum heap, thus maintaining a priority queue of tuples to expire. A detailed and comprehensive experimental study demonstrates the well-behavedness and scalability of the approach as well as its efficiency relative to a number of competitors. Comment: switched to TimeCenter latex style.
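    The treap the abstract describes is easy to sketch: binary-search-tree order on the primary key, min-heap order on the expiration time, so the root is always the next tuple to expire. Below is a minimal ephemeral (non-persistent) Python sketch with illustrative names; the paper's treaps are fully functional and persistent:

```python
class Node:
    def __init__(self, key, expires, value):
        self.key, self.expires, self.value = key, expires, value
        self.left = self.right = None

def split(t, key):
    """Split treap t into (keys < key, keys >= key)."""
    if t is None:
        return None, None
    if t.key < key:
        lt, ge = split(t.right, key)
        t.right = lt
        return t, ge
    lt, ge = split(t.left, key)
    t.left = ge
    return lt, t

def insert(t, node):
    if t is None:
        return node
    if node.expires < t.expires:       # node expires sooner: it becomes root
        node.left, node.right = split(t, node.key)
        return node
    if node.key < t.key:
        t.left = insert(t.left, node)
    else:
        t.right = insert(t.right, node)
    return t

def merge(a, b):
    """Merge treaps where all keys in a precede all keys in b."""
    if a is None: return b
    if b is None: return a
    if a.expires < b.expires:
        a.right = merge(a.right, b)
        return a
    b.left = merge(a, b.left)
    return b

def expire(t, now):
    """Remove every expired tuple; the min-heap puts them at the root."""
    while t is not None and t.expires <= now:
        t = merge(t.left, t.right)
    return t
```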

    Using Hashing to Solve the Dictionary Problem (In External Memory)

    We consider the dictionary problem in external memory and improve the update time of the well-known buffer tree by roughly a logarithmic factor. For any $\lambda \geq \max\{\lg\lg n, \log_{M/B}(n/B)\}$, we can support updates in time $O(\lambda/B)$ and queries in sublogarithmic time, $O(\log_\lambda n)$. We also present a lower bound in the cell-probe model showing that our data structure is optimal. In the RAM, hash tables have been used to solve the dictionary problem faster than binary search for more than half a century. By contrast, our data structure is the first to beat the comparison barrier in external memory. Ours is also the first data structure to depart convincingly from the indivisibility paradigm.
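    The buffer tree being improved here rests on one idea worth seeing concretely: updates are not applied immediately but accumulate in a buffer that is flushed in bulk, so each update pays only a fraction of a block transfer. A two-level Python toy of that buffering idea (illustrative only; the paper's structure buffers at every level of a tree and uses hashing to beat comparison-based bounds):

```python
import bisect

class BufferedDict:
    """In-memory update buffer in front of a sorted 'on-disk' array."""
    def __init__(self, buffer_size=1024):
        self.B = buffer_size
        self.buffer = {}   # pending updates; None marks a deletion
        self.store = []    # sorted (key, value) pairs: the slow level

    def update(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.B:
            self._flush()  # one bulk merge amortizes the transfer cost

    def _flush(self):
        merged = dict(self.store)
        merged.update(self.buffer)
        self.store = sorted((k, v) for k, v in merged.items() if v is not None)
        self.buffer.clear()

    def query(self, key):
        if key in self.buffer:        # pending updates shadow the store
            return self.buffer[key]
        i = bisect.bisect_left(self.store, (key,))
        if i < len(self.store) and self.store[i][0] == key:
            return self.store[i][1]
        return None
```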

    Perspects in astrophysical databases

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large datasets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large datasets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.

    Fine-Grained Complexity Analysis of Two Classic TSP Variants

    We analyze two classic variants of the Traveling Salesman Problem using the toolkit of fine-grained complexity. Our first set of results is motivated by the Bitonic TSP problem: given a set of $n$ points in the plane, compute a shortest tour consisting of two monotone chains. It is a classic dynamic-programming exercise to solve this problem in $O(n^2)$ time. While the near-quadratic dependency of similar dynamic programs for Longest Common Subsequence and Discrete Fréchet Distance has recently been proven to be essentially optimal under the Strong Exponential Time Hypothesis, we show that bitonic tours can be found in subquadratic time. More precisely, we present an algorithm that solves bitonic TSP in $O(n \log^2 n)$ time and its bottleneck version in $O(n \log^3 n)$ time. Our second set of results concerns the popular $k$-OPT heuristic for TSP in the graph setting. More precisely, we study the $k$-OPT decision problem, which asks whether a given tour can be improved by a $k$-OPT move that replaces $k$ edges in the tour by $k$ new edges. A simple algorithm solves $k$-OPT in $O(n^k)$ time for fixed $k$. For 2-OPT, this is easily seen to be optimal. For $k=3$ we prove that an algorithm with a runtime of the form $\tilde{O}(n^{3-\epsilon})$ exists if and only if All-Pairs Shortest Paths in weighted digraphs has such an algorithm. The results for $k=2,3$ may suggest that the actual time complexity of $k$-OPT is $\Theta(n^k)$. We show that this is not the case, by presenting an algorithm that finds the best $k$-move in $O(n^{\lfloor 2k/3 \rfloor + 1})$ time for fixed $k \geq 3$. This implies that 4-OPT can be solved in $O(n^3)$ time, matching the best-known algorithm for 3-OPT. Finally, we show how to beat the quadratic barrier for $k=2$ in two important settings, namely for points in the plane and when we want to solve 2-OPT repeatedly. Comment: extended abstract appears in the Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016).
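    The "classic dynamic-programming exercise" the abstract refers to is the quadratic bitonic-tour DP, sketched below in Python as the baseline the paper improves on (its $O(n \log^2 n)$ algorithm is far less obvious). Here dp[i][j] is the cheapest pair of x-monotone chains covering the first j+1 points, with chain endpoints at points i and j:

```python
from math import dist, inf

def bitonic_tsp(points):
    """Length of the shortest bitonic tour; points (x, y) with distinct x."""
    p = sorted(points)
    n = len(p)
    if n < 3:
        return 2 * sum(dist(p[i], p[i + 1]) for i in range(n - 1))
    dp = [[inf] * n for _ in range(n)]
    dp[0][1] = dist(p[0], p[1])
    for j in range(2, n):
        for i in range(j - 1):
            # p[j-1] must be p[j]'s neighbour on p[j]'s own chain
            dp[i][j] = dp[i][j - 1] + dist(p[j - 1], p[j])
        # otherwise p[j] extends from some earlier chain endpoint k
        dp[j - 1][j] = min(dp[k][j - 1] + dist(p[k], p[j]) for k in range(j - 1))
    return dp[n - 2][n - 1] + dist(p[n - 2], p[n - 1])  # close the tour

print(bitonic_tsp([(0, 0), (1, 2), (2, -1), (3, 1)]))
```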