
    Dynamic Ordered Sets with Approximate Queries, Approximate Heaps and Soft Heaps

    We consider word RAM data structures for maintaining ordered sets of integers whose select and rank operations are allowed to return approximate results, i.e., ranks, or items whose ranks, differ by less than $\Delta$ from the exact answer, where $\Delta=\Delta(n)$ is an error parameter. Related to approximate select and rank is approximate (one-dimensional) nearest-neighbor. A special case of approximate select queries is the approximate min query. Data structures that support approximate min operations are known as approximate heaps (priority queues). Related to approximate heaps are soft heaps, which are approximate heaps with a different notion of approximation. We prove the optimality of all the data structures presented, either through matching cell-probe lower bounds or through equivalences to well-studied static problems. For approximate select, rank, and nearest-neighbor operations we obtain matching cell-probe lower bounds. We prove an equivalence between approximate min operations, i.e., approximate heaps, and the static partitioning problem. Finally, we prove an equivalence between soft heaps and the classical sorting problem on a smaller number of items. Our results have many interesting and unexpected consequences. It turns out that approximation greatly speeds up some of these operations, while others are almost unaffected. In particular, while select and rank have identical operation times, both in comparison-based and word RAM implementations, an interesting separation emerges between the approximate versions of these operations in the word RAM model: approximate select is much faster than approximate rank. It also turns out that approximate min is exponentially faster than the more general approximate select. Next, we show that implementing soft heaps is harder than implementing approximate heaps; the relation between them corresponds to the relation between sorting and partitioning. Finally, as an interesting byproduct, we observe that a combination of known techniques yields a deterministic word RAM algorithm for (exactly) sorting $n$ items in $O(n \log\log_w n)$ time, where $w$ is the word length. Even for the easier problem of finding duplicates, the best previous deterministic bound was $O(\min\{n \log\log n, n \log_w n\})$. Our new unifying bound is an improvement when $w$ is sufficiently large compared with $n$.
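
    To make the approximation notion concrete, here is a minimal static sketch in Python of an approximate-rank structure (our illustration, not the paper's dynamic word-RAM structure): keeping only every $\Delta$-th element of the sorted input already answers rank queries within an additive error of $\Delta$.

        import bisect

        class ApproxRank:
            """Static sketch: answers rank(x) within additive error delta
            by storing only every delta-th element of the sorted input."""
            def __init__(self, items, delta):
                self.delta = delta
                self.sample = sorted(items)[::delta]  # one key per delta items

            def rank(self, x):
                # number of stored samples below x, scaled back up;
                # the answer overshoots the exact rank by less than delta
                return bisect.bisect_left(self.sample, x) * self.delta

        approx = ApproxRank(range(0, 1000, 3), delta=16)
        print(approx.rank(500))  # 176; the exact rank is 167, error < 16

    The paper shows that in the dynamic word RAM setting this kind of slack is exactly what separates the operations: approximate select becomes much faster than approximate rank.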

    Wavelet Trees Meet Suffix Trees

    We present an improved wavelet tree construction algorithm and discuss its applications to a number of rank/select problems for integer keys and strings. Given a string of length $n$ over an alphabet of size $\sigma \leq n$, our method builds the wavelet tree in $O(n \log \sigma / \sqrt{\log n})$ time, improving upon the state-of-the-art algorithm by a factor of $\sqrt{\log n}$. As a consequence, given an array of $n$ integers we can construct in $O(n \sqrt{\log n})$ time a data structure consisting of $O(n)$ machine words and capable of answering rank/select queries for the subranges of the array in $O(\log n / \log\log n)$ time. This is a $\log\log n$-factor improvement in query time compared to Chan and Pătraşcu and a $\sqrt{\log n}$-factor improvement in construction time compared to Brodal et al. Next, we switch to the stringological context and propose a novel notion of wavelet suffix trees. For a string $w$ of length $n$, this data structure occupies $O(n)$ words, takes $O(n \sqrt{\log n})$ time to construct, and simultaneously captures the combinatorial structure of substrings of $w$ while enabling efficient top-down traversal and binary search. In particular, with a wavelet suffix tree we are able to answer in $O(\log |x|)$ time the following two natural analogues of rank/select queries for suffixes of substrings: for substrings $x$ and $y$ of $w$, count the number of suffixes of $x$ that are lexicographically smaller than $y$; and for a substring $x$ of $w$ and an integer $k$, find the $k$-th lexicographically smallest suffix of $x$. We further show that wavelet suffix trees allow computing a run-length-encoded Burrows-Wheeler transform of a substring $x$ of $w$ in $O(s \log |x|)$ time, where $s$ denotes the length of the resulting run-length encoding. This answers a question by Cormode and Muthukrishnan, who considered an analogous problem for Lempel-Ziv compression.
    Comment: 33 pages, 5 figures; preliminary version published at SODA 201
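
    As background for what a wavelet tree supports, here is a textbook pointer-based construction with a rank query, in Python (a didactic sketch; the paper's contribution is the asymptotically faster construction, not this one):

        class WaveletTree:
            """Textbook wavelet tree over an integer alphabet [lo, hi).
            Each node splits the alphabet range in half and stores a bitmap
            recording, per position, whether the symbol went right."""
            def __init__(self, seq, lo=None, hi=None):
                if lo is None:
                    lo, hi = 0, max(seq) + 1
                self.lo, self.hi = lo, hi
                if hi - lo > 1 and seq:
                    mid = (lo + hi) // 2
                    self.bits = [int(c >= mid) for c in seq]
                    # prefix counts of 1-bits, used as rank on the bitmap
                    self.ones = [0]
                    for b in self.bits:
                        self.ones.append(self.ones[-1] + b)
                    self.left = WaveletTree([c for c in seq if c < mid], lo, mid)
                    self.right = WaveletTree([c for c in seq if c >= mid], mid, hi)
                else:
                    self.bits = None  # leaf: a single symbol remains

            def rank(self, c, i):
                """Number of occurrences of symbol c in seq[:i]."""
                if self.bits is None:
                    return i
                mid = (self.lo + self.hi) // 2
                ones = self.ones[i]
                if c < mid:
                    return self.left.rank(c, i - ones)
                return self.right.rank(c, ones)

        wt = WaveletTree([3, 1, 4, 1, 5, 2, 6, 5, 3, 5])
        print(wt.rank(5, 8))  # -> 2: two 5's among the first eight symbols

    Each query walks one root-to-leaf path of depth log sigma, using the node bitmaps to re-map positions into the children.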

    Deterministic sub-linear space LCE data structures with efficient construction

    Given a string $S$ of $n$ symbols, a longest common extension query $\mathsf{LCE}(i,j)$ asks for the length of the longest common prefix of the $i$th and $j$th suffixes of $S$. LCE queries have several important applications in string processing, perhaps most notably to suffix sorting. Recently, Bille et al. (J. Discrete Algorithms 25:42-50, 2014; Proc. CPM 2015: 65-76) described several data structures for answering LCE queries that offer a space-time trade-off between data structure size and query time. In particular, for a parameter $1 \leq \tau \leq n$, their best deterministic solution is a data structure of size $O(n/\tau)$ which allows LCE queries to be answered in $O(\tau)$ time. However, the construction time for all deterministic versions of their data structure is quadratic in $n$. In this paper, we propose a deterministic solution that achieves a similar space-time trade-off of $O(\tau \min\{\log \tau, \log \frac{n}{\tau}\})$ query time using $O(n/\tau)$ space, but significantly improves the construction time to $O(n\tau)$.
    Comment: updated title
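
    For intuition, the zero-extra-space baseline is the naive scan below (a Python sketch of the query that the paper's structures accelerate; the function name is ours):

        def lce_naive(s, i, j):
            """Longest common extension of suffixes s[i:] and s[j:] by
            direct comparison: O(result) time, no extra space. The paper's
            structures instead spend O(n/tau) space to answer queries in
            roughly O(tau) time regardless of the result's length."""
            k = 0
            n = len(s)
            while i + k < n and j + k < n and s[i + k] == s[j + k]:
                k += 1
            return k

        print(lce_naive("abracadabra", 0, 7))  # -> 4 ("abra")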

    Parallel Wavelet Tree Construction

    We present parallel algorithms for wavelet tree construction with polylogarithmic depth, improving upon the linear depth of the recent parallel algorithms by Fuentes-Sepulveda et al. We experimentally show on a 40-core machine with two-way hyper-threading that we outperform the existing parallel algorithms by 1.3--5.6x and achieve up to 27x speedup over the sequential algorithm on a variety of real-world and artificial inputs. Our algorithms show good scalability with increasing thread count, input size and alphabet size. We also discuss extensions to variants of the standard wavelet tree.
    Comment: This is a longer version of the paper that appears in the Proceedings of the IEEE Data Compression Conference, 201
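
    One reason wavelet tree construction parallelizes well is the levelwise view: each of the log sigma bitmap levels is determined directly by the input symbols and their high-order bits, so the levels can be computed independently of one another. A small Python sketch of that decomposition (written sequentially here, for clarity; the paper supplies the actual polylogarithmic-depth parallel algorithms):

        def level_bitmaps(seq, bits):
            """Level l of a (levelwise) wavelet tree equals the input
            stably sorted by the top l bits of each symbol, emitting
            bit (bits-1-l). No level depends on another level's output."""
            levels = []
            for lvl in range(bits):
                order = sorted(range(len(seq)),
                               key=lambda i: seq[i] >> (bits - lvl))
                levels.append([(seq[i] >> (bits - 1 - lvl)) & 1
                               for i in order])
            return levels

        for row in level_bitmaps([3, 1, 4, 1, 5, 2, 6, 5], bits=3):
            print(row)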

    Progress Report: 1991-1994


    Fast deterministic processor allocation

    Interval allocation has been suggested as a possible PRAM formalization of the (vaguely defined) processor allocation problem, which is of fundamental importance in parallel computing. The interval allocation problem is, given $n$ nonnegative integers $x_1,\ldots,x_n$, to allocate $n$ nonoverlapping subarrays of sizes $x_1,\ldots,x_n$ from within a base array of $O(\sum_{j=1}^n x_j)$ cells. We show that interval allocation problems of size $n$ can be solved in $O((\log\log n)^3)$ time with optimal speedup on a deterministic CRCW PRAM. In addition to a general solution to the processor allocation problem, this implies an improved deterministic algorithm for the problem of approximate summation. For both interval allocation and approximate summation, the fastest previous deterministic algorithms have running times of $\Theta(\log n/\log\log n)$. We also describe an application to the problem of computing the connected components of an undirected graph.
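
    Sequentially, interval allocation is just a prefix sum, as the Python sketch below shows (our illustration; the paper's difficulty lies entirely in achieving this on a CRCW PRAM in $O((\log\log n)^3)$ time with optimal speedup):

        from itertools import accumulate

        def allocate_intervals(sizes):
            """Assign each request a disjoint half-open subarray of a base
            array of exactly sum(sizes) cells, via a prefix sum."""
            starts = [0] + list(accumulate(sizes))
            return [(starts[j], starts[j + 1]) for j in range(len(sizes))]

        print(allocate_intervals([3, 0, 5, 2]))
        # -> [(0, 3), (3, 3), (3, 8), (8, 10)]: nonoverlapping intervals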

    Streaming and Small Space Approximation Algorithms for Edit Distance and Longest Common Subsequence


    Perfectly Oblivious (Parallel) RAM Revisited, and Improved Constructions

    Oblivious RAM (ORAM) is a technique for compiling any RAM program to an oblivious counterpart, i.e., one whose access patterns do not leak information about the secret inputs. Similarly, Oblivious Parallel RAM (OPRAM) compiles a parallel RAM program to an oblivious counterpart. In this paper, we care about ORAM/OPRAM with perfect security, i.e., the access patterns must be identically distributed no matter what the program's memory request sequence is. In the past, two types of perfect ORAMs/OPRAMs have been considered: constructions whose performance bounds hold in expectation (but may occasionally run more slowly), and constructions whose performance bounds hold deterministically (even though the algorithms themselves are randomized). In this paper, we revisit the performance metrics for perfect ORAM/OPRAM and show novel constructions that achieve asymptotic improvements for all performance metrics. Our first result is a new perfectly secure OPRAM scheme with $O(\log^3 N/\log\log N)$ expected overhead. In comparison, prior literature has been stuck at $O(\log^3 N)$ for more than a decade. Next, we show how to construct a perfect ORAM with $O(\log^3 N/\log\log N)$ deterministic simulation overhead. We further show how to make the scheme parallel, resulting in a perfect OPRAM with $O(\log^4 N/\log\log N)$ deterministic simulation overhead. For perfect ORAMs/OPRAMs with deterministic performance bounds, our results achieve a subexponential improvement over the state of the art. Specifically, the best known prior scheme incurs more than $\sqrt{N}$ deterministic simulation overhead (Raskin and Simkin, Asiacrypt'19); moreover, their scheme works only for the sequential setting and is not amenable to parallelization. Finally, we additionally consider perfect ORAMs/OPRAMs whose performance bounds hold with high probability. For this new performance metric, we show new constructions whose simulation overhead is upper bounded by $O(\log^3 N/\log\log N)$ except with probability negligible in $N$, i.e., we prove high-probability performance bounds that match the expected bounds mentioned earlier.
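
    The folklore baseline for perfect obliviousness is the linear-scan RAM: every logical access touches every physical cell in the same fixed order, so the access pattern is literally identical for all request sequences, at $O(N)$ overhead per access. A minimal Python sketch of that baseline (our illustration; the paper's schemes reduce the overhead to polylogarithmic in $N$):

        class LinearScanORAM:
            """Trivial perfectly oblivious RAM: each access scans all N
            cells, performing a (possibly dummy) write at each one."""
            def __init__(self, n):
                self.cells = [0] * n

            def access(self, op, addr, value=None):
                result = None
                for i in range(len(self.cells)):   # same scan, every time
                    if i == addr:
                        result = self.cells[i]
                        self.cells[i] = value if op == "write" else self.cells[i]
                    else:
                        self.cells[i] = self.cells[i]  # dummy write
                return result

        oram = LinearScanORAM(8)
        oram.access("write", 3, 42)
        print(oram.access("read", 3))  # -> 42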

    An information theoretic necessary condition for perfect reconstruction

    This article proposes a new information theoretic necessary condition for reconstructing a discrete random variable $X$ based on the knowledge of a set of discrete functions of $X$. The reconstruction condition is derived from Shannon's Lattice of Information (LoI) [Shannon53] and two entropic metrics proposed respectively by Shannon and Rajski. Since this theoretical material is relatively little known and dispersed across different references, we provide a complete and synthetic description of the LoI concepts, such as the total, common and complementary information, with complete proofs. The definitions and properties of the two entropic metrics are also fully detailed and shown to be compatible with the LoI structure. A new geometric interpretation of the lattice structure is then investigated; it leads to a new necessary condition for reconstructing the discrete random variable $X$ given a set $\{X_0, \ldots, X_{n-1}\}$ of elements of the lattice generated by $X$. Finally, this condition is derived in five specific examples of reconstruction of $X$ from a set of deterministic functions of $X$: the reconstruction of a symmetric random variable from the knowledge of its sign and of its absolute value, the reconstruction of a binary word from a set of binary linear combinations, the reconstruction of an integer from its prime signature (fundamental theorem of arithmetic) and from its remainders modulo a set of coprime integers (Chinese remainder theorem), and the reconstruction of the sorting permutation of a list from a set of 2-by-2 comparisons. In each case, the necessary condition is shown to be compatible with the corresponding well-known results.
    Comment: 17 pages, 9 figures
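
    As a concrete instance of such a reconstruction condition (our illustration, not the paper's lattice formalism), take $X$ uniform on $\{0,\ldots,14\}$ with $X_0 = X \bmod 3$ and $X_1 = X \bmod 5$. The Chinese remainder theorem makes $X$ reconstructible from $(X_0, X_1)$, and the obvious entropic necessary condition $H(X) \leq H(X_0) + H(X_1)$ holds here with equality:

        import math

        # X uniform on {0,...,14}; X mod 3 and X mod 5 are each uniform,
        # since 3 and 5 both divide 15.
        H_X = math.log2(15)
        H_X0 = math.log2(3)
        H_X1 = math.log2(5)
        print(H_X, H_X0 + H_X1)  # both equal log2(15), about 3.907 bits

        # The reconstruction itself, by brute-force CRT lookup:
        def crt(r0, r1):
            return next(x for x in range(15) if x % 3 == r0 and x % 5 == r1)

        print(crt(2, 4))  # -> 14, since 14 % 3 == 2 and 14 % 5 == 4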