    The Fresh-Finger Property

    The unified property roughly states that searching for an element is fast when the current access is close to a recent access. Here, "close" refers to rank distance measured among all elements stored by the dictionary. We show that distance need not be measured this way: in fact, it is only necessary to consider a small working set of elements to measure this rank distance. This results in a data structure with access times that improve upon those offered by the unified property for many query sequences.
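    Schematically, the unified bound of Bădoiu, Cole, Demaine, and Iacono caps the cost of an access $x_i$ as follows; the notation here is ours, given only to make "close to a recent access" concrete:

    $$\mathrm{cost}(x_i) \;=\; O\Bigl(\min_{y}\,\log\bigl(w_i(y) + d(y, x_i) + 2\bigr)\Bigr),$$

    where $w_i(y)$ counts the distinct elements accessed since the last access to $y$, and $d(y, x_i)$ is the rank distance from $y$ to $x_i$ among all elements stored in the dictionary. The fresh-finger result above replaces this global rank distance with one measured only within a small working set.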

    Top-Down Skiplists

    We describe todolists (top-down skiplists), a variant of skiplists (Pugh 1990) that can execute searches using at most $\log_{2-\varepsilon} n + O(1)$ binary comparisons per search and that have amortized update time $O(\varepsilon^{-1}\log n)$. A variant of todolists, called working-todolists, can execute a search for any element $x$ using $\log_{2-\varepsilon} w(x) + o(\log w(x))$ binary comparisons and has amortized search time $O(\varepsilon^{-1}\log w(x))$. Here, $w(x)$ is the "working-set number" of $x$. No previous data structure is known to achieve a bound better than $4\log_2 w(x)$ comparisons. We show through experiments that, if implemented carefully, todolists are comparable to other common dictionary implementations in terms of insertion times and outperform them in terms of search times.
    Comment: 18 pages, 5 figures
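    For orientation, below is a minimal plain skiplist in the style of Pugh (1990), the structure that todolists refine. This is a generic sketch, not the todolist algorithm: the node layout and the coin-flip promotion rule are standard illustration, and none of the $\log_{2-\varepsilon} n$ machinery is attempted.

```python
import random

class SkipNode:
    def __init__(self, key, height):
        self.key = key
        self.forward = [None] * height      # forward[i] = next node at level i

class SkipList:
    """Plain Pugh-style skiplist: each inserted key is promoted one level
    with probability 1/2, giving O(log n) expected search time."""

    MAX_LEVEL = 32

    def __init__(self):
        self.head = SkipNode(None, self.MAX_LEVEL)

    def search(self, key):
        node = self.head
        # Start at the top level; drop down whenever the next key overshoots.
        for level in range(self.MAX_LEVEL - 1, -1, -1):
            while node.forward[level] is not None and node.forward[level].key < key:
                node = node.forward[level]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for level in range(self.MAX_LEVEL - 1, -1, -1):
            while node.forward[level] is not None and node.forward[level].key < key:
                node = node.forward[level]
            update[level] = node
        height = 1
        while height < self.MAX_LEVEL and random.random() < 0.5:
            height += 1
        new = SkipNode(key, height)
        for level in range(height):
            new.forward[level] = update[level].forward[level]
            update[level].forward[level] = new

# Example: sl = SkipList(); sl.insert(5); sl.insert(1); sl.search(5)  -> True
```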

    In pursuit of the dynamic optimality conjecture

    In 1985, Sleator and Tarjan introduced the splay tree, a self-adjusting binary search tree algorithm. Splay trees were conjectured to perform within a constant factor of any offline rotation-based search tree algorithm on every sufficiently long sequence---any binary search tree algorithm that has this property is said to be dynamically optimal. However, currently neither splay trees nor any other tree algorithm is known to be dynamically optimal. Here we survey the progress that has been made in the almost thirty years since the conjecture was first formulated, and present a binary search tree algorithm that is dynamically optimal if any binary search tree algorithm is dynamically optimal.
    Comment: Preliminary version of a paper to appear in the Conference on Space Efficient Data Structures, Streams and Algorithms, to be held in August 2013 in honor of Ian Munro's 66th birthday
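    As a reminder of the object the conjecture is about, here is a textbook recursive splay operation in Python (zig, zig-zig, and zig-zag rotations that bring the accessed key to the root). It is a generic sketch of splay trees, not a construction from this survey.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def rotate_right(x):
    y = x.left
    x.left = y.right
    y.right = x
    return y

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    return y

def splay(root, key):
    """Bring the node holding `key` (or the last node on its search path)
    to the root by rotations; returns the new root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                       # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                     # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                      # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                    # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)
```

    Self-adjustment is the whole trick: recently accessed keys drift toward the root, which is what makes amortized distribution-sensitive bounds, and the dynamic optimality question, possible.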

    Priority Queues with Multiple Time Fingers

    A priority queue is presented that supports the operations insert and find-min in worst-case constant time, and delete and delete-min on element $x$ in worst-case $O(\lg(\min\{w_x, q_x\}+2))$ time, where $w_x$ (respectively $q_x$) is the number of elements inserted after $x$ (respectively before $x$) that are still present at the time of the deletion of $x$. Our priority queue thus has both the working-set and the queueish properties and, more strongly, it satisfies these properties in the worst-case sense. We also define a new distribution-sensitive property---the time-finger property---which encapsulates and generalizes both the working-set and queueish properties, and present a priority queue that satisfies this property. In addition, we prove that the working-set property is equivalent to the unified bound (which is the minimum per operation among the static-finger, static-optimality, and working-set bounds). This latter result is of tremendous interest by itself, as it had gone unnoticed since the introduction of these bounds by Sleator and Tarjan [JACM 1985]. Accordingly, our priority queue also satisfies other distribution-sensitive properties such as the static-finger, static-optimality, and unified bounds.
    Comment: 14 pages, 4 figures
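    Purely to pin down the quantities $w_x$ and $q_x$ above, here is a hypothetical bookkeeping sketch (our own, not the paper's priority queue, and making no attempt at the stated time bounds): it tracks the insertion order of live elements and reports, at deletion time, how many live elements were inserted after and before $x$.

```python
import bisect

class TimeFingerTracker:
    """Tracks w_x (live elements inserted after x) and q_x (live elements
    inserted before x), the quantities appearing in the deletion bound."""

    def __init__(self):
        self._stamp = 0       # global insertion counter
        self._live = []       # sorted insertion stamps of elements still present
        self._stamp_of = {}   # element -> its insertion stamp

    def insert(self, x):
        self._stamp += 1
        self._stamp_of[x] = self._stamp
        bisect.insort(self._live, self._stamp)

    def delete(self, x):
        t = self._stamp_of.pop(x)
        i = bisect.bisect_left(self._live, t)
        q_x = i                              # inserted before x, still present
        w_x = len(self._live) - i - 1        # inserted after x, still present
        self._live.pop(i)
        return w_x, q_x

# Example: insert a, b, c; delete(b) -> (1, 1), i.e. w_b = 1 (c) and q_b = 1 (a)
```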

    Cache-Oblivious Implicit Predecessor Dictionaries with the Working Set Property

    In this paper we present an implicit dynamic dictionary with the working-set property, supporting insert(e) and delete(e) in $O(\log n)$ time, predecessor(e) in $O(\log l_{p(e)})$ time, successor(e) in $O(\log l_{s(e)})$ time and search(e) in $O(\log \min(l_{p(e)}, l_{e}, l_{s(e)}))$ time, where $n$ is the number of elements stored in the dictionary, $l_{e}$ is the number of distinct elements searched for since element $e$ was last searched for, and $p(e)$ and $s(e)$ are the predecessor and successor of $e$, respectively. The time bounds are all worst-case. The dictionary stores the elements in an array of size $n$ using no additional space. In the cache-oblivious model the logarithms are base $B$, and the cache-obliviousness is due to our black-box use of an existing cache-oblivious implicit dictionary. This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound; previous implicit structures required $O(\log n)$ time.
    Comment: An extended abstract was accepted at STACS 2012 (Symposium on Theoretical Aspects of Computer Science, 2012); this is the full version of that paper, with the same name, "Cache-Oblivious Implicit Predecessor Dictionaries with the Working-Set Property".
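    To make the quantity $l_{e}$ concrete, a small helper (hypothetical, not part of the paper's structure) that computes it directly from a search sequence:

```python
def working_set_number(accesses, e):
    """l_e: the number of distinct elements searched for since element e was
    last searched for, per the definition above.  `accesses` is the search
    sequence so far, oldest first; returns None if e was never searched for."""
    if e not in accesses:
        return None
    last = len(accesses) - 1 - accesses[::-1].index(e)
    return len(set(accesses[last + 1:]))

# working_set_number([3, 7, 3, 9, 2, 9], 3) -> 2  (elements 9 and 2 since the last search for 3)
```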

    Belga B-trees

    We revisit self-adjusting external memory tree data structures, which combine the optimal (and practical) worst-case I/O performance of B-trees with adaptivity to the online distribution of queries. Our approach is analogous to ongoing efforts in the BST model, where Tango Trees (Demaine et al. 2007) were shown to be $O(\log\log N)$-competitive with the runtime of the best offline binary search tree on every sequence of searches. Here we formalize the B-Tree model as a natural generalization of the BST model. We prove lower bounds for the B-Tree model, and introduce a B-Tree model data structure, the Belga B-tree, that executes any sequence of searches within an $O(\log \log N)$ factor of the best offline B-Tree model algorithm, provided $B=\log^{O(1)}N$. We also show how to transform any static BST into a static B-tree which is faster by a $\Theta(\log B)$ factor; the transformation is randomized and we show that randomization is necessary to obtain any significant speedup.
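    For context on the model, here is a plain (static, non-self-adjusting) B-tree search in Python; the node layout and fan-out are illustrative, and this is not the Belga B-tree. Each node is resolved by a binary search over at most $B-1$ keys, so a root-to-leaf search touches $O(\log_B N)$ nodes rather than the $O(\log N)$ nodes of a balanced BST, which is the flavor of the $\Theta(\log B)$ speedup in the static transformation.

```python
import bisect

class BTreeNode:
    """One node in the B-tree model: a sorted list of at most B-1 keys and,
    for internal nodes, one child per key gap (len(children) == len(keys)+1)."""
    def __init__(self, keys, children=None):
        self.keys = keys
        self.children = children or []   # empty list means this node is a leaf

def btree_search(root, key):
    """Top-down B-tree search: O(log_B N) node accesses,
    O(log N) comparisons overall via per-node binary search."""
    node = root
    while node is not None:
        i = bisect.bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return True
        if not node.children:            # reached a leaf without finding the key
            return False
        node = node.children[i]          # descend into the i-th subtree
    return False

# Example: a two-level tree with B = 4
root = BTreeNode([4, 8], [BTreeNode([1, 2]), BTreeNode([5, 6]), BTreeNode([9, 12])])
assert btree_search(root, 6) and not btree_search(root, 7)
```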

    The Log-Interleave Bound: Towards the Unification of Sorting and the BST Model

    We study the connections between sorting and the binary search tree model, with an aim towards showing that the fields are connected more deeply than is currently known. The main vehicle of our study is the log-interleave bound, a measure of the information-theoretic complexity of a permutation $\pi$. When viewed through the lens of adaptive sorting -- the study of lists which are nearly sorted according to some measure of disorder -- the log-interleave bound is comparable to the most powerful known measure of disorder. Many of these measures of disorder are themselves virtually identical to well-known upper bounds in the BST model, such as the working set bound or the dynamic finger bound, suggesting a connection between BSTs and sorting. We present three results about the log-interleave bound which solidify the aforementioned connections. The first is a proof that the log-interleave bound is always within a $\lg \lg n$ multiplicative factor of a known lower bound in the BST model, meaning that an online BST algorithm matching the log-interleave bound would perform within the same bounds as the state-of-the-art $\lg \lg n$-competitive BST. The second result is an offline algorithm in the BST model which uses $O(\text{LIB}(\pi))$ accesses to search for any permutation $\pi$. The technique used to design this algorithm also serves as a general way to show whether a sorting algorithm can be transformed into an offline BST algorithm. The final result is a mergesort algorithm which performs work within the log-interleave bound of a permutation $\pi$. This mergesort also happens to be highly parallel, adding to a line of work in parallel BST operations.
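    For reference, a plain top-down mergesort in Python. The paper's algorithm is a parallel, adaptive variant whose merging schedule realizes the log-interleave bound; this generic sketch attempts none of that and is included only to fix the baseline being improved upon.

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list (stable)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Standard top-down mergesort: O(n log n) comparisons regardless of
    how presorted the input already is."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

# merge_sort([3, 1, 4, 1, 5]) -> [1, 1, 3, 4, 5]
```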