    Dynamic Trees with Almost-Optimal Access Cost

    An optimal binary search tree for an access sequence over a set of elements is a static tree that minimizes the total search cost. Constructing perfectly optimal binary search trees is expensive, so the most efficient algorithms construct almost-optimal search trees. There is a long line of work on constructing almost-optimal search trees dynamically, i.e., when the access pattern is not known in advance. All of these trees, e.g., splay trees and treaps, provide a multiplicative approximation to the optimal search cost. In this paper we show how to maintain an almost-optimal weighted binary search tree under access operations and insertions of new elements, where the approximation is an additive constant. More technically, we maintain a tree in which the depth of the leaf holding an element e_i does not exceed min(log(W/w_i), log n) + O(1), where w_i is the number of times e_i was accessed and W is the total length of the access sequence. Our techniques can also be used to encode a sequence of m symbols with a dynamic alphabetic code in O(m) time, so that the encoding length is bounded by m(H + O(1)), where H is the entropy of the sequence. This is the first efficient algorithm for adaptive alphabetic coding that runs in constant time per symbol.
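
    A minimal static sketch of where such a depth bound comes from (assuming a Mehlhorn-style weight-bisection rule, which is not the paper's dynamic construction; all names below are illustrative): choosing each root so that it splits the remaining total weight as evenly as possible places an element of weight w_i at depth roughly log2(W/w_i) + O(1).

```python
def build_weight_balanced(items, weights):
    """Build a BST over sorted `items` by picking, at each step, the root
    whose left/right subtrees split the total access weight as evenly as
    possible. This static rule yields depth(e_i) ~ log2(W / w_i) + O(1);
    the paper maintains such a bound dynamically and additionally caps
    every depth at log2(n) + O(1)."""
    def build(lo, hi):                     # half-open index range
        if lo >= hi:
            return None
        acc, total, r = 0.0, sum(weights[lo:hi]), lo
        while r < hi - 1 and acc + weights[r] < total / 2:
            acc += weights[r]              # grow the left prefix
            r += 1
        return (items[r], build(lo, r), build(r + 1, hi))
    return build(0, len(items))

def depths(node, d=0, out=None):
    """Collect key -> depth, for checking against log2(W / w_i)."""
    out = {} if out is None else out
    if node is not None:
        key, left, right = node
        out[key] = d
        depths(left, d + 1, out)
        depths(right, d + 1, out)
    return out

# Example: W = 31, so "a" (weight 16, log2(31/16) < 1) lands at the root.
tree = build_weight_balanced(["a", "b", "c", "d", "e"], [16, 1, 8, 2, 4])
```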

    Arbitrary weight changes in dynamic trees

    We describe an implementation of dynamic weighted trees, called D-trees. Given a set {B_0, ..., B_n} of objects and access frequencies q_0, q_1, ..., q_n, one wants to store the objects in a binary tree such that the average access time is nearly optimal and changes of the access frequencies require only small changes of the tree. In D-trees the changes are always limited to the search path, and hence the update time is at most proportional to the search time.
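
    A minimal sketch of the quantity D-trees keep nearly optimal (function names are illustrative, not from the paper): the frequency-weighted average access cost, together with the entropy lower bound it is compared against.

```python
import math

def average_access_cost(depth_of, freq):
    """Expected number of nodes inspected per access:
    sum_i q_i * (depth_i + 1) / sum_i q_i."""
    total = sum(freq.values())
    return sum(q * (depth_of[k] + 1) for k, q in freq.items()) / total

def entropy_bound(freq):
    """Shannon entropy of the access distribution; the average access
    cost of any static search tree is, up to constants, at least this."""
    total = sum(freq.values())
    return sum(q / total * math.log2(total / q) for q in freq.values() if q)
```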

    What Does Dynamic Optimality Mean in External Memory?

    A data structure A is said to be dynamically optimal over a class of data structures 𝒞 if A is constant-competitive with every data structure C ∈ 𝒞. Much of the research on binary search trees in the past forty years has focused on studying dynamic optimality over the class of binary search trees that are modified via rotations (and indeed, the question of whether splay trees are dynamically optimal has gained notoriety as the so-called dynamic-optimality conjecture). Recently, researchers have extended this to consider dynamic optimality over certain classes of external-memory search trees. In particular, Demaine, Iacono, Koumoutsos, and Langerman propose a class of external-memory trees that support a notion of tree rotations, and then give an elegant data structure, called the Belga B-tree, that is within an O(log log N)-factor of being dynamically optimal over this class. In this paper, we revisit the question of how dynamic optimality should be defined in external memory. A defining characteristic of external-memory data structures is that there is a stark asymmetry between queries and inserts/updates/deletes: by making the former slightly asymptotically slower, one can make the latter significantly asymptotically faster (even allowing for operations with sub-constant amortized I/Os). This asymmetry makes it so that rotation-based search trees are not optimal (or even close to optimal) in insert/update/delete-heavy external-memory workloads. To study dynamic optimality for such workloads, one must consider a different class of data structures. The natural class of data structures to consider are what we call buffered-propagation trees. Such trees can adapt dynamically to the locality properties of an input sequence in order to optimize the interactions between different inserts/updates/deletes and queries. We also present a new form of beyond-worst-case analysis that allows us to formally study a continuum between static and dynamic optimality. Finally, we give a novel data structure, called the Jεllo Tree, that is statically optimal and that achieves dynamic optimality for a large natural class of inputs defined by our beyond-worst-case analysis.
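
    A minimal sketch of the buffered-propagation idea under assumed names (this is neither the Belga B-tree nor the Jεllo Tree): inserts are appended to the root's buffer and flushed toward the leaves in batches, so each message costs far less than a full root-to-leaf walk, while queries must inspect every buffer on their search path.

```python
BUFFER_CAP = 4   # illustrative; a real external-memory tree sizes buffers to a block

class Node:
    def __init__(self, pivots=None, children=None):
        self.pivots = pivots or []        # separator keys
        self.children = children or []    # len(children) == len(pivots) + 1
        self.buffer = []                  # pending insert messages
        self.keys = set()                 # keys resident at a leaf

    def is_leaf(self):
        return not self.children

def insert(root, key):
    """Buffered insert: O(1) work now; the key migrates downward only
    when buffers overflow, amortizing the cost over whole batches."""
    root.buffer.append(key)
    _flush_if_full(root)

def _flush_if_full(node):
    if len(node.buffer) <= BUFFER_CAP:
        return
    if node.is_leaf():
        node.keys.update(node.buffer)     # apply messages at the leaf
    else:
        for key in node.buffer:           # route each message to its child
            i = sum(1 for p in node.pivots if key >= p)
            node.children[i].buffer.append(key)
    node.buffer.clear()
    for child in node.children:
        _flush_if_full(child)

def contains(node, key):
    """Queries check every buffer along the root-to-leaf search path."""
    while True:
        if key in node.buffer:
            return True
        if node.is_leaf():
            return key in node.keys
        node = node.children[sum(1 for p in node.pivots if key >= p)]
```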

    New Paths from Splay to Dynamic Optimality

    Consider the task of performing a sequence of searches in a binary search tree. After each search, an algorithm is allowed to arbitrarily restructure the tree, at a cost proportional to the amount of restructuring performed. The cost of an execution is the sum of the time spent searching and the time spent optimizing those searches with restructuring operations. This notion was introduced by Sleator and Tarjan (JACM, 1985), along with an algorithm and a conjecture. The algorithm, Splay, is an elegant procedure for performing adjustments while moving searched items to the top of the tree. The conjecture, called "dynamic optimality," is that the cost of splaying is always within a constant factor of the optimal algorithm for performing searches. The conjecture stands to this day. In this work, we attempt to lay the foundations for a proof of the dynamic optimality conjecture.

    Comment: An earlier version of this work appeared in the Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms. arXiv admin note: text overlap with arXiv:1907.0630
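
    For concreteness, a sketch of the standard top-down splay step (following Sleator and Tarjan's published procedure; the Python rendering and names here are illustrative): after a search, the accessed item is rotated to the root via zig, zig-zig, and zig-zag restructuring, which is exactly the kind of restructuring the conjecture prices.

```python
class SplayNode:
    __slots__ = ("key", "left", "right")

    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def splay(t, key):
    """Top-down splay: search for `key`, restructuring with zig, zig-zig,
    and zig-zag steps, and return the new root (the node holding `key`,
    or the last node reached on its search path)."""
    if t is None:
        return None
    header = SplayNode(None)              # scaffold for the side trees
    l = r = header
    while True:
        if key < t.key:
            if t.left is None:
                break
            if key < t.left.key:          # zig-zig: rotate right first
                y = t.left
                t.left = y.right
                y.right = t
                t = y
                if t.left is None:
                    break
            r.left = t                    # link t into the right side tree
            r = t
            t = t.left
        elif key > t.key:
            if t.right is None:
                break
            if key > t.right.key:         # zig-zig (mirrored): rotate left
                y = t.right
                t.right = y.left
                y.left = t
                t = y
                if t.right is None:
                    break
            l.right = t                   # link t into the left side tree
            l = t
            t = t.right
        else:
            break
    l.right = t.left                      # reassemble the three pieces
    r.left = t.right
    t.left = header.right
    t.right = header.left
    return t

# Usage: root = splay(root, key) after every search; frequently accessed
# keys accumulate near the root, and the conjecture says total splaying
# cost is within a constant factor of any restructuring strategy.
```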

    Optimal Binary Search Trees with Near Minimal Height

    Suppose we have n keys, n access probabilities for the keys, and n+1 access probabilities for the gaps between the keys. Let h_min(n) be the minimal height of a binary search tree for n keys. We consider the problem of constructing an optimal binary search tree with near-minimal height, i.e., with height h <= h_min(n) + Delta for some fixed Delta. It is shown that for any fixed Delta, optimal binary search trees with near-minimal height can be constructed in time O(n^2). This is as fast as in the unrestricted case. So far, the best known algorithms for the construction of height-restricted optimal binary search trees have running time O(L n^2), where L is the maximal permitted height. Compared to these algorithms, our algorithm is faster by at least a factor of log n, because L is lower-bounded by log n.
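
    For reference, a sketch of the unrestricted O(n^2) baseline the paper matches: the classic optimal-BST dynamic program with Knuth's root-monotonicity speedup. Gap probabilities are omitted here for brevity, names are illustrative, and the height-restricted algorithm itself is not reproduced.

```python
def optimal_bst_cost(p):
    """Unrestricted optimal-BST DP in O(n^2) time via Knuth's speedup.
    p[i] is the access weight of the i-th smallest key (gap weights
    omitted). cost[i][j] = minimal weighted search cost for keys i..j."""
    n = len(p)
    if n == 0:
        return 0.0
    pref = [0.0] * (n + 1)                 # prefix sums of weights
    for i, w in enumerate(p):
        pref[i + 1] = pref[i] + w
    cost = [[0.0] * n for _ in range(n)]
    root = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = p[i]
        root[i][i] = i
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best, arg = float("inf"), i
            # Knuth's monotonicity: root[i][j-1] <= root[i][j] <= root[i+1][j]
            for r in range(root[i][j - 1], root[i + 1][j] + 1):
                left = cost[i][r - 1] if r > i else 0.0
                right = cost[r + 1][j] if r < j else 0.0
                if left + right < best:
                    best, arg = left + right, r
            # every key in i..j sits one level deeper, adding its weight once
            cost[i][j] = best + (pref[j + 1] - pref[i])
            root[i][j] = arg
    return cost[0][n - 1]
```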