
    Improved Pattern-Avoidance Bounds for Greedy BSTs via Matrix Decomposition

    Greedy BST (or simply Greedy) is an online self-adjusting binary search tree defined in the geometric view (Lucas, 1988; Munro, 2000; Demaine, Harmon, Iacono, Kane, Patrascu, SODA 2009). Along with Splay trees (Sleator, Tarjan 1985), Greedy is considered the most promising candidate for being dynamically optimal, i.e., starting with any initial tree, their access costs on any sequence are conjectured to be within an $O(1)$ factor of the offline optimum. However, in the past four decades, the question has remained elusive even for highly restricted inputs. In this paper, we prove new bounds on the cost of Greedy in the "pattern avoidance" regime. Our new results include: The (preorder) traversal conjecture for Greedy holds up to a factor of $O(2^{\alpha(n)})$, improving upon the bound of $2^{\alpha(n)^{O(1)}}$ in (Chalermsook et al., FOCS 2015); this is the best known bound obtained by any online BST. We settle the postorder traversal conjecture for Greedy. The deque conjecture for Greedy holds up to a factor of $O(\alpha(n))$, improving upon the bound $2^{O(\alpha(n))}$ in (Chalermsook et al., WADS 2015). The split conjecture holds for Greedy up to a factor of $O(2^{\alpha(n)})$. Key to all these results is to partition (based on the input structure) the execution log of Greedy into several simpler-to-analyze subsets for which classical forbidden-submatrix bounds can be leveraged. Finally, we show the applicability of this technique to a class of increasingly complex pattern-avoiding input sequences, called $k$-increasing sequences. As a bonus, we discover a new class of permutation matrices whose extremal bounds are polynomially bounded. This gives partial progress on an open question of Jacob Fox (2013). Comment: Accepted to SODA 202
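
    For readers unfamiliar with Greedy's geometric formulation, the following is a minimal simulation sketch, assuming the standard characterization of the touched set: at each access, Greedy touches the accessed key plus, on each side, every previously touched key whose last-touch time exceeds that of all keys between it and the accessed key. The function name and the naive O(n)-per-access scan are illustrative choices, not the paper's implementation.

```python
# Minimal, unoptimized simulation of Greedy in the geometric view
# (Demaine et al., SODA 2009). At time t with accessed key x, a previously
# touched key y must also be touched iff the rectangle spanned by (x, t)
# and (y, last_touch(y)) contains no other point, i.e. last_touch(y) beats
# the running maximum of last-touch times between x and y.

def greedy_cost(accesses, n):
    """Total number of points Greedy touches on `accesses`, keys in 1..n."""
    last = {}                              # key -> time it was last touched
    total = 0
    for t, x in enumerate(accesses, start=1):
        touched = [x]
        for step in (1, -1):               # scan to the right, then to the left of x
            run = last.get(x, 0)           # running max of last-touch times seen so far
            y = x + step
            while 1 <= y <= n:
                lt = last.get(y)
                if lt is not None and lt > run:
                    touched.append(y)      # its rectangle with (x, t) is empty
                    run = lt
                y += step
        for y in touched:
            last[y] = t
        total += len(touched)
    return total

# Example: sequential access is linear under Greedy.
print(greedy_cost(list(range(1, 9)), 8))   # 15 = 2*8 - 1
```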

    Smooth heaps and a dual view of self-adjusting data structures

    We present a new connection between self-adjusting binary search trees (BSTs) and heaps, two fundamental, extensively studied, and practically relevant families of data structures. Roughly speaking, we map an arbitrary heap algorithm within a natural model to a corresponding BST algorithm with the same cost on a dual sequence of operations (i.e. the same sequence with the roles of time and key-space switched). This is the first general transformation between the two families of data structures. There is a rich theory of dynamic optimality for BSTs (i.e. the theory of competitiveness between BST algorithms). The lack of an analogous theory for heaps has been noted in the literature. Through our connection, we transfer all instance-specific lower bounds known for BSTs to a general model of heaps, initiating a theory of dynamic optimality for heaps. On the algorithmic side, we obtain a new, simple and efficient heap algorithm, which we call the smooth heap. We show the smooth heap to be the heap-counterpart of Greedy, the BST algorithm with the strongest proven and conjectured properties from the literature, widely believed to be instance-optimal. Assuming the optimality of Greedy, the smooth heap is also optimal within our model of heap algorithms. As corollaries of results known for Greedy, we obtain instance-specific upper bounds for the smooth heap, with applications in adaptive sorting. Intriguingly, the smooth heap, although derived from a non-practical BST algorithm, is simple and easy to implement (e.g. it stores no auxiliary data besides the keys and tree pointers). It can be seen as a variation on the popular pairing heap data structure, extending it with a "power-of-two-choices" type of heuristic. Comment: Presented at STOC 2018, light revision, additional figure
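
    To make the "power-of-two-choices" heuristic concrete, here is a simplified, list-based sketch of the consolidation step as we read the informal description: after the minimum root is removed, repeatedly pick a root whose key is larger than both of its neighbors and link it below the smaller of the two (below its only neighbor if it sits at an end). The class and function names, the list-of-children representation, and the insert-by-meld usage are illustrative assumptions, not the authors' pointer-based implementation; keys are assumed distinct.

```python
# Simplified sketch of smooth-heap style consolidation (min-heap order).

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []   # ordered list of subtrees

def consolidate(roots):
    """Merge a left-to-right list of heap-ordered trees into a single tree."""
    while len(roots) > 1:
        # find a locally maximal root (the global maximum always qualifies)
        for i, r in enumerate(roots):
            left_ok = i == 0 or roots[i - 1].key < r.key
            right_ok = i == len(roots) - 1 or roots[i + 1].key < r.key
            if left_ok and right_ok:
                break
        if i == 0:
            roots[1].children.insert(0, roots.pop(0))          # only neighbor is on the right
        elif i == len(roots) - 1:
            roots[-2].children.append(roots.pop())             # only neighbor is on the left
        elif roots[i - 1].key < roots[i + 1].key:
            roots[i - 1].children.append(roots.pop(i))         # power of two choices:
        else:
            roots[i + 1].children.insert(0, roots.pop(i))      # link under the smaller neighbor
    return roots[0]

def delete_min(root):
    """Remove the minimum and rebuild the heap from its list of children."""
    rest = consolidate(root.children) if root.children else None
    return root.key, rest

# toy usage: insert = consolidate the current heap with a one-node tree
heap = None
for k in [5, 3, 8, 1, 7, 2]:
    heap = Node(k) if heap is None else consolidate([heap, Node(k)])
print(delete_min(heap)[0])   # 1
```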

    New Paths from Splay to Dynamic Optimality

    Consider the task of performing a sequence of searches in a binary search tree. After each search, an algorithm is allowed to arbitrarily restructure the tree, at a cost proportional to the amount of restructuring performed. The cost of an execution is the sum of the time spent searching and the time spent optimizing those searches with restructuring operations. This notion was introduced by Sleator and Tarjan (JACM, 1985), along with an algorithm and a conjecture. The algorithm, Splay, is an elegant procedure for performing adjustments while moving searched items to the top of the tree. The conjecture, called "dynamic optimality," is that the cost of splaying is always within a constant factor of the optimal algorithm for performing searches. The conjecture stands to this day. In this work, we attempt to lay the foundations for a proof of the dynamic optimality conjecture. Comment: An earlier version of this work appeared in the Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms. arXiv admin note: text overlap with arXiv:1907.0630

    Splaying Preorders and Postorders

    Let $T$ be a binary search tree. We prove two results about the behavior of the Splay algorithm (Sleator and Tarjan 1985). Our first result is that inserting keys into an empty binary search tree via splaying in the order of either $T$'s preorder or $T$'s postorder takes linear time. Our proof uses the fact that preorders and postorders are pattern-avoiding: i.e. they contain no subsequences that are order-isomorphic to $(2,3,1)$ and $(3,1,2)$, respectively. Pattern-avoidance implies certain constraints on the manner in which items are inserted. We exploit this structure with a simple potential function that counts inserted nodes lying on access paths to uninserted nodes. Our methods can likely be extended to permutations that avoid more general patterns. Second, if $T'$ is any other binary search tree with the same keys as $T$ and $T$ is weight-balanced (Nievergelt and Reingold 1973), then splaying $T$'s preorder sequence or $T$'s postorder sequence starting from $T'$ takes linear time. To prove this, we demonstrate that preorders and postorders of balanced search trees do not contain many large "jumps" in symmetric order, and exploit this fact by using the dynamic finger theorem (Cole et al. 2000). Both of our results provide further evidence in favor of the elusive "dynamic optimality conjecture."
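
    Since the result turns on splay-based insertion, a compact sketch may help fix the mechanics. Below is a standard recursive splay (zig, zig-zig, zig-zag rotations) with insertion implemented as plain BST insertion followed by splaying the newly inserted key; the names are ours, keys are assumed distinct, and the code does not track the potential function used in the paper's analysis.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    y = x.left; x.left = y.right; y.right = x; return y

def rotate_left(x):
    y = x.right; x.right = y.left; y.left = x; return y

def splay(root, key):
    """Bring the node holding `key` (or the last node on its search path) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                         # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                       # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return rotate_right(root) if root.left is not None else root
    else:
        if root.right is None:
            return root
        if key > root.right.key:                        # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                      # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return rotate_left(root) if root.right is not None else root

def splay_insert(root, key):
    """Insert `key` as in a plain BST, then splay it to the root."""
    if root is None:
        return Node(key)
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None: cur.left = Node(key); break
            cur = cur.left
        else:
            if cur.right is None: cur.right = Node(key); break
            cur = cur.right
    return splay(root, key)

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)

root = None
for k in [4, 2, 1, 3, 6, 5, 7]:        # preorder of a balanced BST on keys 1..7
    root = splay_insert(root, k)
print(inorder(root))                    # [1, 2, 3, 4, 5, 6, 7]
```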

    Sorting Pattern-Avoiding Permutations via 0-1 Matrices Forbidding Product Patterns

    We consider the problem of comparison-sorting an $n$-permutation $S$ that avoids some $k$-permutation $\pi$. Chalermsook, Goswami, Kozma, Mehlhorn, and Saranurak prove that when $S$ is sorted by inserting the elements into the GreedyFuture binary search tree, the running time is linear in the extremal function $\mathrm{Ex}(P_\pi\otimes \text{hat},n)$. This is the maximum number of 1s in an $n\times n$ 0-1 matrix avoiding $P_\pi \otimes \text{hat}$, where $P_\pi$ is the $k\times k$ permutation matrix of $\pi$, $\otimes$ the Kronecker product, and $\text{hat} = \left(\begin{array}{ccc}&\bullet&\\\bullet&&\bullet\end{array}\right)$. The same time bound can be achieved by sorting $S$ with Kozma and Saranurak's SmoothHeap. In this paper we give nearly tight upper and lower bounds on the density of $P_\pi\otimes\text{hat}$-free matrices in terms of the inverse-Ackermann function $\alpha(n)$: $$\mathrm{Ex}(P_\pi\otimes \text{hat},n) = \begin{cases} \Omega(n\cdot 2^{\alpha(n)}), & \text{for most } \pi,\\ O(n\cdot 2^{O(k^2)+(1+o(1))\alpha(n)}), & \text{for all } \pi. \end{cases}$$ As a consequence, sorting $\pi$-free sequences can be performed in $O(n\cdot 2^{(1+o(1))\alpha(n)})$ time. For many corollaries of the dynamic optimality conjecture, the best analysis uses forbidden 0-1 matrix theory. Our analysis may be useful in analyzing other classes of access sequences on binary search trees.
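
    The statements above lean on standard forbidden 0-1 matrix terminology; the brute-force sketch below only illustrates the definitions (pattern containment, the Kronecker product with hat, and the extremal function Ex(P, n)) on toy sizes. The function names are ours, and the exhaustive search for Ex is exponential, intended solely to make the definitions concrete.

```python
# A matrix A "contains" a pattern P if some submatrix of A (rows and columns
# kept in their original order) has a 1 wherever P does; A "avoids" P otherwise.
# Ex(P, n) is the maximum number of 1s in an n x n 0-1 matrix avoiding P.

from itertools import combinations

HAT = [[0, 1, 0],
       [1, 0, 1]]

def perm_matrix(pi):
    """k x k permutation matrix of a permutation given as a tuple over 1..k."""
    k = len(pi)
    return [[1 if pi[i] == j + 1 else 0 for j in range(k)] for i in range(k)]

def kron(A, B):
    """Kronecker product of two 0-1 matrices."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def contains(A, P):
    """Does A contain P as a pattern? (brute force over row/column selections)"""
    n, m, k, l = len(A), len(A[0]), len(P), len(P[0])
    return any(all(A[rows[i]][cols[j]] >= P[i][j]
                   for i in range(k) for j in range(l))
               for rows in combinations(range(n), k)
               for cols in combinations(range(m), l))

def ex(P, n):
    """Ex(P, n) by exhaustive search over all n x n 0-1 matrices (tiny n only)."""
    best = 0
    cells = [(i, j) for i in range(n) for j in range(n)]
    for mask in range(1 << (n * n)):
        A = [[0] * n for _ in range(n)]
        ones = 0
        for b, (i, j) in enumerate(cells):
            if mask >> b & 1:
                A[i][j] = 1
                ones += 1
        if ones > best and not contains(A, P):
            best = ones
    return best

# The forbidden pattern for pi = (2, 1): P_pi (x) hat is a 4 x 6 matrix.
P = kron(perm_matrix((2, 1)), HAT)
print(len(P), len(P[0]))       # 4 6
print(ex([[0, 1], [1, 0]], 3)) # 5: the 2x2 anti-diagonal pattern gives Ex = 2n - 1
```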

    The Landscape of Bounds for Binary Search Trees

    Binary search trees (BSTs) with rotations can adapt to various kinds of structure in search sequences, achieving amortized access times substantially better than the $\Theta(\log n)$ worst-case guarantee. Classical examples of structural properties include static optimality, sequential access, working set, key-independent optimality, and dynamic finger, all of which are now known to be achieved by the two famous online BST algorithms (Splay and Greedy). (...) In this paper, we introduce novel properties that explain the efficiency of sequences not captured by any of the previously known properties, and which provide new barriers to the dynamic optimality conjecture. We also establish connections between various properties, old and new. For instance, we show the following. (i) A tight bound of $O(n \log d)$ on the cost of Greedy for $d$-decomposable sequences. The result builds on the recent lazy finger result of Iacono and Langerman (SODA 2016). On the other hand, we show that lazy finger alone cannot explain the efficiency of pattern-avoiding sequences even in some of the simplest cases. (ii) A hierarchy of bounds using multiple lazy fingers, addressing a recent question of Iacono and Langerman. (iii) The optimality of the Move-to-root heuristic in the key-independent setting introduced by Iacono (Algorithmica 2005). (iv) A new tool that allows combining any finite number of sound structural properties. As an application, we show an upper bound on the cost of a class of sequences that all known properties fail to capture. (v) The equivalence between two families of BST properties. The observation on which this connection is based was known before; we make it explicit, and apply it to classical BST properties. (...

    On Dynamic Optimality for Binary Search Trees

    Do there exist $O(1)$-competitive (self-adjusting) binary search tree (BST) algorithms? This is a well-studied problem. A simple offline BST algorithm, GreedyFuture, was proposed independently by Lucas and Munro, and they conjectured it to be $O(1)$-competitive. Recently, Demaine et al. gave a geometric view of the BST problem. This view allowed them to give an online algorithm, GreedyArb, with the same cost as GreedyFuture. However, no $o(n)$ competitive ratio was known for GreedyArb. In this paper we make progress towards proving an $O(1)$-competitive ratio for GreedyArb by showing that it is $O(\log n)$-competitive.