Priority Queues with Multiple Time Fingers
A priority queue is presented that supports the operations insert and
find-min in worst-case constant time, and delete and delete-min on element x in
worst-case O(lg(min{w_x, q_x}+2)) time, where w_x (respectively q_x) is the
number of elements inserted after x (respectively before x) that are still
present at the time of the deletion of x. Our priority queue thus has both the
working-set and the queueish properties; more strongly, it satisfies these
properties in the worst-case sense. We also define a new distribution-sensitive
property---the time-finger property, which encapsulates and generalizes both
the working-set and queueish properties, and present a priority queue that
satisfies this property.
In addition, we prove a strong result: the working-set property is equivalent
to the unified bound (which is the minimum per operation among the
static-finger, static-optimality, and working-set bounds). This latter result
is of independent interest, as it had gone unnoticed since the introduction of
these bounds by Sleator and Tarjan [JACM 1985]. Accordingly, our priority
queue satisfies other distribution-sensitive properties such as the
static-finger, static-optimality, and unified bounds.
Comment: 14 pages, 4 figures
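As an illustration of the cost measure defined in the abstract above (not of the paper's data structure, which achieves these bounds in worst-case time), the following Python sketch tracks insertion order and reports, for each deletion of an element x, the quantities w_x and q_x and the claimed bound lg(min{w_x, q_x}+2). The class and method names are hypothetical.

```python
# Toy illustration of the time-finger cost measure, NOT the actual
# priority queue from the paper. All names here are hypothetical.
import math

class TracedQueue:
    """Tracks insertion order so that, on deletion of x, we can report
    w_x (elements inserted after x, still present) and q_x (elements
    inserted before x, still present)."""

    def __init__(self):
        self._items = []  # keys in insertion order; deletions remove in place

    def insert(self, key):
        self._items.append(key)

    def delete(self, key):
        idx = self._items.index(key)
        q_x = idx                         # still-present elements inserted before x
        w_x = len(self._items) - idx - 1  # still-present elements inserted after x
        del self._items[idx]
        # The paper's claimed worst-case deletion cost: O(lg(min{w_x, q_x} + 2))
        return math.log2(min(w_x, q_x) + 2)
```

For example, after inserting a, b, c, d in that order, deleting b has q_b = 1 and w_b = 2, so the bound is lg(min{2, 1} + 2) = lg 3.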
Smooth heaps and a dual view of self-adjusting data structures
We present a new connection between self-adjusting binary search trees (BSTs)
and heaps, two fundamental, extensively studied, and practically relevant
families of data structures. Roughly speaking, we map an arbitrary heap
algorithm within a natural model, to a corresponding BST algorithm with the
same cost on a dual sequence of operations (i.e. the same sequence with the
roles of time and key-space switched). This is the first general transformation
between the two families of data structures.
There is a rich theory of dynamic optimality for BSTs (i.e. the theory of
competitiveness between BST algorithms). The lack of an analogous theory for
heaps has been noted in the literature. Through our connection, we transfer all
instance-specific lower bounds known for BSTs to a general model of heaps,
initiating a theory of dynamic optimality for heaps.
On the algorithmic side, we obtain a new, simple and efficient heap
algorithm, which we call the smooth heap. We show the smooth heap to be the
heap-counterpart of Greedy, the BST algorithm with the strongest proven and
conjectured properties from the literature, widely believed to be
instance-optimal. Assuming the optimality of Greedy, the smooth heap is also
optimal within our model of heap algorithms. As corollaries of results known
for Greedy, we obtain instance-specific upper bounds for the smooth heap, with
applications in adaptive sorting.
Intriguingly, the smooth heap, although derived from a non-practical BST
algorithm, is simple and easy to implement (e.g. it stores no auxiliary data
besides the keys and tree pointers). It can be seen as a variation on the
popular pairing heap data structure, extending it with a "power-of-two-choices"
type of heuristic.
Comment: Presented at STOC 2018; light revision, additional figure
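Since the abstract describes the smooth heap as a variation on the pairing heap, a minimal sketch of the classical two-pass pairing heap may help orient the reader. This is textbook code, not the paper's algorithm: the smooth heap changes how the children are consolidated during delete-min, and all names below are illustrative.

```python
# Minimal two-pass pairing heap, the classical structure the smooth
# heap refines. Not the smooth heap itself.

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []

def meld(a, b):
    """Link two heaps: the root with the larger key becomes a child
    of the root with the smaller key."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def insert(heap, key):
    return meld(heap, Node(key))

def delete_min(heap):
    """Two-pass consolidation: meld the children left-to-right in
    pairs, then fold the resulting heaps right-to-left."""
    kids = heap.children
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    new_root = None
    for h in reversed(paired):
        new_root = meld(new_root, h)
    return heap.key, new_root
```

Note that the structure stores nothing beyond keys and child pointers, matching the "no auxiliary data" simplicity the abstract attributes to the smooth heap.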
New Paths from Splay to Dynamic Optimality
Consider the task of performing a sequence of searches in a binary search
tree. After each search, an algorithm is allowed to arbitrarily restructure the
tree, at a cost proportional to the amount of restructuring performed. The cost
of an execution is the sum of the time spent searching and the time spent
optimizing those searches with restructuring operations. This notion was
introduced by Sleator and Tarjan (JACM, 1985), along with an algorithm and a
conjecture. The algorithm, Splay, is an elegant procedure for performing
adjustments while moving searched items to the top of the tree. The conjecture,
called "dynamic optimality," is that the cost of splaying is always within a
constant factor of the optimal algorithm for performing searches. The
conjecture stands to this day. In this work, we attempt to lay the foundations
for a proof of the dynamic optimality conjecture.
Comment: An earlier version of this work appeared in the Proceedings of the
Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms. arXiv admin note:
text overlap with arXiv:1907.0630
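For readers unfamiliar with the Splay procedure mentioned above, the following sketch shows the standard recursive splay operation, which moves a searched key (or the last node touched while looking for it) to the root via zig, zig-zig, and zig-zag rotations. It is a textbook rendering for orientation only, not tied to the analysis in this work; the helper names are illustrative.

```python
# Textbook recursive splay operation; illustrative only.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring `key` (or the last node on its search path) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                        # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                      # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:                                              # mirror image
        if root.right is None:
            return root
        if key > root.right.key:
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def bst_insert(root, key):
    """Plain BST insert, used here only to build an example tree."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

def inorder(root):
    """In-order key sequence; unchanged by splaying."""
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)
```

Each rotation counts toward the restructuring cost in the model above; dynamic optimality asks whether this total cost is within a constant factor of the best possible restructuring strategy.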
The Landscape of Bounds for Binary Search Trees
Binary search trees (BSTs) with rotations can adapt to various kinds of
structure in search sequences, achieving amortized access times substantially
better than the Theta(log n) worst-case guarantee. Classical examples of
structural properties include static optimality, sequential access, working
set, key-independent optimality, and dynamic finger, all of which are now
known to be achieved by the two famous online BST algorithms (Splay and
Greedy). (...)
In this paper, we introduce novel properties that explain the efficiency of
sequences not captured by any of the previously known properties, and which
provide new barriers to the dynamic optimality conjecture. We also establish
connections between various properties, old and new. For instance, we show
the following.
(i) A tight bound of O(n log d) on the cost of Greedy for d-decomposable
sequences. The result builds on the recent lazy-finger result of Iacono and
Langerman (SODA 2016). On the other hand, we show that the lazy-finger bound
alone cannot explain the efficiency of pattern-avoiding sequences even in
some of the simplest cases.
(ii) A hierarchy of bounds using multiple lazy fingers, addressing a recent
question of Iacono and Langerman.
(iii) The optimality of the move-to-root heuristic in the key-independent
setting introduced by Iacono (Algorithmica 2005).
(iv) A new tool that allows combining any finite number of sound structural
properties. As an application, we show an upper bound on the cost of a class
of sequences that all known properties fail to capture.
(v) The equivalence between two families of BST properties. The observation
on which this connection is based was known before; we make it explicit, and
apply it to classical BST properties. (...