Why some heaps support constant-amortized-time decrease-key operations, and others do not
A lower bound is presented which shows that a class of heap algorithms in the
pointer model with only heap pointers must spend Omega(log log n / log log log
n) amortized time on the decrease-key operation (given O(log n) amortized-time
extract-min). Intuitively, this bound shows the key to having O(1)-time
decrease-key is the ability to sort O(log n) items in O(log n) time; Fibonacci
heaps [M.L. Fredman and R. E. Tarjan. J. ACM 34(3):596-615 (1987)] do this
through the use of bucket sort. Our lower bound also holds no matter how much
data is augmented; this is in contrast to the lower bound of Fredman [J. ACM
46(4):473-501 (1999)] who showed a tradeoff between the number of augmented
bits and the amortized cost of decrease-key. A new heap data structure, the
sort heap, is presented. This heap is a simplification of the heap of Elmasry
[SODA 2009: 471-476] and shares with it an O(log log n) amortized-time
decrease-key, but with a straightforward implementation such that our lower
bound holds. Thus a natural model is presented for pointer-based heaps in
which the amortized runtime of a self-adjusting structure and the amortized
asymptotic lower bound for decrease-key differ by only an O(log log log n)
factor.
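To make the bucket-sort intuition concrete, here is a minimal Python sketch (ours, not from the paper) of sorting m = O(log n) items whose keys are integer ranks drawn from a range of size m in O(m) time; Fibonacci heaps rely on exactly this ability when consolidating roots by degree. The function name and example data are hypothetical.

```python
import math

def bucket_sort_by_rank(items, max_rank):
    """Sort (rank, payload) pairs in O(len(items) + max_rank) time."""
    buckets = [[] for _ in range(max_rank + 1)]
    for rank, payload in items:
        buckets[rank].append((rank, payload))
    return [pair for bucket in buckets for pair in bucket]

n = 1 << 20
m = int(math.log2(n))                       # m = 20 items, ranks in 0..m-1
items = [(i * 7 % m, chr(97 + i)) for i in range(m)]
print(bucket_sort_by_rank(items, m - 1))    # linear-time sort by rank
```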
The Fresh-Finger Property
The unified property roughly states that searching for an element is fast
when the current access is close to a recent access. Here, "close" refers to
rank distance measured among all elements stored by the dictionary. We show
that distance need not be measured this way: in fact, it is only necessary to
consider a small working-set of elements to measure this rank distance. This
results in a data structure whose access times improve upon those
offered by the unified property for many query sequences.
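As a toy illustration of the distinction (our construction; the paper's working-set definition is more refined), the following snippet contrasts rank distance measured over all stored keys with the same distance measured only among a small set of recently accessed keys:

```python
def rank_distance(universe, a, b):
    """|rank(a) - rank(b)| among the sorted keys of `universe`."""
    keys = sorted(universe)
    return abs(keys.index(a) - keys.index(b))

stored = set(range(1000))           # all keys in the dictionary
recent = {0, 1, 2, 500, 998, 999}   # hypothetical recent working set

# Over the full dictionary, accesses 2 and 998 are far apart ...
print(rank_distance(stored, 2, 998))   # 996
# ... but among the recent working set they are close.
print(rank_distance(recent, 2, 998))   # 2
```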
A Tight Lower Bound for Decrease-Key in the Pure Heap Model
We improve the lower bound on the amortized cost of the decrease-key
operation in the pure heap model and show that any pure-heap-model heap (that
has an O(log n) amortized-time extract-min operation) must spend
Omega(log log n) amortized time on the decrease-key operation. Our result
shows that sort heaps as well as pure-heap variants of numerous other heaps
have asymptotically optimal decrease-key operations in the pure heap model. In
addition, our improved lower bound matches the lower bound of Fredman [J. ACM
46(4):473-501 (1999)] for pairing heaps [M.L. Fredman, R. Sedgewick, D.D.
Sleator, and R.E. Tarjan. Algorithmica 1(1):111-129 (1986)] and surpasses it
for pure-heap variants of numerous other heaps with augmented data such as
pointer rank-pairing heaps.
Weighted dynamic finger in binary search trees
It is shown that the online binary search tree data structure GreedyASS
performs asymptotically as well on a sufficiently long sequence of searches as
any static binary search tree where each search begins from the previous search
(rather than the root). This bound is known to be equivalent to assigning each
item x in the search tree a positive weight w_x and bounding the search cost of
item s_i in the search sequence s_1, ..., s_m by
O(1 + log((sum_{min(s_{i-1}, s_i) <= x <= max(s_{i-1}, s_i)} w_x) / min(w_{s_{i-1}}, w_{s_i})))
amortized. This result is the strongest finger-type bound to be proven for
binary search trees. By setting the weights to be equal, one observes that our
bound implies the dynamic finger bound. Compared to the previous proof of the
dynamic finger bound for Splay trees, our result is significantly shorter,
stronger, simpler, and has reasonable constants. An earlier version of this
work appeared in the Proceedings of the Twenty-Seventh Annual ACM-SIAM
Symposium on Discrete Algorithms (SODA 2016).
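To see what the bound computes, here is a small sketch (our example; the weights and search positions are hypothetical) that evaluates the amortized bound term for one search and checks that equal weights recover the classic dynamic finger bound of O(1 + log of the rank distance):

```python
import math

def finger_bound_term(w, prev, cur):
    """1 + log of (total weight between prev and cur, inclusive) divided by
    the smaller endpoint weight; items are integer keys with weights w[key]."""
    lo, hi = min(prev, cur), max(prev, cur)
    total = sum(w[x] for x in range(lo, hi + 1))
    return 1 + math.log2(total / min(w[prev], w[cur]))

w = {x: 1.0 for x in range(1, 101)}   # equal (hypothetical) weights
print(finger_bound_term(w, 40, 43))   # 3.0 = 1 + log2(4): dynamic finger
```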
Using Hashing to Solve the Dictionary Problem (In External Memory)
We consider the dictionary problem in external memory and improve the update
time of the well-known buffer tree by roughly a logarithmic factor. For any
lambda >= max{lg lg n, log_{M/B}(n/B)}, we can support updates in time
O(lambda/B) and queries in sublogarithmic time, O(log_lambda n). We also
present a lower bound in the cell-probe model showing that our data structure
is optimal.
In the RAM, hash tables have been used to solve the dictionary problem faster
than binary search for more than half a century. By contrast, our data
structure is the first to beat the comparison barrier in external memory. Ours
is also the first data structure to depart convincingly from the indivisibility
paradigm.
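The tradeoff can be instantiated numerically; the following sketch (parameter values are hypothetical, chosen only to satisfy the constraint on lambda) plugs a dictionary size, memory size, and block size into the stated update and query costs:

```python
import math

def tradeoff(n, M, B, lam):
    """Update and query costs (in I/Os, up to constants) from the stated bound."""
    assert lam >= max(math.log2(math.log2(n)), math.log(n / B, M / B))
    return lam / B, math.log(n, lam)

n, M, B = 2**40, 2**30, 2**12          # hypothetical instance
lam = 2**6                             # one admissible choice of lambda
upd, qry = tradeoff(n, M, B, lam)
print(f"update ~ {upd} I/Os amortized, query ~ {qry:.1f} I/Os")
```

The fractional amortized update cost reflects buffering: many updates ride along in each block write, which is where the improvement over per-update binary search comes from.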
Packing identical simple polygons is NP-hard
Given a small polygon S, a big simple polygon B and a positive integer k, it
is shown to be NP-hard to determine whether k copies of the small polygon
(allowing translation and rotation) can be placed in the big polygon without
overlap. Previous NP-hardness results were known only in the case where the big
polygon is allowed to be non-simple. A novel reduction from Planar-Circuit-SAT
is presented in which a single small polygon is constructed to encode the
entire circuit.
Solving k-SUM using few linear queries
The k-SUM problem is given n input real numbers to determine whether any k
of them sum to zero. The problem is of tremendous importance in the
emerging field of complexity theory within P, and it is in particular open
whether it admits an algorithm of complexity O(n^c) with c < ⌈k/2⌉. Inspired
by an algorithm due to Meiser (1993), we show that there exist linear decision
trees and algebraic computation trees of depth O(n^3 log^3 n) solving k-SUM.
Furthermore, we show that there exists a randomized algorithm that runs in
Õ(n^{⌈k/2⌉ + 8}) time, and performs O(n^3 log^3 n) linear queries on the
input. Thus, we show that it is possible to have an algorithm with a runtime
almost identical (up to the +8 in the exponent) to the best known algorithm
but for the first time also with the number of queries on the input a
polynomial that is independent of k. The O(n^3 log^3 n) bound on the number
of linear queries is also a tighter bound than that of any known algorithm
solving k-SUM, even allowing unlimited total time outside of the queries. By
simultaneously achieving few queries to the input without significantly
sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of
this canonical problem which is a cornerstone of complexity-within-P.
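For contrast with the query bound, here is a minimal sketch (ours, for intuition only) of the meet-in-the-middle idea behind the Õ(n^⌈k/2⌉)-time benchmark mentioned above, instantiated for k = 4. Note that it inspects all Θ(n^2) pairwise sums of the input, whereas the paper's algorithm performs a number of linear queries that is independent of k.

```python
from itertools import combinations

def four_sum_zero(nums):
    """True iff some four distinct positions in nums sum to zero (k = 4)."""
    pair_sums = {}                                  # sum -> list of index pairs
    for (i, a), (j, b) in combinations(enumerate(nums), 2):
        pair_sums.setdefault(a + b, []).append((i, j))
    for s, pairs in pair_sums.items():
        for i, j in pairs:
            for p, q in pair_sums.get(-s, []):
                if len({i, j, p, q}) == 4:          # indices must be distinct
                    return True
    return False

print(four_sum_zero([7.5, -2.5, 1.0, -6.0, 3.0]))   # True: 7.5 - 2.5 + 1 - 6 = 0
```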
We also consider a range of tradeoffs between the number of terms involved in
the queries and the depth of the decision tree. In particular, we prove that
there exist o(n)-linear decision trees of depth …