Thirty nine years of stratified trees
The stratified tree, also called van Emde Boas tree, is a data structure implementing the full repertoire of instructions manipulating a single subset A of a finite ordered Universe U = {0, ..., u-1}. Instructions include Min, Max, Insert, Delete, Member, Predecessor and Successor, as well as composite ones like Extract-Min. The processing time per instruction is O(log log u). Hence it improves upon the traditional comparison-based tree structures for dense subsets A; if A is sparse, meaning that the size n = #A = O(log(u)), the improvement vanishes.
Examples exist where this improvement helps to speed-up algorithmic solutions of real problems; such applications can be found for example in graph algorithms, computational geometry and forwarding of packets on the internet.
The structure was invented during a short postdoc residency at Cornell University in the fall of 1974. In the remainder of this paper I will use the original name Stratified Trees, which was used in my own papers on this data structure.
There are two strategies for understanding how this improvement can be obtained. Today a direct recursive approach is used where the universe of size u is divided into a cluster of √u galaxies, each of size √u; the set manipulation instructions decompose accordingly into an instruction at the cluster level and one at the galaxy level, but one of these two instructions is always of a special trivial type. The processing complexity thus satisfies a recurrence T(u) = T(√u) + O(1). Consequently T(u) = O(log log u).
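The recursive decomposition described above can be sketched in a few lines of Python. This is a minimal illustration only (Insert and Member, universe size a power of two, Delete omitted), not the pointer-based implementation the paper discusses; storing the minimum outside the galaxies is what makes one of the two recursive calls trivial, so each operation performs a single genuine recursion into a universe of size √u:

```python
class VEB:
    """Sketch of a recursive van Emde Boas (stratified) tree over
    the universe {0, ..., u-1}, u a power of two.  Each operation
    makes one genuine recursive call into a universe of size ~sqrt(u),
    giving T(u) = T(sqrt(u)) + O(1) = O(log log u)."""

    def __init__(self, u):
        self.u = u
        self.min = None   # minimum is kept here, outside the galaxies
        self.max = None
        if u > 2:
            # split each key into hi (galaxy index) and lo (position in galaxy)
            self.lo_bits = (u.bit_length() - 1) // 2
            self.lo_size = 1 << self.lo_bits
            hi_size = u // self.lo_size
            self.summary = VEB(hi_size)       # the "cluster" level
            self.galaxy = [None] * hi_size    # galaxies, built lazily

    def _split(self, x):
        return x >> self.lo_bits, x & (self.lo_size - 1)

    def member(self, x):
        if x == self.min or x == self.max:
            return True
        if self.u <= 2:
            return False
        hi, lo = self._split(x)
        g = self.galaxy[hi]
        return g is not None and g.member(lo)

    def insert(self, x):
        if self.min is None:                  # empty tree: trivial case
            self.min = self.max = x
            return
        if x < self.min:
            self.min, x = x, self.min         # new min stored here; old min recurses
        if x > self.max:
            self.max = x
        if self.u > 2 and x != self.min:
            hi, lo = self._split(x)
            if self.galaxy[hi] is None:
                self.galaxy[hi] = VEB(self.lo_size)
            if self.galaxy[hi].min is None:
                # galaxy was empty: the galaxy-level insert below is the
                # trivial O(1) case, so only the summary insert recurses
                self.summary.insert(hi)
            self.galaxy[hi].insert(lo)

t = VEB(16)
for x in (2, 3, 7, 14):
    t.insert(x)
print(t.member(7))   # True
print(t.member(5))   # False
```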
However, this recursive approach requires address calculations on the arguments which use multiplicative arithmetical instructions. These instructions are not allowed in the Random Access Machine model (RAM), which was the standard model in the developing research area of design and analysis of algorithms in 1974. Therefore the early implementations of the stratified trees are based on a different approach which is best described as a binary-search-on-levels strategy. In this approach the address calculations are not required, and the structure can be implemented using pointers. The downside of this approach is that it leads to rather complex algorithms, which are still hard to present correctly even today. Another bad consequence was the super-linear space consumption of the data structure, which was only eliminated three years later.
In this paper I want to describe the historical background against which the stratified trees were discovered and implemented. I do not give complete code fragments implementing the data structure and the operations; they can be found in the various textbooks and papers mentioned, including a Wikipedia page. Code fragments appearing in this paper are copied verbatim from the original sources; the same holds for the figures.
Static Data Structure Lower Bounds Imply Rigidity
We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t ≥ ω(log² n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1 + ε)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial (t ≥ n^ε) data structure lower bounds against near-optimal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime (s = n + o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlak, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest.
The Speedup Theorem in a Primitive Recursive Framework
Blum’s speedup theorem is a major theorem in computational complexity, showing the existence of computable functions for which no optimal program can exist: for any speedup function r there exists a function f_r such that for any program computing f_r we can find an alternative program computing it with the desired speedup r. The main corollary is that algorithmic problems do not have, in general, an inherent complexity. Traditional proofs of the speedup theorem make essential use of Kleene’s fixed point theorem to close a suitable diagonal argument. As a consequence, very little is known about its validity in subrecursive settings, where there is no universal machine and no fixed points. In this article we discuss an alternative, formal proof of the speedup theorem that allows us to spare the invocation of the fixed point theorem and sheds more light on the actual complexity of the function f_r.