80 research outputs found

    Thirty nine years of stratified trees

    The stratified tree, also called van Emde Boas tree, is a data structure implementing the full repertoire of instructions manipulating a single subset $A$ of a finite ordered universe $U = [0 \ldots u-1]$. Instructions include member, insert, delete, min, max, predecessor and successor, as well as composite ones like extract-min. The processing time per instruction is $O(\log\log(u))$. Hence it improves upon the traditional comparison-based tree structures for dense subsets $A$; if $A$ is sparse, meaning that the size $n = \#A = O(\log(u))$, the improvement vanishes. Examples exist where this improvement helps to speed up algorithmic solutions of real problems; such applications can be found for example in graph algorithms, computational geometry and forwarding of packets on the internet. The structure was invented during a short postdoc residence at Cornell University in the fall of 1974. In the sequel of this paper I will use the original name Stratified Trees, which was used in my own papers on this data structure. There are two strategies for understanding how this $O(\log\log(u))$ improvement can be obtained. Today a direct recursive approach is used where the universe is divided into a cluster of $\sqrt{u}$ galaxies, each of size $\sqrt{u}$; the set manipulation instructions decompose accordingly into an instruction at the cluster level and one at the galaxy level, but one of these two instructions is always of a special trivial type. The processing complexity thus satisfies the recurrence $T(u) = T(\sqrt{u}) + O(1)$. Consequently $T(u) = O(\log\log(u))$, since taking a square root halves the word length $\log(u)$, so the recursion bottoms out after $O(\log\log(u))$ levels. However, this recursive approach requires address calculations on the arguments which use multiplicative arithmetical instructions. These instructions are not allowed in the Random Access Machine model (RAM), which was the standard model in the developing research area of design and analysis of algorithms in 1974. Therefore the early implementations of the stratified trees are based on a different approach, which is best described as a binary-search-on-levels strategy. In this approach the address calculations are not required, and the structure can be implemented using pointers. The downside of this approach is that it leads to rather complex algorithms, which are still hard to present correctly even today. Another bad consequence was the super-linear space consumption of the data structure, which was only eliminated three years later. In this paper I want to describe the historical backgrounds against which the stratified trees were discovered and implemented. I do not give complete code fragments implementing the data structure and the operations; they can be found in the various textbooks and papers mentioned, including a Wikipedia page. Code fragments appearing in this paper are copied verbatim from the original sources; the same holds for the figures.
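    The recursive decomposition described in the abstract is easy to see in code. The sketch below is not from the paper or its original sources; it is a minimal illustrative Python implementation (the names VEB, insert and successor are my own choices), assuming the universe size u is a power of two and that inserted keys are distinct. Note how insert makes only one non-trivial recursive call: either the galaxy was empty, in which case the summary recursion does the real work and the galaxy insert is O(1), or the summary is left untouched and the galaxy recursion does the work; this is exactly the recurrence $T(u) = T(\sqrt{u}) + O(1)$.

    ```python
    class VEB:
        """Minimal van Emde Boas / stratified tree sketch over U = [0 .. u-1],
        with u a power of two. Illustrative only; assumes distinct keys."""

        def __init__(self, u):
            self.u = u
            self.min = None   # the minimum is kept here and NOT stored recursively
            self.max = None
            if u > 2:
                bits = u.bit_length() - 1           # u = 2^bits
                self.lo = 1 << (bits // 2)          # size of each galaxy
                self.summary = VEB(u // self.lo)    # which galaxies are non-empty
                self.cluster = [VEB(self.lo) for _ in range(u // self.lo)]

        def insert(self, x):
            if self.min is None:                    # empty tree: O(1), no recursion
                self.min = self.max = x
                return
            if x < self.min:                        # new minimum displaces the old one,
                self.min, x = x, self.min           # which must now be stored below
            if self.u > 2:
                h, l = divmod(x, self.lo)           # galaxy index and offset within it
                if self.cluster[h].min is None:
                    self.summary.insert(h)          # galaxy was empty: summary recurses,
                self.cluster[h].insert(l)           # ...and this insert is the O(1) case
            if x > self.max:
                self.max = x

        def successor(self, x):
            if self.u == 2:                         # base case: universe {0, 1}
                return 1 if x == 0 and self.max == 1 else None
            if self.min is not None and x < self.min:
                return self.min                     # the lazily stored minimum
            h, l = divmod(x, self.lo)
            if self.cluster[h].max is not None and l < self.cluster[h].max:
                return h * self.lo + self.cluster[h].successor(l)
            nxt = self.summary.successor(h)         # else: jump to next non-empty galaxy
            return None if nxt is None else nxt * self.lo + self.cluster[nxt].min
    ```

    A quick check of the behaviour:

    ```python
    t = VEB(16)
    for x in (2, 3, 7, 14):
        t.insert(x)
    print(t.successor(3), t.successor(7), t.successor(14))  # 7 14 None
    ```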

    Static Data Structure Lower Bounds Imply Rigidity

    We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of $t \geq \omega(\log^2 n)$ on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space ($s = (1+\varepsilon)n$), would already imply a semi-explicit ($\mathbf{P}^{NP}$) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial ($t \geq n^{\delta}$) data structure lower bounds against near-optimal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime ($s = n + o(n)$), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlak, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest.

    The Speedup Theorem in a Primitive Recursive Framework

    Blum’s speedup theorem is a major theorem in computational complexity, showing the existence of computable functions for which no optimal program can exist: for any speedup function $r$ there exists a function $f_r$ such that for any program computing $f_r$ we can find an alternative program computing it with the desired speedup $r$. The main corollary is that algorithmic problems do not have, in general, an inherent complexity. Traditional proofs of the speedup theorem make an essential use of Kleene’s fixed point theorem to close a suitable diagonal argument. As a consequence, very little is known about its validity in subrecursive settings, where there is no universal machine and no fixed points. In this article we discuss an alternative, formal proof of the speedup theorem that allows us to spare the invocation of the fixed point theorem and sheds more light on the actual complexity of the function $f_r$.

    Gap and operator gap


    Ruimten met minimale basis (Spaces with a minimal basis)


    Minimality of subbases and bases of topological spaces


    A note on the McCreight-Meyer naming theorem in the theory of computational complexity


    An ALGOL-60 algorithm for the verification of a combinatorial conjecture on a finite abelian group
