8 research outputs found

    Dynamic algorithms in D.E. Knuth's model: a probabilistic analysis

    Abstract
    By dynamic algorithms we mean algorithms that operate on dynamically varying data structures (dictionaries, priority queues, linear lists) subject to insertions I, deletions D, positive (resp. negative) queries Q+ (resp. Q−). Let us recall that dictionaries are implementable by unsorted or sorted lists and binary search trees; priority queues by sorted lists, binary search trees, binary tournaments, pagodas and binomial queues; and linear lists by sorted or unsorted lists, etc. At this point the following question is very natural in computer science: for a given data structure, which representation is the most efficient? In comparing the space or time costs of two data organizations A and B for the same operations, we cannot merely compare the costs of individual operations for data of given sizes: A may be better than B on some data, and vice versa on others. A reasonable way to measure the efficiency of a data organization is to consider sequences of operations on the structure. Françon (1978, 1979) and Knuth (1977) discovered that the number of possibilities for the ith insertion or negative query is equal to i, but that for deletions and positive queries this number depends on the size of the data structure. Answering the questions raised by Françon and Knuth is the main object of this paper. More precisely, we show:
    • how to obtain limiting processes;
    • how to compute explicitly the average costs;
    • how to obtain variance estimates;
    • that the costs converge as n → ∞ to random variables, either Gaussian or depending on Brownian excursion functionals (the limiting distributions are, therefore, completely described).
    To our knowledge such a complete analysis has never been done before for dynamic algorithms in Knuth's model.
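The counting rule described in the abstract can be sketched in code. In the following minimal sketch (our own illustration, not from the paper), we assume the "linear function of the size" for deletions and positive queries is exactly k, the current number of stored keys; the number of distinct histories for a fixed schema of operations is then the product of the per-operation possibility counts:

```python
def count_schema_histories(schema):
    """Number of distinct histories for a fixed operation schema
    under Knuth's model (a sketch; the linear possibility count for
    D and Q+ is assumed to be exactly k, the current structure size).

    schema: sequence of operations drawn from 'I', 'D', 'Q+', 'Q-'.
    """
    k = 0           # current size of the data structure
    i = 1           # index of the next insertion or negative query
    histories = 1
    for op in schema:
        if op in ('I', 'Q-'):
            histories *= i      # i possibilities for the i-th I or Q-
            i += 1
            if op == 'I':
                k += 1
        elif op in ('D', 'Q+'):
            if k == 0:
                return 0        # no key to delete or find in an empty structure
            histories *= k      # possibilities proportional to the size k
            if op == 'D':
                k -= 1
        else:
            raise ValueError(f"unknown operation {op!r}")
    return histories
```

For instance, the schema I, I, D admits 1 · 2 · 2 = 4 histories: two relative ranks for the second inserted key, times two choices of which key to delete.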

    Dynamic algorithms in D.E. Knuth's model: A probabilistic analysis

    By dynamic algorithms, we mean algorithms that operate on dynamically varying data structures (dictionaries, priority queues, linear lists) subject to insertions I, deletions D, positive (resp. negative) queries Q+ (resp. Q−). Let us recall that dictionaries are implementable by unsorted or sorted lists and binary search trees; priority queues by sorted lists, binary search trees, binary tournaments, pagodas and binomial queues; and linear lists by sorted or unsorted lists, etc. At this point the following question is very natural in computer science: for a given data structure, which representation is the most efficient? In comparing the space or time costs of two data organizations A and B for the same operations, we cannot merely compare the costs of individual operations for data of given sizes: A may be better than B on some data, and conversely on others. A reasonable way to measure the efficiency of a data organization is to consider sequences of operations on the structure. J. Françon [6], [7] and D.E. Knuth [12] discovered that the number of possibilities for the i-th insertion or negative query is equal to i, but that for deletions and positive queries this number depends on the size of the data structure. Answering the questions raised in [6], [7] and [12] is the main object of this paper. More precisely, we show:
    i) how to compute explicitly the average costs,
    ii) how to obtain variance estimates,
    iii) that the costs converge as n → ∞ to random variables, either Gaussian or depending on Brownian excursion functionals (the limiting distributions are therefore completely described).
    To our knowledge such a complete analysis has never been done before for dynamic algorithms in Knuth's model.

    Analysis of dynamic algorithms in Knuth's model

    Abstract
    This paper analyzes the average behaviour of algorithms that operate on dynamically varying data structures subject to insertions I, deletions D, positive (resp. negative) queries Q+ (resp. Q−) under the following assumptions: if the size of the data structure is k (k ∈ N), then the number of possibilities for the operations D and Q+ is a linear function of k, whereas the number of possibilities for the ith insertion or negative query is equal to i. This statistical model was introduced by Françon [6, 7] and Knuth [12] and differs from the model used in previous analyses [2–7]. Integrated costs for these dynamic structures are defined as averages of costs taken over the set of all their possible histories (i.e. evolutions considered up to order isomorphism) of length n. We show that the costs can be calculated for the data structures serving as implementations of linear lists, priority queues and dictionaries. The problem of finding the limiting distributions is also considered, and the linear list case is treated in detail. The method uses continued fractions and orthogonal polynomials, but in a paper in preparation we show that the same results can be recovered with the help of a probabilistic model.
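To illustrate what "averaging over all histories of length n" involves, here is a hedged sketch (our own, not the paper's method) that counts the histories of length n beginning and ending with an empty structure, by dynamic programming over the state (current size k, index i of the next insertion or negative query). This is the kind of quantity whose generating function the paper obtains through continued fractions; as above, we assume the linear possibility count for D and Q+ is exactly k.

```python
from collections import defaultdict

def count_empty_to_empty_histories(n):
    """Count histories of length n that start and end with an empty
    structure, under the assumed possibility counts: i for the i-th
    insertion or negative query, and k for D or Q+ at size k."""
    # dp maps (k, i) -> weighted number of partial histories reaching that state
    dp = {(0, 1): 1}
    for _ in range(n):
        nxt = defaultdict(int)
        for (k, i), ways in dp.items():
            nxt[(k + 1, i + 1)] += ways * i    # insertion: i possibilities
            nxt[(k, i + 1)] += ways * i        # negative query: i possibilities
            if k > 0:
                nxt[(k - 1, i)] += ways * k    # deletion: k possibilities
                nxt[(k, i)] += ways * k        # positive query: k possibilities
        dp = nxt
    return sum(ways for (k, _), ways in dp.items() if k == 0)
```

For length 2, for example, the only schemas returning to the empty structure are Q−Q− (1 · 2 = 2 histories) and I D (1 · 1 = 1 history), giving 3 histories in total.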