    The analysis of increasing trees and other families of trees

    Increasing trees are labelled rooted trees in which the labels along any branch from the root appear in increasing order. They have numerous applications in tree representations of permutations, in data structures in computer science, and in probabilistic models for a multitude of problems. We use a generating function approach to compute parameters arising from such trees. The generating functions for some parameters are shown to satisfy ordinary differential equations, and singularity analysis is then used to analyze several parameters of the trees asymptotically. Various classes of trees are considered. Parameters such as depth and path length for heap ordered trees have been analyzed in [35]. We follow a similar approach to determine grand averages for such trees. The model is that p of the n nodes are labelled at random, in one of the $\binom{n}{p}$ ways, and the characteristic parameters depend on these labelled nodes. We also examine the limiting distributions involved; often, when they are Gaussian, Hwang's quasi-power theorem from [18] can be employed. Spanning tree size and the Wiener index for binary search trees have been computed in [33]. The Wiener index is the sum of all distances between pairs of nodes in a tree. A related parameter of interest is the Steiner distance, which generalises the Wiener index (the case k = 2) to sets of k nodes. Furthermore, the distribution of the size of the ancestor-tree and of the induced spanning subtree for random trees is presented in [30]. The same procedure is followed to obtain the Steiner distance for heap ordered trees and for other varieties of increasing trees.
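
    The connection between generating functions and differential equations mentioned above can be made concrete. For the standard simple families of increasing trees (the classical setup of Bergeron, Flajolet and Salvy, recalled here for orientation rather than taken from the thesis itself), the exponential generating function $T(z)$ of the family satisfies the autonomous first-order equation

        T'(z) = \varphi(T(z)), \qquad T(0) = 0, \qquad \varphi(u) = \sum_{k \ge 0} \varphi_k u^k,

    where $\varphi_k$ weights nodes of out-degree $k$. For example, recursive trees correspond to $\varphi(u) = e^u$, giving $T(z) = \log \frac{1}{1-z}$, and binary increasing trees to $\varphi(u) = (1+u)^2$, giving $T(z) = z/(1-z)$.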

    On Automated Lemma Generation for Separation Logic with Inductive Definitions

    Separation Logic with inductive definitions is a well-known approach for deductive verification of programs that manipulate dynamic data structures. Deciding verification conditions in this context usually relies on user-provided lemmas relating the inductive definitions. We propose a novel approach for generating these lemmas automatically, based on simple syntactic criteria and deterministic strategies for applying them. Our approach focuses on iterative programs, although it applies to recursive programs as well, and on specifications that describe not only the shape of the data structures but also their content or their size. Empirically, we find that our approach is powerful enough to deal with sophisticated benchmarks, e.g., iterative procedures for searching, inserting, or deleting elements in sorted lists, binary search trees, red-black trees, and AVL trees, in a very efficient way.
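
    As an illustration of the kind of lemma at stake (a textbook example, not necessarily one produced by the paper's generator), consider the usual inductive list-segment predicate

        ls(x, y) ::= (x = y ∧ emp) ∨ (∃v. x ↦ v ∗ ls(v, y)).

    A typical lemma relating instances of this definition is the composition entailment

        ls(x, y) ∗ ls(y, nil) ⊨ ls(x, nil),

    which is valid because nil is never an allocated address, so the first segment cannot run past the start of the second. Lemmas of this shape are exactly what entailment provers need when symbolic execution leaves two abutting segments on the heap.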

    Synthesizing Short-Circuiting Validation of Data Structure Invariants

    This paper presents incremental verification-validation, a novel approach for checking rich data structure invariants expressed as separation logic assertions. Incremental verification-validation combines static verification of separation properties with efficient, short-circuiting dynamic validation of arbitrarily rich data constraints. A data structure invariant checker is an inductive predicate in separation logic with an executable interpretation; a short-circuiting checker is an invariant checker that stops checking whenever it detects at run time that an assertion for some sub-structure has been fully proven statically. At a high level, our approach does two things: it statically proves the separation properties of data structure invariants using a static shape analysis in a standard way, but then leverages this proof in a novel manner to synthesize short-circuiting dynamic validation of the data properties. As a consequence, dynamic validation makes up for imprecision in sound static analysis, while the static verification simultaneously makes the remaining dynamic validation efficient. We show empirically that short-circuiting can yield asymptotic improvements in dynamic validation, with low overhead relative to performing no validation, even in cases where static verification is incomplete.
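
    A minimal Python sketch of the short-circuiting idea (illustrative only; the flag name and the way the static proof is recorded are assumptions, not the paper's implementation): a dynamic checker for a sortedness invariant that stops descending as soon as it reaches a sub-structure whose assertion was fully discharged statically.

        # Illustrative sketch, not the paper's implementation. A node carries a
        # flag (set here by a hypothetical static analysis) recording that the
        # invariant of the sub-list rooted at it was fully proven statically.
        class Node:
            def __init__(self, value, next=None, statically_verified=False):
                self.value = value
                self.next = next
                self.statically_verified = statically_verified

        def check_sorted(node, lower=float("-inf")):
            """Validate sortedness dynamically, short-circuiting at nodes
            whose assertion is already statically proven."""
            while node is not None:
                if node.value < lower:
                    return False                  # data constraint violated
                if node.statically_verified:
                    return True                   # rest proven statically: stop
                lower = node.value
                node = node.next
            return True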

    Why some heaps support constant-amortized-time decrease-key operations, and others do not

    A lower bound is presented which shows that a class of heap algorithms in the pointer model with only heap pointers must spend Omega(log log n / log log log n) amortized time on the decrease-key operation (given O(log n) amortized-time extract-min). Intuitively, this bound shows that the key to having O(1)-time decrease-key is the ability to sort O(log n) items in O(log n) time; Fibonacci heaps [M. L. Fredman and R. E. Tarjan. J. ACM 34(3):596-615 (1987)] do this through the use of bucket sort. Our lower bound holds no matter how much data is augmented; this is in contrast to the lower bound of Fredman [J. ACM 46(4):473-501 (1999)], who showed a tradeoff between the number of augmented bits and the amortized cost of decrease-key. A new heap data structure, the sort heap, is presented. This heap is a simplification of the heap of Elmasry [SODA 2009: 471-476] and shares with it an O(log log n) amortized-time decrease-key, but has a straightforward implementation for which our lower bound holds. Thus a natural model is presented for a pointer-based heap in which the amortized runtime of a self-adjusting structure and the amortized asymptotic lower bound for decrease-key differ by only an O(log log log n) factor.
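
    For contrast with the bounds above, here is the textbook sift-up implementation of decrease-key on an array-based binary min-heap, which costs Theta(log n) in the worst case (a generic illustration, not code from the paper); Fibonacci heaps avoid the sift-up entirely by cutting the decreased node, which is where the O(1) amortized bound comes from.

        def decrease_key(heap, i, new_key):
            """Lower heap[i] to new_key and sift up to restore the min-heap
            property; O(log n) worst case."""
            assert new_key <= heap[i], "decrease-key must not increase the key"
            heap[i] = new_key
            while i > 0 and heap[(i - 1) // 2] > heap[i]:
                parent = (i - 1) // 2
                heap[i], heap[parent] = heap[parent], heap[i]
                i = parent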

    Mask formulas for cograssmannian Kazhdan-Lusztig polynomials

    We give two constructions of sets of masks on cograssmannian permutations that can be used in Deodhar's formula for Kazhdan-Lusztig basis elements of the Iwahori-Hecke algebra. The constructions are based, respectively, on a formula of Lascoux-Schützenberger and on its geometric interpretation by Zelevinsky. The first construction relies on a basis of the Hecke algebra constructed from principal lower order ideals in Bruhat order and a translation of this basis into sets of masks. The second construction relies on an interpretation of masks as cells of the Bott-Samelson resolution. These constructions give distinct answers to a question of Deodhar.
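
    For orientation, Deodhar's formula referred to above has roughly the following shape (stated informally here; see the paper for the precise hypotheses): fixing a reduced word for $w$ and an admissible set $\mathcal{S}$ of masks on it,

        P_{x,w}(q) = \sum_{\sigma \in \mathcal{S},\; \pi(\sigma) = x} q^{d(\sigma)},

    where $\pi(\sigma)$ is the group element spelled out by the subword that the mask $\sigma$ selects and $d(\sigma)$ is its defect statistic. The two constructions in the paper supply such sets $\mathcal{S}$ for cograssmannian permutations.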

    On the tradeoff between stability and fit

    In computing, as in many aspects of life, changes incur cost. Many optimization problems are formulated as a one-time instance starting from scratch. However, a common case that arises is when we already have a set of prior assignments and must decide how to respond to a new set of constraints, given that each change from the current assignment comes at a price. That is, we would like to maximize the fitness or efficiency of our system, but we need to balance it against the changeout cost from the previous state. We provide a precise formulation for this tradeoff and analyze the resulting stable extensions of some fundamental problems in measurement and analytics. Our main technical contribution is a stable extension of Probability Proportional to Size (PPS) weighted random sampling, with applications to monitoring and anomaly detection problems. We also provide a general framework that applies to top-k, minimum spanning tree, and assignment. In both cases, we are able to provide exact solutions and discuss efficient incremental algorithms that can find new solutions as the input changes.
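
    As background for the main technical contribution, here is a minimal sketch of ordinary (non-stable) PPS sampling in Python, in its standard threshold form; the bisection used to find the threshold is an illustrative choice, not the paper's incremental algorithm.

        import random

        def pps_sample(weights, k):
            """Threshold PPS: include item i with probability min(1, w_i / tau),
            with tau chosen so the expected sample size is k. Assumes positive
            weights and 1 <= k <= len(weights)."""
            lo, hi = 1e-12, max(weights) * len(weights)
            for _ in range(100):                  # bisection on expected size
                tau = (lo + hi) / 2
                expected = sum(min(1.0, w / tau) for w in weights)
                if expected > k:
                    lo = tau                      # tau too small: sample too big
                else:
                    hi = tau
            tau = (lo + hi) / 2
            return [i for i, w in enumerate(weights)
                    if random.random() < min(1.0, w / tau)]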

    Efficient Management of Short-Lived Data

    Motivated by the increasing prominence of loosely-coupled systems, such as mobile and sensor networks, which are characterised by intermittent connectivity and volatile data, we study the tagging of data with so-called expiration times. More specifically, when data are inserted into a database, they may be tagged with time values indicating when they expire, i.e., when they are regarded as stale or invalid and thus are no longer considered part of the database. In a number of applications, expiration times are known and can be assigned at insertion time. We present data structures and algorithms for online management of data tagged with expiration times. The algorithms are based on fully functional, persistent treaps, which are a combination of binary search trees with respect to a primary attribute and heaps with respect to a secondary attribute. The primary attribute implements primary keys, and the secondary attribute stores expiration times in a minimum heap, thus keeping a priority queue of tuples to expire. A detailed and comprehensive experimental study demonstrates the well-behavedness and scalability of the approach as well as its efficiency with respect to a number of competitors.
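
    A minimal, non-persistent sketch of the underlying structure (the paper uses fully persistent treaps; this imperative Python version only illustrates the two orderings): a binary search tree on the primary key that is simultaneously a min-heap on expiration time, so the root is always the next tuple to expire.

        class Treap:
            """BST on `key`, min-heap on `expires`: the root expires first."""
            def __init__(self, key, expires, left=None, right=None):
                self.key, self.expires = key, expires
                self.left, self.right = left, right

        def rotate_right(t):
            l = t.left
            t.left, l.right = l.right, t
            return l

        def rotate_left(t):
            r = t.right
            t.right, r.left = r.left, t
            return r

        def insert(t, key, expires):
            if t is None:
                return Treap(key, expires)
            if key < t.key:
                t.left = insert(t.left, key, expires)
                if t.left.expires < t.expires:    # restore the heap order
                    t = rotate_right(t)
            else:
                t.right = insert(t.right, key, expires)
                if t.right.expires < t.expires:
                    t = rotate_left(t)
            return t

        def delete_root(t):
            if t.left is None:
                return t.right
            if t.right is None:
                return t.left
            if t.left.expires < t.right.expires:  # rotate earlier expiry up
                t = rotate_right(t)
                t.right = delete_root(t.right)
            else:
                t = rotate_left(t)
                t.left = delete_root(t.left)
            return t

        def expire(t, now):
            """Pop expired tuples off the root, as from a priority queue."""
            while t is not None and t.expires <= now:
                t = delete_root(t)
            return t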

    The Tree Width of Separation Logic with Recursive Definitions

    Separation Logic is a widely used formalism for describing dynamically allocated linked data structures, such as lists, trees, etc. The decidability status of various fragments of the logic constitutes a long-standing open problem. Current results report on techniques to decide satisfiability and validity of entailments for Separation Logic(s) over lists (possibly with data). In this paper we establish a more general decidability result. We prove that any Separation Logic formula using rather general recursively defined predicates is decidable for satisfiability, and moreover, entailments between such formulae are decidable for validity. These predicates are general enough to define (doubly-) linked lists, trees, and structures more general than trees, such as trees whose leaves are chained in a list. The decidability proofs are by reduction to the decidability of Monadic Second Order Logic on graphs with bounded tree width.
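
    As an example of the generality claimed above, the "tree whose leaves are chained in a list" can be captured by a recursive predicate of roughly the following shape (notation adapted; see the paper for the exact definition), where tll(x, ll, lr) describes a binary tree rooted at x whose leaves form a singly linked list starting at ll and ending with the pointer lr:

        tll(x, ll, lr) ::=  (x ↦ (nil, nil, lr) ∧ x = ll)
                          ∨ (∃l, r, z. x ↦ (l, r, nil) ∗ tll(l, ll, z) ∗ tll(r, z, lr))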