
    Smooth heaps and a dual view of self-adjusting data structures

    We present a new connection between self-adjusting binary search trees (BSTs) and heaps, two fundamental, extensively studied, and practically relevant families of data structures. Roughly speaking, we map an arbitrary heap algorithm within a natural model to a corresponding BST algorithm with the same cost on a dual sequence of operations (i.e. the same sequence with the roles of time and key-space switched). This is the first general transformation between the two families of data structures. There is a rich theory of dynamic optimality for BSTs (i.e. the theory of competitiveness between BST algorithms); the lack of an analogous theory for heaps has been noted in the literature. Through our connection, we transfer all instance-specific lower bounds known for BSTs to a general model of heaps, initiating a theory of dynamic optimality for heaps. On the algorithmic side, we obtain a new, simple, and efficient heap algorithm, which we call the smooth heap. We show the smooth heap to be the heap counterpart of Greedy, the BST algorithm with the strongest proven and conjectured properties in the literature, widely believed to be instance-optimal. Assuming the optimality of Greedy, the smooth heap is also optimal within our model of heap algorithms. As corollaries of results known for Greedy, we obtain instance-specific upper bounds for the smooth heap, with applications in adaptive sorting. Intriguingly, the smooth heap, although derived from a non-practical BST algorithm, is simple and easy to implement (e.g. it stores no auxiliary data besides the keys and tree pointers). It can be seen as a variation on the popular pairing heap data structure, extending it with a "power-of-two-choices" type of heuristic.
    Comment: Presented at STOC 2018; light revision, additional figure.
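
    The "power-of-two-choices" linking idea can be made concrete with a small sketch. The code below is an illustration under stated assumptions, not the paper's exact restructuring: it keeps only keys and child pointers, and during a combining pass it links each root that is a local maximum below one of its two neighbours, here the larger of the two (that particular choice is an assumption of this sketch).

```python
# Illustrative sketch of a "power-of-two-choices" combining pass over a list
# of heap roots. Not the authors' exact algorithm; the rule "link a local
# maximum below the larger of its two neighbours" is an assumption here.

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []  # no auxiliary data beyond keys and pointers


def link(a, b):
    """Make the larger-keyed root a child of the smaller-keyed root (min-heap)."""
    if a.key > b.key:
        a, b = b, a
    a.children.append(b)
    return a


def combine(roots):
    """Combine a list of roots into a single tree."""
    roots = list(roots)
    while len(roots) > 1:
        for i, x in enumerate(roots):
            left = roots[i - 1].key if i > 0 else float("-inf")
            right = roots[i + 1].key if i + 1 < len(roots) else float("-inf")
            if x.key >= left and x.key >= right:          # x is a local maximum
                if i > 0 and left >= right:
                    roots[i - 1] = link(roots[i - 1], x)  # link below left neighbour
                else:
                    roots[i + 1] = link(roots[i + 1], x)  # link below right neighbour
                del roots[i]
                break
    return roots[0]


# e.g. after an extract-min, the orphaned children could be recombined:
root = combine([Node(3), Node(1), Node(2)])
print(root.key)  # 1
```

    In the spirit of the pairing heap, the pass uses only comparisons and pointer changes; the "two choices" are the two neighbours under which a local maximum can be linked.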

    A polynomial delay algorithm for the enumeration of bubbles with length constraints in directed graphs and its application to the detection of alternative splicing in RNA-seq data

    We present a new algorithm for enumerating bubbles with length constraints in directed graphs. This problem arises in transcriptomics, where the question is to identify all alternative splicing events present in a sample of mRNAs sequenced by RNA-seq. This is the first polynomial-delay algorithm for this problem, and we show that in practice it is faster than previous approaches. This enables us to deal with larger instances and therefore to discover novel alternative splicing events, especially long ones, that were previously overlooked by existing methods.
    Comment: Peer-reviewed and presented as part of the 13th Workshop on Algorithms in Bioinformatics (WABI 2013).
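
    To make the enumerated objects concrete, the brute-force sketch below treats an (s, t)-bubble as a pair of internally vertex-disjoint directed paths from s to t, each within a length bound. It is exponential and only meant for tiny examples; it is not the polynomial-delay algorithm of the paper, and the single shared bound max_len is a simplification of the paper's length constraints.

```python
# Brute-force illustration of (s, t)-bubbles: pairs of internally
# vertex-disjoint s->t paths, each with at most max_len edges.
# NOT the paper's polynomial-delay algorithm -- exponential, for tiny graphs only.

from itertools import combinations


def simple_paths(graph, s, t, max_len):
    """All simple s->t paths with at most max_len edges (iterative DFS)."""
    stack = [(s, [s])]
    while stack:
        v, path = stack.pop()
        if v == t:
            yield path
            continue
        if len(path) - 1 >= max_len:
            continue
        for w in graph.get(v, ()):
            if w not in path:
                stack.append((w, path + [w]))


def bubbles(graph, s, t, max_len):
    """Yield pairs of internally vertex-disjoint s->t paths."""
    paths = list(simple_paths(graph, s, t, max_len))
    for p, q in combinations(paths, 2):
        if set(p[1:-1]).isdisjoint(q[1:-1]):
            yield p, q


# Example: the "diamond" s->a->t and s->b->c->t forms a single bubble.
g = {"s": ["a", "b"], "a": ["t"], "b": ["c"], "c": ["t"]}
print(list(bubbles(g, "s", "t", max_len=3)))
```

    In the RNA-seq application, the two legs of a bubble correspond to the variant transcripts of an alternative splicing event.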

    On Verifying Complex Properties using Symbolic Shape Analysis

    One of the main challenges in the verification of software systems is the analysis of unbounded data structures with dynamic memory allocation, such as linked data structures and arrays. We describe Bohne, a new analysis for verifying data structures. Bohne verifies data structure operations and shows that 1) the operations preserve data structure invariants and 2) the operations satisfy their specifications expressed in terms of changes to the set of objects stored in the data structure. During the analysis, Bohne infers loop invariants in the form of disjunctions of universally quantified Boolean combinations of formulas. To synthesize loop invariants of this form, Bohne uses a combination of decision procedures for Monadic Second-Order Logic over trees, SMT-LIB decision procedures (currently CVC Lite), and an automated reasoner within the Isabelle interactive theorem prover. This architecture shows that synthesized loop invariants can serve as a useful communication mechanism between different decision procedures. Using Bohne, we have verified operations on data structures such as linked lists with iterators and back pointers, trees with and without parent pointers, two-level skip lists, array data structures, and sorted lists. We have deployed Bohne in the Hob and Jahob data structure analysis systems, enabling us to combine Bohne with analyses of data structure clients and apply it in the context of larger programs. This report describes the Bohne algorithm as well as the techniques that Bohne uses to reduce the amount of annotation and the running time of the analysis.
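
    As a rough illustration of the kind of contract involved (checked here at run time in Python, whereas Bohne proves it statically with the decision procedures described above), the sketch below pairs a shape invariant with a specification phrased as a change to the set of stored objects. The class and method names are hypothetical and are not Bohne's input language.

```python
# Hypothetical example of the style of specification described in the abstract:
# a shape invariant plus a postcondition on the set of stored objects.
# Bohne proves such properties statically; here they are mere runtime asserts.

class Node:
    def __init__(self, data, nxt=None):
        self.data = data
        self.next = nxt


class LinkedList:
    def __init__(self):
        self.head = None

    def content(self):
        """The abstract set of objects stored in the list."""
        out, n = set(), self.head
        while n is not None:
            out.add(n.data)
            n = n.next
        return out

    def acyclic(self):
        """Shape invariant: following next pointers never revisits a node."""
        seen, n = set(), self.head
        while n is not None:
            if id(n) in seen:
                return False
            seen.add(id(n))
            n = n.next
        return True

    def insert_front(self, x):
        """Specification: content' = content ∪ {x}, and acyclicity is preserved."""
        before = self.content()
        self.head = Node(x, self.head)
        assert self.acyclic()
        assert self.content() == before | {x}
```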

    Why some heaps support constant-amortized-time decrease-key operations, and others do not

    A lower bound is presented which shows that a class of heap algorithms in the pointer model with only heap pointers must spend Omega(log log n / log log log n) amortized time on the decrease-key operation (given O(log n) amortized-time extract-min). Intuitively, this bound shows that the key to having O(1)-time decrease-key is the ability to sort O(log n) items in O(log n) time; Fibonacci heaps [M.L. Fredman and R.E. Tarjan. J. ACM 34(3):596-615 (1987)] do this through the use of bucket sort. Our lower bound also holds no matter how much data is augmented; this is in contrast to the lower bound of Fredman [J. ACM 46(4):473-501 (1999)], who showed a tradeoff between the number of augmented bits and the amortized cost of decrease-key. A new heap data structure, the sort heap, is presented. This heap is a simplification of the heap of Elmasry [SODA 2009: 471-476] and shares with it an O(log log n) amortized-time decrease-key, but has a straightforward implementation to which our lower bound applies. Thus a natural model of pointer-based heaps is presented in which the amortized upper bound of a self-adjusting structure and the amortized lower bound for decrease-key differ by only an O(log log log n) factor.
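
    For readers unfamiliar with the operation in question, the sketch below shows a standard pairing-heap-style decrease-key in a pure pointer setting (only keys and heap pointers, no augmented data). It is given only to make the operation and the model concrete; it is not the sort heap of the paper, whose decrease-key achieves the O(log log n) amortized bound.

```python
# Standard pairing-heap style decrease-key, shown to make the operation and the
# "pointer model with only heap pointers" concrete. This is not the sort heap.

class Node:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []


def link(a, b):
    """Link two roots; the smaller key becomes the parent (min-heap order)."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    b.parent = a
    return a


def decrease_key(root, node, new_key):
    """Lower node.key to new_key; cut and re-link if heap order is violated."""
    assert new_key <= node.key
    node.key = new_key
    if node.parent is None or node.parent.key <= new_key:
        return root                       # heap order still holds
    node.parent.children.remove(node)     # cut the subtree rooted at node
    node.parent = None
    return link(root, node)               # one link with the current root


# Tiny usage example: 1 -> 5 -> 7, then decrease 7 to 0 (0 becomes the new root).
root = link(Node(1), link(Node(5), Node(7)))
seven = root.children[0].children[0]
root = decrease_key(root, seven, 0)
print(root.key)  # 0
```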

    A Tight Lower Bound for Decrease-Key in the Pure Heap Model

    We improve the lower bound on the amortized cost of the decrease-key operation in the pure heap model and show that any pure-heap-model heap (that has an O(log n) amortized-time extract-min operation) must spend Omega(log log n) amortized time on the decrease-key operation. Our result shows that sort heaps as well as pure-heap variants of numerous other heaps have asymptotically optimal decrease-key operations in the pure heap model. In addition, our improved lower bound matches the lower bound of Fredman [J. ACM 46(4):473-501 (1999)] for pairing heaps [M.L. Fredman, R. Sedgewick, D.D. Sleator, and R.E. Tarjan. Algorithmica 1(1):111-129 (1986)] and surpasses it for pure-heap variants of numerous other heaps with augmented data, such as pointer rank-pairing heaps.
    Comment: arXiv admin note: substantial text overlap with arXiv:1302.664