
    Priority Queues with Multiple Time Fingers

    A priority queue is presented that supports the operations insert and find-min in worst-case constant time, and delete and delete-min on an element x in worst-case O(lg(min{w_x, q_x}+2)) time, where w_x (respectively q_x) is the number of elements inserted after x (respectively before x) that are still present at the time of x's deletion. Our priority queue thus has both the working-set and the queueish properties; more strongly, it satisfies these properties in the worst-case sense. We also define a new distribution-sensitive property, the time-finger property, which encapsulates and generalizes both the working-set and queueish properties, and we present a priority queue that satisfies it. In addition, we prove the strong result that the working-set property is equivalent to the unified bound (the minimum per operation among the static-finger, static-optimality, and working-set bounds). This latter result is of considerable interest in itself, as it had gone unnoticed since the introduction of these bounds by Sleator and Tarjan [JACM 1985]. Accordingly, our priority queue also satisfies other distribution-sensitive properties such as the static-finger, static-optimality, and unified bounds.
    Comment: 14 pages, 4 figures
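
    To make the cost model concrete, the toy Python sketch below tracks insertion timestamps and reports the parameters w_x and q_x when an element is deleted. It is not the paper's data structure (delete here costs O(n) time, and the class name TimestampedPQ and its methods are invented for illustration); it only shows what the O(lg(min{w_x, q_x}+2)) bound is measured against.

```python
import heapq
import itertools

class TimestampedPQ:
    """Toy priority queue that records, for each deletion, the parameters
    w_x and q_x of the working-set and queueish bounds. NOT the paper's
    structure: delete here costs O(n) time; the point is only to make
    the quantities in the O(lg(min{w_x, q_x}+2)) bound concrete."""

    def __init__(self):
        self._heap = []                  # (key, timestamp) pairs
        self._clock = itertools.count()  # global insertion counter
        self._alive = {}                 # timestamp -> key, elements still present

    def insert(self, key):
        t = next(self._clock)
        heapq.heappush(self._heap, (key, t))
        self._alive[t] = key
        return t                         # handle used later for delete

    def find_min(self):
        while self._heap and self._heap[0][1] not in self._alive:
            heapq.heappop(self._heap)    # discard lazily deleted entries
        return self._heap[0][0] if self._heap else None

    def delete(self, t):
        """Delete the element inserted at time t; return (w_x, q_x), the
        numbers of live elements inserted after and before it."""
        w = sum(1 for s in self._alive if s > t)
        q = sum(1 for s in self._alive if s < t)
        del self._alive[t]
        return w, q
```

    Deleting an element soon after inserting it (small w_x) or one of the oldest live elements (small q_x) is cheap under the bound; the paper's structure achieves this cost in the worst case.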

    Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest

    The watershed algorithm is one of the classical algorithms of mathematical morphology. Lotufo et al. published a principle for computing the watershed by means of the Image Foresting Transform (IFT), which computes a shortest-path forest from given markers. The algorithm was described for the 2D case (an image) without a detailed discussion of its computational and memory demands on real datasets. Since the IFT cleverly solves the problem of plateaus and gives precise results when thin objects have to be segmented, it is natural to apply it to 3D datasets, keeping the higher memory consumption of the 3D case as low as possible without losing the low asymptotic time complexity of O(m+C) (or the real computation speed). The main goal of this paper is an implementation of the IFT algorithm with a priority queue with buckets, carefully tuned to reach the lowest possible memory consumption. The paper presents five possible modifications and implementation methods for the IFT algorithm. All presented implementations keep the time complexity of the standard priority queue with buckets, but the best one minimizes costly memory allocation and needs only 19-45% of the memory for typical 3D medical imaging datasets. The memory savings were achieved by a simplification of the IFT algorithm that stores more elements in temporary structures, but these elements are simpler and thus need less memory. The best presented modification allows segmentation of large 3D medical datasets (up to 512x512x680 voxels) with 12 or 16 bits per voxel on currently available PC-based workstations.
    Comment: v1: 10 pages, 6 figures, 7 tables. EUROGRAPHICS conference, Manchester, UK, 2001. v2: 12 pages, reformatted for letter, corrected IFT to "Image Foresting Transform"
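
    The heart of the O(m+C) bound is a monotone bucket queue: an array of FIFO buckets indexed by integer path cost. The hedged Python sketch below pairs such a queue with a minimal 4-connected watershed-from-markers loop in the spirit of the IFT (path cost = maximum gray value along the path). The names BucketQueue and watershed_from_markers are invented for illustration, and none of the paper's memory-saving modifications are modeled.

```python
from collections import deque

class BucketQueue:
    """Monotone integer priority queue: an array of FIFO buckets plus a
    nondecreasing cursor, giving O(1) push and O(1) amortized pop."""

    def __init__(self, max_priority):
        self._buckets = [deque() for _ in range(max_priority + 1)]
        self._cursor = 0
        self._size = 0

    def push(self, priority, item):
        assert priority >= self._cursor  # monotonicity keeps the scan linear
        self._buckets[priority].append(item)
        self._size += 1

    def pop(self):
        while not self._buckets[self._cursor]:
            self._cursor += 1            # advances at most C times in total
        self._size -= 1
        return self._cursor, self._buckets[self._cursor].popleft()

    def __len__(self):
        return self._size

def watershed_from_markers(image, markers):
    """IFT-style watershed on a 2D grid with 4-connectivity: each pixel
    receives the label of the marker reachable by the path whose maximum
    gray value is smallest."""
    H, W = len(image), len(image[0])
    INF = float("inf")
    cost = [[INF] * W for _ in range(H)]
    label = [[0] * W for _ in range(H)]
    q = BucketQueue(max(max(row) for row in image))
    for (y, x), lab in markers.items():
        cost[y][x] = image[y][x]
        label[y][x] = lab
        q.push(cost[y][x], (y, x))
    while len(q):
        c, (y, x) = q.pop()
        if c > cost[y][x]:
            continue                     # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                nc = max(c, image[ny][nx])  # path cost = max gray value on path
                if nc < cost[ny][nx]:
                    cost[ny][nx] = nc
                    label[ny][nx] = label[y][x]
                    q.push(nc, (ny, nx))
    return label
```

    For example, watershed_from_markers(img, {(0, 0): 1, (4, 4): 2}) labels every pixel of a 5x5 image with 1 or 2 according to its cheapest-max-path marker.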

    Strengthened Lazy Heaps: Surpassing the Lower Bounds for Binary Heaps

    Let n denote the number of elements currently in a data structure. An in-place heap is stored in the first n locations of an array, uses O(1) extra space, and supports the operations: minimum, insert, and extract-min. We introduce an in-place heap for which minimum and insert take O(1) worst-case time, and extract-min takes O(lg n) worst-case time and involves at most lg n + O(1) element comparisons. The achieved bounds are optimal to within additive constant terms for the number of element comparisons. In particular, the comparison bounds for both insert and extract-min, as well as the time bound for insert, surpass the corresponding lower bounds known for binary heaps, even though our data structure is similar. In a binary heap, when viewed as a nearly complete binary tree, every node other than the root obeys the heap property, i.e. the element at a node is not smaller than that at its parent. To surpass the lower bound for extract-min, we enforce a stronger property at the bottom levels of the heap: the element at any right child is not smaller than that at its left sibling. To surpass the lower bound for insert, we buffer insertions and allow O(lg^2 n) nodes to violate heap order in relation to their parents.
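
    The strengthened invariant pays off in sift-down: if the element at a left child is never larger than that at its right sibling, the smaller child is known without a comparison, so each level costs one comparison instead of two. A minimal Python sketch of this single idea follows; the function name sift_down_strengthened is invented, and the paper's structure maintains the invariant only at the bottom levels, buffers insertions, and restores the sibling order after swaps, none of which is modeled here.

```python
def sift_down_strengthened(a, i, n):
    """Sift-down assuming the strengthened invariant a[left] <= a[right]
    for every sibling pair below position i: the smaller child is always
    the left one, so each level costs one comparison instead of the two
    (child vs child, then parent vs smaller child) of a standard binary
    heap."""
    comparisons = 0
    while 2 * i + 1 < n:
        child = 2 * i + 1                # left child, smaller by invariant
        comparisons += 1
        if a[i] <= a[child]:
            break                        # a[i] <= left <= right: heap order holds
        a[i], a[child] = a[child], a[i]
        i = child
    return comparisons
```

    On a heap of height lg n this performs at most lg n + O(1) comparisons, matching the paper's extract-min bound.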

    Memory-Adjustable Navigation Piles with Applications to Sorting and Convex Hulls

    We consider space-bounded computations on a random-access machine (RAM) where the input is given on a read-only random-access medium, the output is to be produced to a write-only sequential-access medium, and the available workspace allows random reads and writes but is of limited capacity. The length of the input is N elements, the length of the output is limited by the computation, and the capacity of the workspace is O(S) bits for some predetermined parameter S. We present a state-of-the-art priority queue, called an adjustable navigation pile, for this restricted RAM model. Under some reasonable assumptions, our priority queue supports minimum and insert in O(1) worst-case time and extract in O(N/S + lg S) worst-case time for any S ≥ lg N. We show how to use this data structure to sort N elements and to compute the convex hull of N points in the two-dimensional Euclidean space in O(N^2/S + N lg S) worst-case time for any S ≥ lg N. By a known lower bound on the space-time product of any branching program for finding unique elements, both our sorting and convex-hull algorithms are optimal. The adjustable navigation pile has turned out to be useful in the design of other space-efficient algorithms, and we expect that it will find its way into yet other applications.
    Comment: 21 pages
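
    For intuition about the model, the hedged Python sketch below sorts a read-only input using a workspace of roughly S elements: each pass scans the input and selects the next batch of the S smallest keys with a max-heap. This simple baseline runs in O((N^2/S) lg S) time, which the paper's adjustable navigation pile improves to O(N^2/S + N lg S) using only O(S) bits; the function sort_readonly is invented for illustration and assumes distinct keys.

```python
import heapq

def sort_readonly(inp, S):
    """Baseline space-bounded sort for a read-only input, assuming
    distinct keys: each pass scans the whole input and keeps the S
    smallest not-yet-emitted keys in a max-heap (stored negated), so
    the output is produced in sorted batches of S. About N/S passes of
    N heap operations each, i.e. O((N^2/S) lg S) time in O(S) words of
    workspace."""
    out, last, N = [], None, len(inp)
    while len(out) < N:
        heap = []                        # max-heap of candidates, negated
        for x in inp:                    # read-only scan of the input
            if last is not None and x <= last:
                continue                 # already emitted in an earlier pass
            if len(heap) < S:
                heapq.heappush(heap, -x)
            elif x < -heap[0]:
                heapq.heapreplace(heap, -x)
        batch = sorted(-v for v in heap)
        out.extend(batch)
        last = batch[-1]                 # threshold for the next pass
    return out
```

    For example, sort_readonly(keys, S=64) makes about ceil(N/64) scans over the input, trading passes for workspace exactly as the restricted RAM model demands.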