Soft Sequence Heaps
Chazelle [JACM00] introduced the soft heap as a building block for efficient
minimum spanning tree algorithms, and recently Kaplan et al. [SOSA2019] showed
how soft heaps can be applied to achieve simpler algorithms for various
selection problems. A soft heap trades off accuracy for efficiency by allowing
εN of the items in a heap to be corrupted after a total of N
insertions, where a corrupted item is an item with an artificially increased key
and 0 < ε ≤ 1/2 is a fixed error parameter. Chazelle's soft heaps
are based on binomial trees and support insertions in amortized O(lg(1/ε))
time and extract-min operations in amortized O(1) time.
In this paper we explore the design space of soft heaps. The main
contribution of this paper is an alternative soft heap implementation based on
merging sorted sequences, with time bounds matching those of Chazelle's soft
heaps. We also discuss a variation of the soft heap by Kaplan et al.
[SICOMP2013], where we avoid performing insertions lazily. It is based on
ternary trees instead of binary trees and matches the time bounds of Kaplan et
al., i.e. amortized O(1) insertions and amortized O(lg(1/ε))
extract-min. Both our data structures only introduce corruptions after
extract-min operations, which return the set of items corrupted by the
operation.
Comment: 16 pages, 3 figures
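The building block named in the abstract, merging sorted sequences, can be illustrated with a toy priority queue that keeps sorted runs and merges runs of equal length, binary-counter style. This is only the uncorrupted skeleton of the idea, with illustrative names (`SequenceHeap`, `_merge`); the paper's soft version additionally corrupts (prunes) items during merges to obtain its amortized bounds.

```python
class SequenceHeap:
    """Toy priority queue built from sorted runs, merged like a binary
    counter; only a sketch of the sorted-sequence skeleton, without the
    corruption machinery or the paper's time bounds."""

    def __init__(self):
        self.runs = []                # each run sorted DESCENDING: min at the end

    def insert(self, x):
        run = [x]
        # Binary-counter style: merge while the last run has equal length.
        while self.runs and len(self.runs[-1]) == len(run):
            run = self._merge(self.runs.pop(), run)
        self.runs.append(run)

    @staticmethod
    def _merge(a, b):
        # Standard merge of two descending runs.
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] >= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        out.extend(a[i:]); out.extend(b[j:])
        return out

    def extract_min(self):
        # The overall minimum is the smallest among the run minima (tails).
        k = min(range(len(self.runs)), key=lambda r: self.runs[r][-1])
        x = self.runs[k].pop()
        if not self.runs[k]:
            del self.runs[k]
        return x
```

With at most O(log n) runs at any time, scanning the run tails for the minimum stays cheap; the soft variant buys further speed by letting some keys rise.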
An In-Place Sorting with O(n log n) Comparisons and O(n) Moves
We present the first in-place algorithm for sorting an array of size n that
performs, in the worst case, at most O(n log n) element comparisons and O(n)
element transports.
This solves a long-standing open problem, stated explicitly, e.g., in [J.I.
Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13,
374-393, 1992], of whether there exists a sorting algorithm that matches the
asymptotic lower bounds on all computational resources simultaneously.
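The tension the paper resolves can be seen by counting both resources in a classic algorithm: selection sort performs only O(n) element moves but Θ(n²) comparisons, while comparison-optimal algorithms such as heapsort move elements Θ(n log n) times. A counting sketch (names illustrative):

```python
def selection_sort_counting(a):
    """Selection sort with resource counters: Theta(n^2) comparisons but
    only O(n) element moves (at most three per swap), the opposite
    trade-off from comparison-optimal algorithms like heapsort."""
    comparisons = moves = 0
    n = len(a)
    for i in range(n):
        m = i
        for j in range(i + 1, n):
            comparisons += 1
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]
            moves += 3                # one swap = three element transports
    return comparisons, moves
```

Achieving O(n log n) comparisons and O(n) moves at once, in place, is exactly what the abstract claims.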
Summary-based inference of quantitative bounds of live heap objects
This article presents a symbolic static analysis for computing parametric upper bounds on the number of simultaneously live objects of sequential Java-like programs. Inferring the peak amount of irreclaimable objects is the cornerstone for analyzing the potential heap-memory consumption of stand-alone applications or libraries. The analysis builds method-level summaries quantifying the peak number of live objects and the number of escaping objects. Summaries are built by resorting to the summaries of their callees. The usability, scalability and precision of the technique are validated by successfully predicting the object heap usage of a medium-size, real-life application which is significantly larger than other previously reported case studies.
Authors: Victor Adrian Braberman and Diego David Garbervetsky (Universidad de Buenos Aires, Departamento de Computación; CONICET, Argentina); Samuel Hym (Université Lille 3, France); Sergio Fabian Yovine (Universidad de Buenos Aires, Departamento de Computación; CONICET, Argentina).
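The compositional flavor of the analysis can be sketched with a toy numeric model (the paper's summaries are symbolic and parametric; `Summary`, `summarize`, and the composition rule below are simplifying assumptions): a method's peak is the maximum, over its call sites, of the objects it currently holds live plus the callee's peak, while callees' escaping objects accumulate across calls.

```python
class Summary:
    """Toy method summary: peak objects simultaneously live during the
    call, and objects still live (escaping) when the call returns."""
    def __init__(self, peak, escaping):
        self.peak = peak
        self.escaping = escaping

def summarize(own_live, callee_summaries, own_escaping):
    # Compose a caller's summary from its callees' summaries (in call order).
    held = own_live            # caller's own objects, live across the calls
    peak = held
    for c in callee_summaries:
        peak = max(peak, held + c.peak)  # callee's peak sits on top of held objects
        held += c.escaping               # assume callee's escaping objects stay live
    return Summary(peak, own_escaping + sum(c.escaping for c in callee_summaries))
```

For example, a leaf method allocating 3 objects of which 1 escapes gets peak 3; a caller that holds 2 objects and invokes it twice peaks at 2 + 1 + 3 = 6, the second call's peak on top of the first call's escaped object.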
New Algorithms for Position Heaps
We present several results about position heaps, a relatively new alternative
to suffix trees and suffix arrays. First, we show that, if we limit the maximum
length of patterns to be sought, then we can also limit the height of the heap
and reduce the worst-case cost of insertions and deletions. Second, we show how
to build a position heap in linear time independent of the size of the
alphabet. Third, we show how to augment a position heap such that it supports
access to the corresponding suffix array, and vice versa. Fourth, we introduce
a variant of a position heap that can be simulated efficiently by a compressed
suffix array with a linear number of extra bits.
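For readers unfamiliar with the structure, a position heap is a trie in which each node is labeled by one text position, built by inserting suffixes so that each insertion adds exactly one node. Below is a naive quadratic-time construction sketch (names like `Node` and `build_position_heap` are illustrative; the paper's contribution includes a linear-time, alphabet-independent build).

```python
class Node:
    def __init__(self, pos=None):
        self.pos = pos            # position label; None only at the root
        self.children = {}        # edge character -> child node

def build_position_heap(s):
    """Insert the suffixes of s from shortest to longest; each insertion
    adds exactly one node, labeled with the suffix's starting position."""
    root = Node()
    n = len(s)
    for i in range(n - 1, -1, -1):          # positions right to left
        node, j = root, i
        while s[j] in node.children:        # follow the longest existing path
            node = node.children[s[j]]
            j += 1
        # j < n holds here: only n-1-i nodes exist yet, so the walk
        # cannot consume the whole suffix s[i:].
        node.children[s[j]] = Node(i)
    return root
```

Searching for a pattern walks down from the root: positions labeling nodes on the path are candidate occurrences to verify, and if the pattern is exhausted inside the trie, every label in the reached subtree is a match.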
Strengthened Lazy Heaps: Surpassing the Lower Bounds for Binary Heaps
Let n denote the number of elements currently in a data structure. An
in-place heap is stored in the first n locations of an array, uses O(1)
extra space, and supports the operations: minimum, insert, and extract-min. We
introduce an in-place heap, for which minimum and insert take O(1) worst-case
time, and extract-min takes O(lg n) worst-case time and involves at most
lg n + O(1) element comparisons. The achieved bounds are optimal to within
additive constant terms for the number of element comparisons. In particular,
these bounds for both insert and extract-min (and the time bound for insert)
surpass the corresponding lower bounds known for binary heaps, though our data
structure is similar. In a binary heap, when viewed as a nearly complete binary
tree, every node other than the root obeys the heap property, i.e. the element
at a node is not smaller than that at its parent. To surpass the lower bound
for extract-min, we enforce a stronger property at the bottom levels of the
heap: the element at any right child is not smaller than that at its left
sibling. To surpass the lower bound for insert, we buffer insertions and allow
nodes to violate heap order in relation to their parents.
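The two invariants are easy to state over the usual 0-based array layout (children of index i at 2i+1 and 2i+2). A small checker sketch (the function name is illustrative, and for simplicity it applies the sibling rule everywhere, whereas the paper enforces it only at the bottom levels):

```python
def check_strengthened_heap(a):
    """Validate a 0-based array min-heap together with the strengthened
    sibling rule: the element at any right child is not smaller than the
    one at its left sibling."""
    for i in range(1, len(a)):
        if a[i] < a[(i - 1) // 2]:     # ordinary heap order vs. parent
            return False
    for i in range(2, len(a), 2):      # right children sit at even indices
        if a[i] < a[i - 1]:            # right child vs. its left sibling
            return False
    return True
```

Ordering siblings lets extract-min descend while comparing against only one child per level near the bottom, which is where the saved comparisons come from.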
Smooth heaps and a dual view of self-adjusting data structures
We present a new connection between self-adjusting binary search trees (BSTs)
and heaps, two fundamental, extensively studied, and practically relevant
families of data structures. Roughly speaking, we map an arbitrary heap
algorithm within a natural model, to a corresponding BST algorithm with the
same cost on a dual sequence of operations (i.e. the same sequence with the
roles of time and key-space switched). This is the first general transformation
between the two families of data structures.
There is a rich theory of dynamic optimality for BSTs (i.e. the theory of
competitiveness between BST algorithms). The lack of an analogous theory for
heaps has been noted in the literature. Through our connection, we transfer all
instance-specific lower bounds known for BSTs to a general model of heaps,
initiating a theory of dynamic optimality for heaps.
On the algorithmic side, we obtain a new, simple and efficient heap
algorithm, which we call the smooth heap. We show the smooth heap to be the
heap-counterpart of Greedy, the BST algorithm with the strongest proven and
conjectured properties from the literature, widely believed to be
instance-optimal. Assuming the optimality of Greedy, the smooth heap is also
optimal within our model of heap algorithms. As corollaries of results known
for Greedy, we obtain instance-specific upper bounds for the smooth heap, with
applications in adaptive sorting.
Intriguingly, the smooth heap, although derived from a non-practical BST
algorithm, is simple and easy to implement (e.g. it stores no auxiliary data
besides the keys and tree pointers). It can be seen as a variation on the
popular pairing heap data structure, extending it with a "power-of-two-choices"
type of heuristic.
Comment: Presented at STOC 2018; light revision, additional figure.
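Since the abstract positions the smooth heap as a variation on the pairing heap, a minimal pairing heap with the standard two-pass combining rule is a useful reference point (this sketch is the classic pairing heap, not the smooth heap; `PNode`, `meld`, etc. are illustrative names).

```python
class PNode:
    def __init__(self, key):
        self.key = key
        self.children = []

def meld(a, b):
    # Link two heap roots; the larger key becomes a child of the smaller.
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def insert(root, key):
    return meld(root, PNode(key))

def extract_min(root):
    # Two-pass scheme: meld children pairwise left to right,
    # then meld the results right to left.
    kids = root.children
    paired = [meld(kids[i], kids[i + 1]) if i + 1 < len(kids) else kids[i]
              for i in range(0, len(kids), 2)]
    new_root = None
    for t in reversed(paired):
        new_root = meld(new_root, t)
    return root.key, new_root
```

The smooth heap differs in how the orphaned children are recombined after extract-min, which is where its "power-of-two-choices" heuristic enters.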
The Meaning of Memory Safety
We give a rigorous characterization of what it means for a programming
language to be memory safe, capturing the intuition that memory safety supports
local reasoning about state. We formalize this principle in two ways. First, we
show how a small memory-safe language validates a noninterference property: a
program can neither affect nor be affected by unreachable parts of the state.
Second, we extend separation logic, a proof system for heap-manipulating
programs, with a memory-safe variant of its frame rule. The new rule is
stronger because it applies even when parts of the program are buggy or
malicious, but also weaker because it demands a stricter form of separation
between parts of the program state. We also consider a number of pragmatically
motivated variations on memory safety and the reasoning principles they
support. As an application of our characterization, we evaluate the security of
a previously proposed dynamic monitor for memory safety of heap-allocated data.
Comment: POST'18 final version.
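The noninterference reading of memory safety can be made concrete with a deliberately simple sketch (the paper's development is a formal one for an idealized language; this merely illustrates the intuition): a call can only read or write state reachable from its arguments, so state unreachable from them is untouched.

```python
def increment_all(lst):
    # In a memory-safe setting, this may touch only what is
    # reachable from its argument.
    for i in range(len(lst)):
        lst[i] += 1

secret = [42]            # unreachable from the argument below
data = [1, 2, 3]
increment_all(data)
assert data == [2, 3, 4]
assert secret == [42]    # the call can neither affect nor observe it
```

The frame-rule extension in the paper makes exactly this kind of local reasoning sound even when the called code is buggy or malicious.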
Memory usage verification using Hip/Sleek.
Embedded systems often come with constrained memory footprints. It is therefore essential to ensure that software running on such platforms fulfils memory usage specifications at compile-time, to prevent memory-related software failures after deployment. Previous proposals on memory usage verification are not satisfactory, as they usually can only handle restricted subsets of programs, especially when shared mutable data structures are involved. In this paper, we propose a simple but novel solution. We instrument programs with explicit memory operations so that memory usage verification can be done along with the verification of other properties, using the automated verification system Hip/Sleek recently developed by Chin et al. [10,19]. The instrumentation can be done automatically and is proven sound with respect to an underlying semantics. One immediate benefit is that we do not need to develop a specific system for memory usage verification from scratch. Another benefit is that we can verify more programs, especially those involving shared mutable data structures, which previous systems failed to handle, as evidenced by our experimental results.
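The instrumentation idea can be illustrated with a toy runtime counterpart (this is not Hip/Sleek syntax; `MemCounter`, `alloc`, and `build_list` are invented for this sketch): once allocation and disposal are explicit operations, a memory-usage specification becomes an ordinary assertion that a verifier, or here a runtime check, can discharge alongside other properties.

```python
class MemCounter:
    """Hypothetical runtime counterpart of the instrumentation:
    explicit memory operations update a counter that is checked
    against the specified budget."""
    def __init__(self, capacity):
        self.capacity = capacity      # specified budget, in objects
        self.in_use = 0
        self.peak = 0

    def alloc(self, cells=1):
        self.in_use += cells
        assert self.in_use <= self.capacity, "memory usage specification violated"
        self.peak = max(self.peak, self.in_use)

    def free(self, cells=1):
        self.in_use -= cells

def build_list(mem, n):
    # Instrumented allocation of an n-node linked list: the inserted
    # mem.alloc() calls make heap consumption explicit.
    for _ in range(n):
        mem.alloc()

mem = MemCounter(capacity=10)
build_list(mem, 10)                   # fits exactly within the budget
```

In the paper this check is discharged statically by Hip/Sleek rather than at run time, which is what makes the approach a verification technique rather than a monitor.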