Efficient Management of Short-Lived Data
Motivated by the increasing prominence of loosely-coupled systems, such as
mobile and sensor networks, which are characterised by intermittent
connectivity and volatile data, we study the tagging of data with so-called
expiration times. More specifically, when data are inserted into a database,
they may be tagged with time values indicating when they expire, i.e., when
they are regarded as stale or invalid and thus are no longer considered part of
the database. In a number of applications, expiration times are known and can
be assigned at insertion time. We present data structures and algorithms for
online management of data tagged with expiration times. The algorithms are
based on fully functional, persistent treaps, which are a combination of binary
search trees with respect to a primary attribute and heaps with respect to a
secondary attribute. The primary attribute implements primary keys, and the
secondary attribute stores expiration times in a minimum heap, thus keeping a
priority queue of tuples to expire. A detailed and comprehensive experimental
study demonstrates the well-behavedness and scalability of the approach as well
as its efficiency with respect to a number of competitors.
Comment: switched to TimeCenter LaTeX style
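The structure described above can be sketched concretely. Below is a minimal, ephemeral (non-persistent) treap that is a binary search tree on the primary key and a min-heap on expiration time, so the tuple that expires first always sits at the root and expired tuples are popped from the top like a priority queue. This is an illustrative sketch only; the paper's actual data structures are fully functional, persistent treaps, and all names here are invented.

```python
class Node:
    def __init__(self, key, value, expiry):
        self.key, self.value, self.expiry = key, value, expiry
        self.left = self.right = None

def rotate_right(n):
    l = n.left
    n.left, l.right = l.right, n
    return l

def rotate_left(n):
    r = n.right
    n.right, r.left = r.left, n
    return r

def insert(root, key, value, expiry):
    # BST insert on the primary key, then rotate upward to restore
    # the min-heap order on expiration times.
    if root is None:
        return Node(key, value, expiry)
    if key < root.key:
        root.left = insert(root.left, key, value, expiry)
        if root.left.expiry < root.expiry:
            root = rotate_right(root)
    elif key > root.key:
        root.right = insert(root.right, key, value, expiry)
        if root.right.expiry < root.expiry:
            root = rotate_left(root)
    else:
        root.value = value  # changing expiry in place could break heap order
    return root

def delete_root(root):
    # Rotate the root down, always promoting the child with the
    # smaller expiry, until it can be dropped as a (near-)leaf.
    if root.left is None:
        return root.right
    if root.right is None:
        return root.left
    if root.left.expiry < root.right.expiry:
        root = rotate_right(root)
        root.right = delete_root(root.right)
    else:
        root = rotate_left(root)
        root.left = delete_root(root.left)
    return root

def expire(root, now):
    # The root always holds the minimum expiry, so stale tuples are
    # removed from the top until the root is still alive.
    while root is not None and root.expiry <= now:
        root = delete_root(root)
    return root

def search(root, key, now):
    while root is not None:
        if key == root.key:
            return root.value if root.expiry > now else None
        root = root.left if key < root.key else root.right
    return None

# Build, expire everything with expiry <= 10, then look up survivors.
root = None
for key, value, expiry in [(3, 't3', 5), (1, 't1', 20), (4, 't4', 8), (2, 't2', 30)]:
    root = insert(root, key, value, expiry)
root = expire(root, now=10)
```

After `expire(root, now=10)`, the tuples with expiry 5 and 8 are gone, while the ones expiring at 20 and 30 remain searchable.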
Top-Down Skiplists
We describe todolists (top-down skiplists), a variant of skiplists (Pugh
1990) that can execute searches using at most $\log_{2-\varepsilon} n + O(1)$
binary comparisons per search and that have amortized update time
$O(\varepsilon^{-1}\log n)$. A variant of todolists, called working-todolists,
can execute a search for any element $x$ using
$\log_{2-\varepsilon} w(x) + o(\log w(x))$ binary comparisons and have
amortized search time $O(\varepsilon^{-1} w(x)^{\varepsilon})$. Here, $w(x)$
is the "working-set number" of $x$. No previous data structure is known to
achieve a comparable bound on the number of binary comparisons. We show
through experiments that, if implemented carefully, todolists are comparable
to other common dictionary implementations in terms of insertion times and
outperform them in terms of search times.
Comment: 18 pages, 5 figures
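For context, the baseline that todolists refine is the classic Pugh-style skiplist, sketched below. This is an illustration of the underlying structure only, not the todolist variant itself; the class name, node layout, and fixed height cap are all choices made for brevity, and keys are assumed numeric.

```python
import random

class SkipNode:
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height  # one forward pointer per level

class SkipList:
    MAX_H = 16  # fixed height cap, enough for ~2^16 items at p = 1/2

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.head = SkipNode(float('-inf'), self.MAX_H)  # sentinel head

    def _random_height(self):
        # Geometric heights: each node is promoted with probability 1/2.
        h = 1
        while h < self.MAX_H and self.rng.random() < 0.5:
            h += 1
        return h

    def insert(self, key):
        # Record, per level, the last node strictly before the new key.
        update = [self.head] * self.MAX_H
        node = self.head
        for lvl in range(self.MAX_H - 1, -1, -1):
            while node.next[lvl] is not None and node.next[lvl].key < key:
                node = node.next[lvl]
            update[lvl] = node
        new = SkipNode(key, self._random_height())
        for lvl in range(len(new.next)):
            new.next[lvl] = update[lvl].next[lvl]
            update[lvl].next[lvl] = new

    def contains(self, key):
        # Top-down search: move right while the next key is smaller,
        # then drop a level; finish with one bottom-level step.
        node = self.head
        for lvl in range(self.MAX_H - 1, -1, -1):
            while node.next[lvl] is not None and node.next[lvl].key < key:
                node = node.next[lvl]
        node = node.next[0]
        return node is not None and node.key == key

sl = SkipList(seed=42)
for k in [5, 1, 9, 3]:
    sl.insert(k)
```

The top-down search loop above is the part todolists reorganize to drive the per-search comparison count down.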
Learning-Augmented B-Trees
We study learning-augmented binary search trees (BSTs) and B-Trees via Treaps
with composite priorities. The result is a simple search tree where the depth
of each item is determined by its predicted weight . To achieve the
result, each item has its composite priority
where is the uniform
random variable. This generalizes the recent learning-augmented BSTs
[Lin-Luo-Woodruff ICML`22], which only work for Zipfian distributions, to
arbitrary inputs and predictions. It also gives the first B-Tree data structure
that can provably take advantage of localities in the access sequence via
online self-reorganization. The data structure is robust to prediction errors
and handles insertions, deletions, as well as prediction updates.Comment: 25 page
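One well-known way to bias a treap's shape by per-item weights, shown here as an illustrative stand-in and not the paper's actual composite-priority construction, is to draw each item's priority as u^(1/w) for a uniform u in (0,1): heavier items stochastically receive larger priorities and settle nearer the root of a max-heap-ordered treap.

```python
import random

class TNode:
    def __init__(self, key, priority):
        self.key, self.priority = key, priority
        self.left = self.right = None

def priority_for(weight, rng):
    # u^(1/w): the larger the weight, the closer the priority tends
    # to be to 1, pulling the item toward the root (max-heap order).
    u = rng.random()
    return u ** (1.0 / weight)

def insert(root, key, priority):
    # Standard treap insert: BST insert on key, then a rotation if the
    # child's priority exceeds the parent's.
    if root is None:
        return TNode(key, priority)
    if key < root.key:
        root.left = insert(root.left, key, priority)
        if root.left.priority > root.priority:
            l = root.left
            root.left, l.right = l.right, root
            return l
    else:
        root.right = insert(root.right, key, priority)
        if root.right.priority > root.priority:
            r = root.right
            root.right, r.left = r.left, root
            return r
    return root

def inorder(n):
    return [] if n is None else inorder(n.left) + [n.key] + inorder(n.right)

def heap_ok(n):
    if n is None:
        return True
    for c in (n.left, n.right):
        if c is not None and c.priority > n.priority:
            return False
    return heap_ok(n.left) and heap_ok(n.right)

rng = random.Random(7)
weights = {1: 100.0, 2: 1.0, 3: 1.0, 4: 50.0, 5: 1.0}
root = None
for key, w in weights.items():
    root = insert(root, key, priority_for(w, rng))
# BST order on keys holds regardless of which priorities were drawn.
```

Whatever priorities are drawn, the treap invariants (in-order keys, heap-ordered priorities) hold; the weights only shift the expected depths.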
PERFORMANCE APPRAISAL OF TREAP AND HEAP SORT ALGORITHMS
The task of storing items so that an item can be retrieved quickly given its key is a ubiquitous problem in many organizations. A treap uses both a key and a priority for searching in databases. When the keys are drawn from a large totally ordered set, the items are usually stored in some sort of search tree. The simplest such tree is a binary search tree, in which a set X of n items is stored at the nodes of a rooted binary tree, with some item y ∈ X chosen to be stored at the root. A heap is an array object that can be viewed as a nearly complete binary tree, where each node of the tree corresponds to the array element that stores the node's value. Both algorithms were subjected to sorting under the same experimental environment and conditions, implemented by means of threads that invoke the two methods simultaneously. The server keeps records of individual search times, which formed the basis of the comparison. The experiments showed that the treap was faster than heapsort at sorting and at searching for elements on systems with homogeneous properties.
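The heap half of the comparison can be made concrete. Below is a textbook in-place heapsort (build a max-heap with sift-down, then repeatedly swap the root to the end), written in Python as an illustration rather than in whatever language the study actually used.

```python
def heapsort(items):
    # Sort a sequence by heapifying it into a max-heap and then
    # repeatedly moving the maximum to the end of the shrinking prefix.
    a = list(items)
    n = len(a)

    def sift_down(start, end):
        # Restore the max-heap property for the subtree rooted at start,
        # considering only indices < end.
        root = start
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1  # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):  # bottom-up heap construction
        sift_down(start, n)
    for end in range(n - 1, 0, -1):          # extract maxima one by one
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```

For example, `heapsort([5, 3, 8, 1])` returns `[1, 3, 5, 8]`.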
 
Invariant Synthesis for Incomplete Verification Engines
We propose a framework for synthesizing inductive invariants for incomplete
verification engines, which soundly reduce logical problems in undecidable
theories to decidable theories. Our framework is based on the counter-example
guided inductive synthesis principle (CEGIS) and allows verification engines to
communicate non-provability information to guide invariant synthesis. We show
precisely how the verification engine can compute such non-provability
information and how to build effective learning algorithms when invariants are
expressed as Boolean combinations of a fixed set of predicates. Moreover, we
evaluate our framework in two verification settings, one in which verification
engines need to handle quantified formulas and one in which verification
engines have to reason about heap properties expressed in an expressive but
undecidable separation logic. Our experiments show that our invariant synthesis
framework based on non-provability information can both effectively synthesize
inductive invariants and adequately strengthen contracts across a large suite
of programs.
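The CEGIS-style loop the framework builds on can be illustrated with a toy. Candidate invariants are conjunctions over a fixed predicate set, and a deliberately bounded exhaustive check stands in for the (possibly incomplete) verification engine; the predicate names and the tiny transition system are invented purely for illustration.

```python
# Toy CEGIS loop over a one-variable loop "x := x + 2" starting at 0.
# Candidate invariants are conjunctions over this fixed predicate set.
predicates = {
    'even': lambda x: x % 2 == 0,
    'nonneg': lambda x: x >= 0,
    'small': lambda x: x < 4,   # consistent with early states, not inductive
}

def transition(x):
    return x + 2  # loop body

def holds(inv, x):
    return all(predicates[name](x) for name in inv)

def cegis(init_state, bound=20):
    positives = [init_state]  # states the invariant must cover
    while True:
        # Learner: keep every predicate consistent with all positives.
        inv = [name for name, p in predicates.items()
               if all(p(s) for s in positives)]
        # "Verifier": bounded inductiveness check. Real engines are
        # incomplete in richer theories; the bound mimics that here.
        cex = None
        for s in range(0, bound + 1):
            t = transition(s)
            if holds(inv, s) and t <= bound and not holds(inv, t):
                cex = t  # s satisfies inv but its successor does not
                break
        if cex is None:
            return inv  # inductive w.r.t. the bounded check
        positives.append(cex)  # counterexample guides the next round
```

Starting from state 0, the loop first proposes `even ∧ nonneg ∧ small`, receives the counterexample state 4, and settles on `even ∧ nonneg`.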
Range-Based Set Reconciliation and Authenticated Set Representations
Range-based set reconciliation is a simple approach to efficiently computing
the union of two sets over a network, based on recursively partitioning the
sets and comparing fingerprints of the partitions to probabilistically detect
whether a partition requires further work. Whereas prior presentations of this
approach focus on specific fingerprinting schemes for specific use-cases, we
give a more generic description and analysis in the broader context of set
reconciliation. Precisely capturing the design space for fingerprinting schemes
allows us to survey for cryptographically secure schemes. Furthermore, we
reduce the time complexity of local computations by a logarithmic factor
compared to previous publications. In investigating secure associative hash
functions, we open up a new class of tree-based authenticated data structures
which need not prescribe a deterministic balancing scheme.
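The recursive scheme described above can be sketched end to end, simulating both parties locally over an integer key space. XOR-of-hashes is used as the range fingerprint purely for brevity; it is exactly the kind of scheme that is not secure against adversarial inputs, which is the abstract's motivation for surveying cryptographically secure alternatives.

```python
import hashlib

def fp(items):
    # Range fingerprint: XOR of truncated per-item SHA-256 hashes.
    # Order-independent and incrementally updatable, but NOT
    # collision-resistant against adversarial inputs.
    acc = 0
    for x in items:
        acc ^= int.from_bytes(hashlib.sha256(repr(x).encode()).digest()[:8], 'big')
    return acc

def reconcile(a, b, lo, hi):
    # Make sets a and b agree on the key range [lo, hi): compare
    # fingerprints, and on mismatch split the range and recurse until
    # singleton ranges expose the differing items.
    sa = {x for x in a if lo <= x < hi}
    sb = {x for x in b if lo <= x < hi}
    if fp(sa) == fp(sb):
        return  # fingerprints match: ranges assumed (probabilistically) equal
    if hi - lo == 1:
        a |= sb  # singleton range: exchange the actual items
        b |= sa
        return
    mid = (lo + hi) // 2
    reconcile(a, b, lo, mid)
    reconcile(a, b, mid, hi)

a, b = {1, 3, 5, 8}, {1, 4, 5}
reconcile(a, b, 0, 16)
# Both sets now hold the union {1, 3, 4, 5, 8}.
```

In a networked setting only the fingerprints and the items in mismatching singleton ranges would cross the wire; the local simulation above just mutates both sets in place.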