Explicit deterministic constructions for membership in the bitprobe model
We study time-space tradeoffs for the static membership problem in the bit-probe model. The problem is to represent a set of size up to n from a universe of size m using a small number of bits, so that given an element of the universe, its membership in the set can be determined with as few bit probes to the representation as possible. We show several deterministic upper bounds, by explicit constructions, for the case when the number of bit probes is small, culminating in one that uses o(m) bits of space where membership can be determined with ⌈lg lg n⌉ + 2 adaptive bit probes. We also show two tight lower bounds on space for a restricted two-probe adaptive scheme.
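The two extremes of the tradeoff described above are easy to illustrate. A minimal sketch (not the paper's construction): the trivial scheme stores the characteristic vector of the set, using m bits and answering any query with a single bit probe; the paper's contribution is schemes that use far fewer than m bits at the cost of a few more probes.

```python
# Trivial one-probe membership scheme: store the characteristic vector of S.
# Space: m bits. Query cost: exactly one bit probe.

def build_char_vector(S, m):
    """Encode subset S of universe {0, ..., m-1} as m bits."""
    return [1 if x in S else 0 for x in range(m)]

def query(table, x):
    """Answer 'is x in S?' with a single bit probe."""
    return table[x] == 1

table = build_char_vector({2, 5, 7}, m=10)
assert query(table, 5) and not query(table, 3)
```

This baseline makes the tradeoff concrete: o(m) space with ⌈lg lg n⌉ + 2 probes, as in the paper's final construction, beats the m bits used here whenever the set is small relative to the universe.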
Error-Correcting Data Structures
We study data structures in the presence of adversarial noise. We want to
encode a given object in a succinct data structure that enables us to
efficiently answer specific queries about the object, even if the data
structure has been corrupted by a constant fraction of errors. This new model
is the common generalization of (static) data structures and locally decodable
error-correcting codes. The main issue is the tradeoff between the space used
by the data structure and the time (number of probes) needed to answer a query
about the encoded object. We prove a number of upper and lower bounds on
various natural error-correcting data structure problems. In particular, we
show that the optimal length of error-correcting data structures for the
Membership problem (where we want to store subsets of size s from a universe of
size n) is closely related to the optimal length of locally decodable codes for
s-bit strings. Comment: 15 pages LaTeX; an abridged version will appear in the Proceedings of the STACS 2009 conference.
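The space/probe tradeoff under adversarial noise can be illustrated with the simplest possible error-correcting membership structure (an illustrative sketch, not the paper's construction): repeat each bit of the characteristic vector r times and decode by majority vote, so a query costs r probes and survives corruption of fewer than r/2 copies of the probed bit.

```python
# Repetition-code membership structure: r copies per bit, majority decoding.
# Space blows up by a factor of r; query cost is r bit probes.

R = 5  # repetition factor (assumed parameter for the sketch)

def encode(S, m):
    """Store each bit of the characteristic vector of S, repeated R times."""
    return [1 if x in S else 0 for x in range(m) for _ in range(R)]

def query(codeword, x):
    probes = codeword[x * R:(x + 1) * R]   # R bit probes
    return sum(probes) > R // 2            # majority decode

cw = encode({1, 4}, m=6)
cw[1 * R] ^= 1                             # adversarially flip one copy of bit 1
assert query(cw, 1) and not query(cw, 2)
```

The paper's point is that one can do much better than this naive blowup: the optimal length is governed by locally decodable codes, which protect against a constant fraction of errors over the whole codeword rather than per bit.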
Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation
We construct efficient data structures that are resilient against a constant
fraction of adversarial noise. Our model requires that the decoder answers most
queries correctly with high probability and for the remaining queries, the
decoder with high probability either answers correctly or declares "don't
know." Furthermore, if there is no noise on the data structure, it answers all
queries correctly with high probability. Our model is the common generalization
of a model proposed recently by de Wolf and the notion of "relaxed locally
decodable codes" developed in the PCP literature.
We measure the efficiency of a data structure in terms of its length,
measured by the number of bits in its representation, and query-answering time,
measured by the number of bit-probes to the (possibly corrupted)
representation. In this work, we study two data structure problems: membership
and polynomial evaluation. We show that these two problems have constructions
that are simultaneously efficient and error-correcting. Comment: An abridged version of this paper appears in STACS 2010.
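The "don't know" semantics described above can be sketched with a toy decoder (a hedged illustration, not the paper's actual scheme): probe r copies of the requested bit and answer only when a clear super-majority emerges; otherwise declare "don't know" rather than risk a wrong answer.

```python
# Toy decoder with "don't know" answers: answer only on a super-majority.
# R and MARGIN are assumed parameters for this sketch.

R = 9        # copies stored per bit
MARGIN = 7   # votes required for a confident answer

def encode(S, m):
    return [1 if x in S else 0 for x in range(m) for _ in range(R)]

def query(codeword, x):
    ones = sum(codeword[x * R:(x + 1) * R])   # R bit probes
    if ones >= MARGIN:
        return True
    if R - ones >= MARGIN:
        return False
    return "don't know"                       # corruption too heavy to decide

cw = encode({0}, m=4)
assert query(cw, 0) is True
for i in range(3):                            # heavily corrupt bit 2's copies
    cw[2 * R + i] ^= 1
assert query(cw, 2) == "don't know"
```

Note how this matches the model: with no noise every answer is correct, and under noise the decoder either answers correctly or abstains; it never confidently lies.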
The Quantum Complexity of Set Membership
We study the quantum complexity of the static set membership problem: given a
subset S (|S| \leq n) of a universe of size m (m \gg n), store it as a table of
bits so that queries of the form `Is x \in S?' can be answered. The goal is to
use a small table and yet answer queries using few bitprobes. This problem was
considered recently by Buhrman, Miltersen, Radhakrishnan and Venkatesh, where
lower and upper bounds were shown for this problem in the classical
deterministic and randomized models. In this paper, we formulate this problem
in the "quantum bitprobe model" and show tradeoff results between space and
time. In this model, the storage scheme is classical but the query scheme is quantum. We show, roughly speaking, that similar lower bounds hold in the quantum model as in the classical model, which imply that the classical upper bounds are more or less tight even in the quantum case. Our lower bounds are proved using linear algebraic techniques. Comment: 19 pages, a preliminary version appeared in FOCS 2000. This is the journal version, which will appear in Algorithmica (Special issue on Quantum Computation and Quantum Cryptography). This version corrects some bugs in the parameters of some theorems.
Compressing Sparse Sequences under Local Decodability Constraints
We consider a variable-length source coding problem subject to local
decodability constraints. In particular, we investigate the blocklength scaling
behavior attainable by encodings of sparse binary sequences, under the
constraint that any source bit can be correctly decoded upon probing at most a
fixed number of codeword bits. We consider both adaptive and non-adaptive access models,
and derive upper and lower bounds that often coincide up to constant factors.
Notably, such a characterization for the fixed-blocklength analog of our
problem remains unknown, despite considerable research over the last three
decades. Connections to communication complexity are also briefly discussed. Comment: 8 pages, 1 figure. First five pages to appear in the 2015 International Symposium on Information Theory. This version contains supplementary material.
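The interplay between compression and local decodability can be illustrated with a toy block scheme (not the paper's encoding): split the source into fixed-size blocks, store each all-zero block as a single 0 flag and each non-empty block as a 1 flag plus its raw bits, and keep an index of block offsets. Decoding one source bit then probes the index entry, the flag, and at most b payload bits, while sparse inputs compress because empty blocks cost one bit each.

```python
# Toy locally decodable encoding of a sparse bit sequence.
# B is an assumed block-size parameter for the sketch.

B = 4  # block size

def encode(bits):
    payload, offsets = [], []
    for i in range(0, len(bits), B):
        offsets.append(len(payload))
        block = bits[i:i + B]
        if any(block):
            payload += [1] + block      # flag 1 + raw block bits
        else:
            payload.append(0)           # flag 0; empty block elided
    return offsets, payload

def decode_bit(encoding, i):
    offsets, payload = encoding
    start = offsets[i // B]
    if payload[start] == 0:             # empty block: one payload probe
        return 0
    return payload[start + 1 + i % B]   # probe inside the stored block

src = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
enc = encode(src)
assert all(decode_bit(enc, i) == src[i] for i in range(len(src)))
```

The paper's question is how small the total encoding length can be made under such probe budgets, in both adaptive and non-adaptive access models; this sketch only shows why sparsity and locality are in tension with compression.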
Data Structures in Classical and Quantum Computing
This survey summarizes several results about quantum computing related to
(mostly static) data structures. First, we describe classical data structures
for the set membership and the predecessor search problems: Perfect Hash tables
for set membership by Fredman, Komlós and Szemerédi and a data
structure by Beame and Fich for predecessor search. We also prove results about
their space complexity (how many bits are required) and time complexity (how
many bits have to be read to answer a query). After that, we turn our attention
to classical data structures with quantum access. In the quantum access model,
data is stored in classical bits, but they can be accessed in a quantum way: We
may read several bits in superposition for unit cost. We give proofs for lower
bounds in this setting that show that the classical data structures from the
first section are, in some sense, asymptotically optimal - even in the quantum
model. In fact, these proofs are simpler and give stronger results than
previous proofs for the classical model of computation. The lower bound for set
membership was proved by Radhakrishnan, Sen and Venkatesh and the result for
the predecessor problem by Sen and Venkatesh. Finally, we examine fully quantum
data structures. Instead of encoding the data in classical bits, we now encode
it in qubits. We allow any unitary operation or measurement in order to answer
queries. We describe one data structure by de Wolf for the set membership
problem and also a general framework using fully quantum data structures in
quantum walks by Jeffery, Kothari and Magniez.
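The two-level perfect-hashing idea behind the Fredman-Komlós-Szemerédi structure mentioned above can be sketched as follows (a simplified illustration using Python's built-in hash with random salts, rather than the paper's carefully chosen universal hash families): a first-level hash splits the set into buckets, and each bucket gets a second-level table of quadratic size with a salt retried until it is collision-free.

```python
# Sketch of two-level (FKS-style) perfect hashing for set membership.
import random

def build_fks(S, m):
    n = max(len(S), 1)
    top = [[] for _ in range(n)]
    for x in S:
        top[x % n].append(x)              # first level: split into n buckets
    tables = []
    for bucket in top:
        size = max(len(bucket) ** 2, 1)   # quadratic space per bucket
        while True:                       # retry salts until injective
            salt = random.randrange(1 << 30)
            slots = [None] * size
            ok = True
            for x in bucket:
                j = hash((salt, x)) % size
                if slots[j] is not None:
                    ok = False
                    break
                slots[j] = x
            if ok:
                tables.append((salt, slots))
                break
    return n, tables

def member(fks, x):
    n, tables = fks
    salt, slots = tables[x % n]
    return slots[hash((salt, x)) % len(slots)] == x   # O(1) probes

fks = build_fks({3, 17, 42, 99}, m=1000)
assert member(fks, 42) and not member(fks, 7)
```

In the actual FKS analysis, a first-level function is chosen so that the sum of squared bucket sizes is O(n), giving linear total space with worst-case constant query time; the sketch above only conveys the two-level shape of the construction.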
Implementation and Deployment of a Distributed Network Topology Discovery Algorithm
In the past few years, the network measurement community has been interested
in the problem of internet topology discovery using a large number (hundreds or
thousands) of measurement monitors. The standard way to obtain information
about the internet topology is to use the traceroute tool from a small number
of monitors. Recent papers have made the case that increasing the number of
monitors will give a more accurate view of the topology. However, scaling up
the number of monitors is not a trivial process. Duplication of effort close to
the monitors wastes time by reexploring well-known parts of the network, and
close to destinations might appear to be a distributed denial-of-service (DDoS)
attack as the probes converge from a set of sources towards a given
destination. In prior work, authors of this report proposed Doubletree, an
algorithm for cooperative topology discovery, that reduces the load on the
network, i.e., router IP interfaces and end-hosts, while discovering almost as
many nodes and links as standard approaches based on traceroute. This report
presents our open-source and freely downloadable implementation of Doubletree
in a tool we call traceroute@home. We describe the deployment and validation of
traceroute@home on the PlanetLab testbed and we report on the lessons learned
from this experience. We discuss how traceroute@home can be developed further and present ideas for future improvements.
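The Doubletree idea summarized above can be sketched in a few lines (a hedged simplification that ignores real probing details): each monitor starts probing at an intermediate hop h, probes forward toward the destination until it sees an (interface, destination) pair already in the shared global stop set, and probes backward toward itself until it hits an interface in its own local stop set, thereby avoiding re-exploration near both monitors and destinations.

```python
# Toy model of Doubletree's stop-set logic. A "path" is the list of router
# interfaces from the monitor (index 0) to the destination (last index).

def doubletree_probe(path, h, local_stop, global_stop):
    dest = path[-1]
    discovered = []
    for hop in range(h, len(path)):            # forward phase, toward dest
        iface = path[hop]
        if (iface, dest) in global_stop:       # someone already probed this
            break
        discovered.append(iface)
        global_stop.add((iface, dest))
    for hop in range(h - 1, -1, -1):           # backward phase, toward self
        iface = path[hop]
        if iface in local_stop:                # this monitor probed it before
            break
        discovered.append(iface)
        local_stop.add(iface)
    return discovered

local_stop, global_stop = set(), set()
first = doubletree_probe(["a", "b", "c", "d"], 2, local_stop, global_stop)
second = doubletree_probe(["a", "b", "e", "d"], 2, local_stop, global_stop)
assert "d" in first and "b" not in second      # second probe stopped early
```

The second probe discovers only the new interface "e": backward probing halts as soon as it reaches territory the monitor already explored, which is exactly the redundancy reduction the report measures on PlanetLab.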