Weighted Reservoir Sampling from Distributed Streams
We consider message-efficient continuous random sampling from a distributed
stream, where the probability of inclusion of an item in the sample is
proportional to a weight associated with the item. The unweighted version,
where all weights are equal, is well studied, and admits tight upper and lower
bounds on message complexity. For weighted sampling with replacement, there is
a simple reduction to unweighted sampling with replacement. However, in many
applications the stream has only a few heavy items which may dominate a random
sample when chosen with replacement. Weighted sampling \textit{without
replacement} (weighted SWOR) eludes this issue, since such heavy items can be
sampled at most once.
In this work, we present the first message-optimal algorithm for weighted
SWOR from a distributed stream. Our algorithm also has optimal space and time
complexity. As an application of our algorithm for weighted SWOR, we derive the
first distributed streaming algorithms for tracking \textit{heavy hitters with
residual error}. Here the goal is to identify stream items that contribute
significantly to the residual stream, once the heaviest items are removed.
Residual heavy hitters generalize the notion of heavy hitters and are
important in streams that have a skewed distribution of weights. In addition to
the upper bound, we also provide a lower bound on the message complexity that
is nearly tight up to a factor. Finally, we use our weighted
sampling algorithm to improve the message complexity of distributed
tracking, also known as count tracking, which is a widely studied problem in
distributed streaming. We also derive a tight message lower bound, which settles
the message complexity of this fundamental problem.
Comment: To appear in PODS 201
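The weighted SWOR primitive in the abstract above can be illustrated with a minimal centralized sketch. This is not the paper's message-optimal distributed algorithm; it is the classic Efraimidis-Spirakis key technique, under which each item draws a key u^(1/w) and the k largest keys form the sample, so any single heavy item occupies at most one slot.

```python
import heapq
import random

def weighted_swor(stream, k, rng=None):
    """Weighted sampling without replacement (SWOR) via the classic
    Efraimidis-Spirakis key method: item (x, w) draws
    key = u ** (1 / w) with u ~ Uniform(0, 1), and the k items with
    the largest keys form the sample. Each item is sampled at most once."""
    rng = rng or random.Random()
    heap = []  # min-heap of (key, item): the current sample of size <= k
    for item, weight in stream:
        key = rng.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))  # evict the smallest key
    return [item for _, item in heap]
```

A heavy item can dominate a with-replacement sample, but here it fills at most one of the k slots, which is exactly the property the abstract exploits.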
Boosting the Basic Counting on Distributed Streams
We revisit the classic basic counting problem in the distributed streaming
model that was studied by Gibbons and Tirthapura (GT). In the solution for
maintaining an -estimate, as GT's method does, we make the following new
contributions: (1) For a bit stream of size , where each bit is 1 with
probability at least , we exponentially reduce the average total processing
time from GT's  to , thus providing the first
sublinear-time streaming algorithm for this problem. (2) In addition to an
overall much faster processing speed, our method offers a new tradeoff: a
lower accuracy demand (a larger value of ) yields a faster processing
speed, whereas GT's processing speed is  in any case, for any . (3) The
worst-case total time cost of our
method matches GT's , which is necessary but rarely
occurs in our method. (4) The space usage overhead in our method is a lower
order term compared with GT's space usage and occurs only times
during the stream processing and is too negligible to be detected by the
operating system in practice. We further validate these solid theoretical
results with experiments on both real-world and synthetic data, showing that
our method is faster than GT's by a factor of several to several thousand,
depending on the stream size and accuracy demands, without any detectable
space usage overhead. Our method is based on a faster sampling technique that
we design to boost GT's method, and we believe this technique may be of
independent interest.
Comment: 32 page
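The distributed basic counting problem above can be sketched at a high level. The sketch below is a generic threshold-reporting scheme in the spirit of the distributed streaming model, not the paper's faster sampling-based method: a site reports its local count of 1-bits to the coordinator only when the count has grown by a (1 + eps) factor, so the coordinator's view stays within a (1 + eps) relative error while only logarithmically many reports are sent.

```python
class SiteCounter:
    """Sketch of threshold-based distributed basic counting: a site
    reports its local count of 1-bits upstream only when it grows by
    a (1 + eps) factor, keeping the coordinator's view within a
    (1 + eps) relative error of the true count."""

    def __init__(self, eps, report):
        self.eps = eps        # relative accuracy demand
        self.count = 0        # exact local count of 1-bits
        self.reported = 0     # last value sent to the coordinator
        self.report = report  # callback that sends a count upstream

    def observe(self, bit):
        self.count += bit
        # Report only when the count exceeds the last report by (1 + eps).
        if self.count > (1 + self.eps) * self.reported:
            self.reported = self.count
            self.report(self.count)
```

Because reports are triggered only on (1 + eps)-factor growth, a stream of n ones causes about log base (1 + eps) of n reports; a larger eps (lower accuracy demand) means fewer reports, the kind of accuracy/cost tradeoff the abstract discusses.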
Stream Sampling for Frequency Cap Statistics
Unaggregated data, in streamed or distributed form, is prevalent and comes
from diverse application domains which include interactions of users with web
services and IP traffic. Data elements have {\em keys} (cookies, users,
queries) and elements with different keys interleave. Analytics on such data
typically utilizes statistics stated in terms of the frequencies of keys. The
two most common statistics are {\em distinct}, which is the number of active
keys in a specified segment, and {\em sum}, which is the sum of the frequencies
of keys in the segment. Both are special cases of {\em cap} statistics, defined
as the sum of frequencies {\em capped} by a parameter , which are popular in
online advertising platforms. Aggregation by key, however, is costly, requiring
state proportional to the number of distinct keys, and therefore we are
interested in estimating these statistics or more generally, sampling the data,
without aggregation. We present a sampling framework for unaggregated data that
uses a single pass (for streams) or two passes (for distributed data) and state
proportional to the desired sample size. Our design provides the first
effective solution for general frequency cap statistics. Our -capped
samples provide estimates with tight statistical guarantees for cap statistics
with and nonnegative unbiased estimates of {\em any} monotone
non-decreasing frequency statistics. An added benefit of our unified design is
facilitating {\em multi-objective samples}, which provide estimates with
statistical guarantees for a specified set of different statistics, using a
single, smaller sample.
Comment: 21 pages, 4 figures, preliminary version will appear in KDD 201
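The cap statistics defined above are easy to state exactly once data is aggregated by key. The sketch below computes them that way purely to illustrate the definition (the paper's contribution is estimating them from a small sample *without* this costly aggregation):

```python
from collections import Counter

def cap_statistic(elements, cap):
    """Cap statistic over unaggregated data: aggregate element
    frequencies by key, then sum each frequency capped at the given
    parameter. cap = 1 yields the distinct count; cap = infinity
    yields the plain sum of frequencies."""
    freq = Counter(elements)
    return sum(min(f, cap) for f in freq.values())

# distinct and sum as the two extreme special cases:
data = ["a", "a", "a", "b", "c"]
cap_statistic(data, 1)             # distinct keys: 3
cap_statistic(data, float("inf"))  # total frequency: 5
```

The state here is proportional to the number of distinct keys, which is exactly the cost the abstract's one-pass, sample-size-proportional framework avoids.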