Stream Sampling for Frequency Cap Statistics
Unaggregated data, in streamed or distributed form, is prevalent and comes
from diverse application domains, including interactions of users with web
services and IP traffic. Data elements have {\em keys} (cookies, users,
queries) and elements with different keys interleave. Analytics on such data
typically utilizes statistics stated in terms of the frequencies of keys. The
two most common statistics are {\em distinct}, which is the number of active
keys in a specified segment, and {\em sum}, which is the sum of the frequencies
of keys in the segment. Both are special cases of {\em cap} statistics, defined
as the sum of frequencies {\em capped} by a parameter, which are popular in
online advertising platforms. Aggregation by key, however, is costly, requiring
state proportional to the number of distinct keys, and therefore we are
interested in estimating these statistics or more generally, sampling the data,
without aggregation. We present a sampling framework for unaggregated data that
uses a single pass (for streams) or two passes (for distributed data) and state
proportional to the desired sample size. Our design provides the first
effective solution for general frequency cap statistics. Our capped
samples provide estimates with tight statistical guarantees for cap statistics,
and nonnegative unbiased estimates of {\em any} monotone
non-decreasing frequency statistic. An added benefit of our unified design is
facilitating {\em multi-objective samples}, which provide estimates with
statistical guarantees for a specified set of different statistics, using a
single, smaller sample.
Comment: 21 pages, 4 figures, preliminary version will appear in KDD 201
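As a concrete illustration of the statistic itself (not the paper's sampling framework, which is designed to avoid exactly this aggregation step), here is a minimal Python sketch of the exact, aggregated computation of a cap statistic; it shows that {\em distinct} and {\em sum} are the two extreme cases of the cap parameter. The function name is illustrative:

```python
from collections import Counter

def cap_statistic(stream, cap):
    """Sum of per-key frequencies, each capped at `cap`.

    Aggregates by key first -- the costly step (state proportional
    to the number of distinct keys) that the paper's sampling
    framework avoids.
    """
    freq = Counter(stream)
    return sum(min(f, cap) for f in freq.values())

stream = ["a", "b", "a", "c", "a", "b"]  # frequencies: a=3, b=2, c=1
print(cap_statistic(stream, 1))      # distinct: cap=1 counts each key once -> 3
print(cap_statistic(stream, 10**9))  # sum: a cap above all frequencies -> 6
print(cap_statistic(stream, 2))      # general cap: min(3,2)+min(2,2)+min(1,2) -> 5
```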
Near optimality of the priority sampling procedure
Electronic Colloquium on Computational Complexity, Report No. 1 (2005)
Based on experimental results, N. Duffield, C. Lund and M. Thorup [DLT2] conjectured that the variance of their highly successful priority sampling procedure is not larger than the variance of the threshold sampling procedure with sample size one smaller. The conjecture's significance is that the latter procedure is provably optimal among all off-line sampling procedures. Here we prove this conjecture. In particular, our result gives an affirmative answer to the conjecture of N. Alon, N. Duffield, C. Lund and M. Thorup [ADLT], which states that the standard deviation of the subset sum estimator obtained from $k$ priority samples is upper bounded by $W/\sqrt{k-1}$, where $W$ is the actual subset sum.
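A minimal Python sketch of priority sampling and its subset-sum estimator, assuming the standard formulation from Duffield, Lund and Thorup: each item gets priority $q_i = w_i/u_i$ with $u_i$ uniform on $(0,1)$, the $k$ highest-priority items are kept, and each kept item's weight estimate is $\max(w_i, \tau)$ where $\tau$ is the $(k{+}1)$-st highest priority. Function names are illustrative:

```python
import random

def priority_sample(weights, k, rng=random):
    """Draw a priority sample of size k; return {index: weight estimate}.

    Priorities are q_i = w_i / u_i, u_i ~ Uniform(0, 1). The estimate
    for each sampled item is max(w_i, tau), where tau is the (k+1)-st
    highest priority; unsampled items implicitly estimate to 0.
    """
    prioritized = sorted(
        ((w / rng.random(), i, w) for i, w in enumerate(weights)),
        reverse=True,
    )
    if len(weights) <= k:           # everything fits: estimates are exact
        return {i: float(w) for _, i, w in prioritized}
    tau = prioritized[k][0]         # (k+1)-st highest priority
    return {i: max(float(w), tau) for _, i, w in prioritized[:k]}

def subset_sum_estimate(sample, subset):
    """Unbiased estimate of the total weight of the items in `subset`."""
    return sum(est for i, est in sample.items() if i in subset)

# Unbiasedness: averaging many independent samples recovers the true sum.
weights = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # true total: 55
trials = 20000
mean = sum(
    subset_sum_estimate(priority_sample(weights, 5), set(range(10)))
    for _ in range(trials)
) / trials
print(round(mean, 1))  # close to 55
```

Per the bound discussed above, a single size-$k$ sample here has standard deviation at most $55/\sqrt{k-1} = 27.5$ for $k = 5$, so the average over 20,000 trials concentrates tightly around 55.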