Leveraging Discarded Samples for Tighter Estimation of Multiple-Set Aggregates
Many datasets such as market basket data, text or hypertext documents, and
sensor observations recorded in different locations or time periods, are
modeled as a collection of sets over a ground set of keys. We are interested in
basic aggregates such as the weight or selectivity of keys that satisfy some
selection predicate defined over keys' attributes and membership in particular
sets. This general formulation covers aggregates such as the Jaccard
coefficient, Hamming distance, and association rules.
On massive data sets, exact computation can be inefficient or infeasible.
Sketches based on coordinated random samples are classic summaries that support
approximate query processing.
Queries are resolved by generating a sketch (sample) of the union of the sets
used in the predicate from the sketches of these sets, and then applying an
estimator to this union sketch.
We derive novel tighter (unbiased) estimators that leverage sampled keys that
are present in the union of applicable sketches but excluded from the union
sketch. We establish analytically that our estimators dominate estimators
applied to the union sketch for {\em all queries and data sets}. Empirical
evaluation on synthetic and real data reveals that on typical applications we
can expect a 25% to 4-fold reduction in estimation error.
Comment: 16 pages.
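As a point of reference, the minimal Python sketch below (all names and
parameters are illustrative assumptions) implements the classic baseline this
work tightens: coordinated bottom-k sketches built from a shared hash, a union
sketch derived from the per-set sketches, and the standard k-minimum-values
Jaccard estimator.

    import hashlib

    def rank(key, salt="shared-seed"):
        # A shared hash coordinates the samples: every set assigns
        # the same rank to the same key.
        h = hashlib.sha256((salt + str(key)).encode()).hexdigest()
        return int(h, 16) / 2**256  # uniform in [0, 1)

    def bottom_k(keys, k):
        # Bottom-k sketch: the k keys of smallest coordinated rank.
        return set(sorted(keys, key=rank)[:k])

    def union_sketch(sk_a, sk_b, k):
        # A sketch of the union is computable from the per-set sketches.
        return set(sorted(sk_a | sk_b, key=rank)[:k])

    def jaccard_estimate(sk_a, sk_b, k):
        # Union-sketch estimator: among the k smallest ranks of the
        # union, count the fraction of keys present in both sketches.
        u = union_sketch(sk_a, sk_b, k)
        return sum(1 for key in u if key in sk_a and key in sk_b) / len(u)

    a, b = set(range(8000)), set(range(4000, 12000))
    k = 256
    print(jaccard_estimate(bottom_k(a, k), bottom_k(b, k), k))  # true Jaccard = 1/3

Note that this baseline simply discards sampled keys that fall outside the
union sketch; the tighter estimators of the paper put exactly those keys to
work.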
What you can do with Coordinated Samples
Sample coordination, where similar instances have similar samples, was
proposed by statisticians four decades ago as a way to maximize overlap in
repeated surveys. Coordinated sampling has since been used for summarizing
massive data sets.
The usefulness of a sampling scheme hinges on the scope and accuracy within
which queries posed over the original data can be answered from the sample. We
aim here to gain a fundamental understanding of the limits and potential of
coordination. Our main result is a precise characterization, in terms of simple
properties of the estimated function, of queries for which estimators with
desirable properties exist. We consider unbiasedness, nonnegativity, finite
variance, and bounded estimates.
Since in general no single estimator can minimize variance simultaneously for
all data, we propose {\em variance competitiveness}, which means that the
expectation of the squared estimate on any data is not too far from the
minimum possible for that data. Surprisingly perhaps, we show how to
construct, for any function for which an unbiased nonnegative estimator exists,
a variance competitive estimator.
Comment: 4 figures, 21 pages. Extended abstract appeared in RANDOM 2013.
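To make the coordination idea concrete, here is a toy Python comparison
(instance contents and parameters are illustrative): a shared hash plays the
role of the statisticians' permanent random numbers, and two similar instances
end up with far more overlapping samples under coordination than under
independent sampling.

    import hashlib, random

    def shared_rank(key):
        # One shared hash value per key, reused by every instance.
        h = hashlib.sha256(str(key).encode()).hexdigest()
        return int(h, 16) / 2**256  # uniform in [0, 1)

    def coordinated_sample(keys, p):
        # A key is sampled iff its shared rank is below p, so any
        # two instances containing the key agree on its membership.
        return {k for k in keys if shared_rank(k) < p}

    def independent_sample(keys, p, rng):
        return {k for k in keys if rng.random() < p}

    inst1 = set(range(10000))
    inst2 = set(range(500, 10500))  # similar instance: 95% shared keys
    p, rng = 0.1, random.Random(7)
    c1, c2 = coordinated_sample(inst1, p), coordinated_sample(inst2, p)
    i1, i2 = independent_sample(inst1, p, rng), independent_sample(inst2, p, rng)
    print(len(c1 & c2), len(i1 & i2))  # roughly 950 versus 95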
Get the Most out of Your Sample: Optimal Unbiased Estimators using Partial Information
Random sampling is an essential tool in the processing and transmission of
data. It is used to summarize data that is too large to store or manipulate
and to meet resource constraints on bandwidth or battery power. Estimators that are applied
to the sample facilitate fast approximate processing of queries posed over the
original data and the value of the sample hinges on the quality of these
estimators.
Our work targets data sets such as request and traffic logs and sensor
measurements, where data is repeatedly collected over multiple {\em instances}:
time periods, locations, or snapshots.
We are interested in queries that span multiple instances, such as distinct
counts and distance measures over selected records. These queries are used for
applications ranging from planning to anomaly and change detection.
Unbiased low-variance estimators are particularly effective as the relative
error decreases with the number of selected record keys.
The Horvitz-Thompson estimator, known to minimize variance for sampling with
"all or nothing" outcomes (which reveals exacts value or no information on
estimated quantity), is not optimal for multi-instance operations for which an
outcome may provide partial information.
We present a general principled methodology for the derivation of (Pareto)
optimal unbiased estimators over sampled instances and aim to understand its
potential. We demonstrate significant improvement in estimate accuracy of
fundamental queries for common sampling schemes.
Comment: This is a full version of a PODS 2011 paper.
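For concreteness, here is a minimal Python sketch of the Horvitz-Thompson
baseline under Poisson PPS sampling (the dataset, threshold tau, and predicate
are illustrative assumptions): each sampled record is weighted by the inverse
of its inclusion probability, which is unbiased but uses nothing beyond
sampled/not-sampled.

    import random

    def pps_sample(data, tau, rng):
        # Poisson PPS: include a record of weight w with probability
        # min(1, w / tau), independently across records.
        return [(k, w) for k, w in data.items()
                if rng.random() < min(1.0, w / tau)]

    def ht_estimate(sample, tau, predicate):
        # Horvitz-Thompson: inverse-probability weighting yields an
        # unbiased estimate of the selected records' total weight.
        return sum(w / min(1.0, w / tau) for k, w in sample if predicate(k))

    data = {k: (k % 10) + 1 for k in range(10000)}  # weights 1..10
    tau, rng = 20.0, random.Random(0)
    smp = pps_sample(data, tau, rng)
    est = ht_estimate(smp, tau, lambda k: k % 2 == 0)
    true = sum(w for k, w in data.items() if k % 2 == 0)
    print(est, true)  # estimate is close to the true selected weight

The paper's Pareto-optimal estimators improve on this weighting when a query
spans several coordinated instances and a sampling outcome pins down only part
of the estimated quantity.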
Estimation for Monotone Sampling: Competitiveness and Customization
Random samples are lossy summaries which allow queries posed over the data to
be approximated by applying an appropriate estimator to the sample. The
effectiveness of sampling, however, hinges on estimator selection. The choice
of estimators is subject to global requirements, such as unbiasedness and
range restrictions on the estimate value, and ideally, we seek estimators that
are both efficient to derive and apply and {\em admissible} (not dominated, in
terms of variance, by other estimators). Nevertheless, for a given data domain,
sampling scheme, and query, there are many admissible estimators. We study the
choice of admissible nonnegative and unbiased estimators for monotone sampling
schemes. Monotone sampling schemes are implicit in many applications of massive
data set analysis. Our main contribution is general derivations of admissible
estimators with desirable properties. We present a construction of {\em
order-optimal} estimators, which minimize variance according to {\em any}
specified priorities over the data domain. Order-optimality allows us to
customize the derivation to common patterns that we can learn or observe in the
data. When we prioritize lower values (e.g., more similar data sets when
estimating difference), we obtain the L* estimator, which is the unique
monotone admissible estimator. We show that the L* estimator is
4-competitive and dominates the classic Horvitz-Thompson estimator. These
properties make the L* estimator a natural default choice. We also present
the U* estimator, which prioritizes large values (e.g., less similar data
sets). Our estimator constructions are easy to apply and possess desirable
properties, allowing us to make the most of our summarized data.
Comment: 28 pages. Improved write-up; presentation in the context of the more
general monotone sampling formulation (instead of coordinated sampling).
Bounds on the universal ratio were removed to make the paper more focused,
since they are mainly of theoretical interest.
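The toy Python example below (my construction for illustration, not the
paper's formulation) shows the shape of a monotone sampling scheme: the
outcome is a function of a single uniform seed, and lowering the seed can only
add information. The Horvitz-Thompson estimator ignores the bound carried by
an unrevealed outcome, which is the kind of slack that customized estimators
such as L* can exploit.

    import random

    def sample(v, u):
        # Monotone scheme: a value v in [0, 1] is revealed iff the
        # seed u falls below v; otherwise the outcome still carries
        # the bound v <= u. A smaller seed never removes information.
        return v if u < v else None

    def ht_estimate(outcome):
        # Horvitz-Thompson for f(v) = v: weight a revealed value by
        # 1 / Pr[revealed] = 1 / v, i.e. estimate v * (1 / v) = 1;
        # an unrevealed outcome is estimated as 0, its bound unused.
        return 1.0 if outcome is not None else 0.0

    rng, v = random.Random(42), 0.3
    est = [ht_estimate(sample(v, rng.random())) for _ in range(100000)]
    print(sum(est) / len(est))  # close to v = 0.3 (unbiased)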
Sketch-based Influence Maximization and Computation: Scaling up with Guarantees
Propagation of contagion through networks is a fundamental process. It is
used to model the spread of information, influence, or a viral infection.
Diffusion patterns can be specified by a probabilistic model, such as
Independent Cascade (IC), or captured by a set of representative traces.
Basic computational problems in the study of diffusion are influence queries
(determining the potency of a specified seed set of nodes) and Influence
Maximization (identifying the most influential seed set of a given size).
Answering each influence query involves many edge traversals, and does not
scale when there are many queries on very large graphs. The gold standard for
Influence Maximization is the greedy algorithm, which iteratively adds to the
seed set a node maximizing the marginal gain in influence. Greedy has a
guaranteed approximation ratio of at least (1-1/e) and actually produces a
sequence of nodes, with each prefix having an approximation guarantee with respect
to the same-size optimum. Since Greedy does not scale well beyond a few million
edges, for larger inputs one must currently use either heuristics or
alternative algorithms designed for a pre-specified small seed set size.
We develop a novel sketch-based design for influence computation. Our greedy
Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with
billions of edges, with one to two orders of magnitude speedup over the best
greedy methods. It still has a guaranteed approximation ratio, and in practice
its quality nearly matches that of exact greedy. We also present influence
oracles, which use linear-time preprocessing to generate a small sketch for
each node, allowing the influence of any seed set to be answered quickly from
the sketches of its nodes.
Comment: 10 pages, 5 figures. Appeared at the 23rd Conference on Information
and Knowledge Management (CIKM 2014) in Shanghai, China.
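To ground the baseline that SKIM accelerates, here is a small Python sketch of
exact greedy over representative traces (the trace data and names are
illustrative): the influence of a seed set is its total reachability cover
across the traces, and each step adds the node of maximum marginal gain.

    def greedy_influence(traces, k):
        # Each trace maps a node to the set of nodes it reaches in
        # that simulated cascade (including itself).
        nodes = set().union(*(trace.keys() for trace in traces))
        covered = [set() for _ in traces]  # per-trace cover so far
        seeds = []
        for _ in range(k):
            def gain(v):
                # Marginal gain: newly covered nodes, summed over traces.
                return sum(len(trace[v] - cov)
                           for trace, cov in zip(traces, covered)
                           if v in trace)
            best = max(nodes - set(seeds), key=gain)
            seeds.append(best)
            for trace, cov in zip(traces, covered):
                if best in trace:
                    cov |= trace[best]
        return seeds

    traces = [
        {0: {0, 1, 2}, 1: {1, 2}, 2: {2}, 3: {3, 4}, 4: {4}},
        {0: {0, 1},    1: {1},    2: {2}, 3: {3},    4: {4}},
    ]
    print(greedy_influence(traces, 2))  # picks 0 first, then 3

SKIM replaces the full per-node coverage sets in this loop with small
reachability sketches, which is what lets essentially the same greedy outer
loop run on graphs with billions of edges.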