Efficient Discovery of Association Rules and Frequent Itemsets through Sampling with Tight Performance Guarantees
The tasks of extracting (top-$K$) Frequent Itemsets (FI's) and Association
Rules (AR's) are fundamental primitives in data mining and database
applications. Exact algorithms for these problems exist and are widely used,
but their running time is hindered by the need to scan the entire dataset,
possibly multiple times. High-quality approximations of FI's and AR's are
sufficient for most practical uses, and a number of recent works have explored
the application of sampling for the fast discovery of approximate solutions.
However, these works do not provide satisfactory performance
guarantees on the quality of the approximation, due to the difficulty of
bounding the probability of under- or over-sampling any one of an unknown
number of frequent itemsets. In this work we circumvent this issue by applying
the statistical concept of \emph{Vapnik-Chervonenkis (VC) dimension} to develop
a novel technique for providing tight bounds on the sample size that guarantees
approximation within user-specified parameters. Our technique applies both to
absolute and to relative approximations of (top-$K$) FI's and AR's. The
resulting sample size is linearly dependent on the VC-dimension of a range
space associated with the dataset to be mined. The main theoretical
contribution of this work is a proof that the VC-dimension of this range space
is upper bounded by an easy-to-compute characteristic quantity of the dataset
which we call the \emph{d-index}: the maximum integer $d$ such that the
dataset contains at least $d$ transactions of length at least $d$ such that no
one of them is a superset of or equal to another. We show that this bound is
strict for a large class of datasets.
Comment: 19 pages, 7 figures. A shorter version of this paper appeared in the
proceedings of ECML PKDD 201
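The d-index described above can be bounded with a quick h-index-style scan over the transaction lengths; a minimal sketch in Python, assuming set-valued transactions (the function name is illustrative, and the antichain requirement, that no transaction be a superset of or equal to another, is deliberately relaxed, so this computes an upper bound on the d-index, not the d-index itself):

```python
def d_index_upper_bound(transactions):
    """Largest d such that at least d transactions have length >= d,
    computed h-index style over the sorted transaction lengths.
    This relaxes the d-index's antichain condition (no transaction a
    superset of or equal to another), so it upper-bounds the d-index."""
    lengths = sorted((len(t) for t in transactions), reverse=True)
    d = 0
    for i, length in enumerate(lengths, start=1):
        if length >= i:
            d = i
        else:
            break
    return d

# Five transactions with lengths 5, 4, 3, 2, 1: three of them have
# length >= 3, but no four have length >= 4, so the bound is 3.
dataset = [{1, 2, 3, 4, 5}, {1, 2, 3, 4}, {1, 2, 3}, {2, 5}, {6}]
print(d_index_upper_bound(dataset))  # prints 3
```

Since the sample size in the abstract depends only on this one scalar, a single scan of transaction lengths suffices to size the sample before any mining starts.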
Steady state analysis of balanced-allocation routing
We compare the long-term, steady-state performance of a variant of the
standard Dynamic Alternative Routing (DAR) technique commonly used in telephone
and ATM networks, to the performance of a path-selection algorithm based on the
"balanced-allocation" principle; we refer to this new algorithm as the Balanced
Dynamic Alternative Routing (BDAR) algorithm. While DAR checks alternative
routes sequentially until available bandwidth is found, the BDAR algorithm
compares and chooses the best among a small number of alternatives.
We show that, at the expense of a minor increase in routing overhead, the
BDAR algorithm gives a substantial improvement in network performance, in terms
both of network congestion and of bandwidth requirement.
Comment: 22 pages, 1 figure
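The "balanced-allocation" principle underlying BDAR is the classic power-of-d-choices effect: sampling a few alternatives and taking the least loaded one dramatically reduces the maximum load compared to a single random choice. A hypothetical balls-into-bins sketch (not the paper's routing model; function names are illustrative):

```python
import random

def one_choice_max_load(n_bins, n_balls, rng):
    """Baseline: each ball goes to one uniformly random bin.
    Returns the maximum bin load."""
    loads = [0] * n_bins
    for _ in range(n_balls):
        loads[rng.randrange(n_bins)] += 1
    return max(loads)

def d_choice_max_load(n_bins, n_balls, d, rng):
    """Balanced allocation: probe d random bins and place each ball
    in the least loaded of them. Returns the maximum bin load."""
    loads = [0] * n_bins
    for _ in range(n_balls):
        probes = [rng.randrange(n_bins) for _ in range(d)]
        best = min(probes, key=lambda i: loads[i])
        loads[best] += 1
    return max(loads)

rng = random.Random(1)
print("one choice :", one_choice_max_load(1000, 1000, rng))
print("two choices:", d_choice_max_load(1000, 1000, 2, rng))
```

With n balls in n bins, the one-choice maximum load is typically on the order of log n / log log n, while even two choices bring it down to roughly log log n; this gap is the source of the congestion improvement claimed in the abstract, bought with only a small number of extra probes per request.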
An Adaptive Algorithm for Learning with Unknown Distribution Drift
We develop and analyze a general technique for learning with an unknown
distribution drift. Given a sequence of independent observations from the last
$T$ steps of a drifting distribution, our algorithm agnostically learns a
family of functions with respect to the current distribution at time $T$.
Unlike previous work, our technique does not require prior knowledge about the
magnitude of the drift. Instead, the algorithm adapts to the sample data.
Without explicitly estimating the drift, the algorithm learns a family of
functions with almost the same error as a learning algorithm that knows the
magnitude of the drift in advance. Furthermore, since our algorithm adapts to
the data, it can guarantee a better learning error than an algorithm that
relies on loose bounds on the drift.
Comment: Fixed typos and references. Updated conclusion
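One way to adapt to the data without ever estimating the drift, in the spirit described above, is to grow a look-back window until the older data statistically disagrees with the recent data. A hypothetical sketch for the simplest case of a drifting mean (this is not the paper's algorithm, and the tolerance constants are purely illustrative):

```python
import math

def adaptive_mean(samples, tol_scale=1.0):
    """Estimate the current mean of a drifting sequence (newest sample
    last) without knowing the drift magnitude. Hypothetical sketch:
    keep doubling the look-back window while the mean over the candidate
    window agrees with the mean over its most recent half, up to a
    1/sqrt(window)-style tolerance; stop at the first disagreement,
    which signals that the older half is stale."""
    n = len(samples)
    r = 1
    while 2 * r <= n:
        recent = samples[-r:]
        wide = samples[-2 * r:]
        tol = tol_scale * (math.sqrt(1.0 / r) + math.sqrt(1.0 / (2 * r)))
        if abs(sum(recent) / r - sum(wide) / (2 * r)) > tol:
            break
        r *= 2
    window = samples[-r:]
    return sum(window) / len(window)

# Abrupt drift: the window stops growing before absorbing stale data.
data = [0.0] * 64 + [1.0] * 16
print(adaptive_mean(data))  # prints 1.0
```

On stationary data the window keeps doubling and the estimate uses (almost) the whole sample, while after an abrupt change it retains only the post-change samples; the point, as in the abstract, is that no prior bound on the drift is supplied to the procedure.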
Nonparametric Density Estimation under Distribution Drift
We study nonparametric density estimation in non-stationary drift settings.
Given a sequence of independent samples taken from a distribution that
gradually changes in time, the goal is to compute the best estimate for the
current distribution. We prove tight minimax risk bounds for both discrete and
continuous smooth densities, where the minimum is over all possible estimates
and the maximum is over all possible distributions that satisfy the drift
constraints. Our technique handles a broad class of drift models, and
generalizes previous results on agnostic learning under drift.
Comment: Camera Ready version
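The bias-variance trade-off that such minimax bounds formalize can be made concrete for a discrete density. An illustrative sketch (not the paper's estimator): assuming a known per-step drift bound `alpha`, pick the window size that balances the roughly $\sqrt{k/n}$ statistical error of an $n$-sample histogram on $k$ symbols against the roughly $n\alpha$ bias accumulated by the drift over that window (constants and the selection rule are for illustration only):

```python
import math
from collections import Counter

def drift_histogram(samples, k, alpha):
    """Discrete density estimate over {0, ..., k-1} under drift (newest
    sample last). Illustrative sketch: with a per-step drift bound
    alpha, a window of n samples has statistical error on the order of
    sqrt(k/n) and drift bias on the order of n*alpha, so choose the
    window size minimizing their sum, then return the empirical
    histogram over that window."""
    total = len(samples)
    best_n = min(range(1, total + 1),
                 key=lambda n: math.sqrt(k / n) + n * alpha)
    counts = Counter(samples[-best_n:])
    return {x: counts[x] / best_n for x in range(k)}, best_n

# With no drift the whole sample is used; under heavy drift only the
# newest observations survive the bias term.
hist, n_used = drift_histogram([0, 1] * 8, k=2, alpha=0.0)
print(n_used, hist)  # n_used == 16, hist == {0: 0.5, 1: 0.5}
```

The minimax results in the abstract characterize the best achievable error of any such estimate over all drift sequences satisfying the constraints, so this window-balancing heuristic is one upper-bound construction, not the optimal procedure.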