Tree-Independent Dual-Tree Algorithms
Dual-tree algorithms are a widely used class of branch-and-bound algorithms.
Unfortunately, developing dual-tree algorithms for use with different trees and
problems is often complex and burdensome. We introduce a four-part logical
split: the tree, the traversal, the point-to-point base case, and the pruning
rule. We provide a meta-algorithm which allows development of dual-tree
algorithms in a tree-independent manner and easy extension to entirely new
types of trees. Representations are provided for five common algorithms; for
k-nearest neighbor search, this leads to a novel, tighter pruning bound. The
meta-algorithm also allows straightforward extensions to massively parallel
settings.
Comment: Accepted in ICML 2013.
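To make the four-part split concrete, below is a minimal Python sketch of the idea as described in the abstract: a depth-first dual-tree traversal written once against a generic tree, with the problem supplied only as a point-to-point base case and a node-to-node pruning (score) rule. The Node class, the dual_tree_dfs function, the splitting strategy, and the simple max-over-descendants pruning bound are all illustrative assumptions, not the paper's implementation (which, among other things, derives a tighter k-nearest-neighbor bound).

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Node:
    """Toy hyperball tree node; any space tree could stand in here."""
    def __init__(self, points):
        self.points = points
        self.center = tuple(sum(c) / len(points) for c in zip(*points))
        self.radius = max(dist(self.center, p) for p in points)
        self.children = []
        if len(points) > 2:  # naive median split on the first coordinate
            pts = sorted(points, key=lambda p: p[0])
            mid = len(pts) // 2
            self.children = [Node(pts[:mid]), Node(pts[mid:])]

def dual_tree_dfs(q_node, r_node, base_case, score):
    """Tree-independent depth-first dual-tree traversal."""
    if score(q_node, r_node) == float('inf'):
        return  # pruning rule says this node pair cannot contribute
    if not q_node.children and not r_node.children:
        for q in q_node.points:
            for r in r_node.points:
                base_case(q, r)  # point-to-point base case
        return
    for qc in (q_node.children or [q_node]):
        for rc in (r_node.children or [r_node]):
            dual_tree_dfs(qc, rc, base_case, score)

def nearest_neighbors(queries, references):
    """1-nearest-neighbor search expressed as a base case plus a score rule."""
    best = {q: (float('inf'), None) for q in queries}

    def base_case(q, r):
        d = dist(q, r)
        if d < best[q][0]:
            best[q] = (d, r)

    def score(qn, rn):
        # Lower bound on any query/reference distance for this node pair.
        lower = max(0.0, dist(qn.center, rn.center) - qn.radius - rn.radius)
        # Simple bound: worst current best distance among queries below qn
        # (weaker than the paper's tighter k-NN bound).
        bound = max(best[q][0] for q in qn.points)
        return float('inf') if lower > bound else lower

    dual_tree_dfs(Node(queries), Node(references), base_case, score)
    return best

queries = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
references = [(0.1, 0.2), (4.9, 5.1), (2.0, 2.0), (9.0, 9.0)]
print(nearest_neighbors(queries, references))
```

In this reading, swapping in a different tree type or traversal requires no change to the base case or pruning rule, which is the sense in which the meta-algorithm is tree-independent.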
Fast Algorithms and Efficient Statistics: N-point Correlation Functions
We present here a new algorithm for the fast computation of N-point
correlation functions in large astronomical data sets. The algorithm is based
on kd-trees decorated with cached sufficient statistics, allowing
orders-of-magnitude speed-ups over the naive non-tree-based implementation of
correlation functions. We further discuss the use of controlled approximations
within the computation, which allow for further acceleration. In
summary, our algorithm now makes it possible to compute exact, all-pairs,
measurements of the 2-, 3-, and 4-point correlation functions for cosmological
data sets like the Sloan Digital Sky Survey (SDSS; York et al. 2000) and the
next generation of Cosmic Microwave Background experiments (see Szapudi et al.
2000).
Comment: To appear in Proceedings of the MPA/MPE/ESO Conference "Mining the
Sky", July 31 - August 4, 2000, Garching, Germany.
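As a rough illustration of how cached sufficient statistics give these speed-ups, the sketch below counts point pairs within a separation r using a toy tree whose only cached statistic is the node's point count: a node pair is either excluded outright, included wholesale via a count product, or recursed into. The Node and two_point_count names, the splitting rule, and the single-separation query are illustrative assumptions; the sketch omits the controlled-approximation machinery mentioned above.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Node:
    def __init__(self, points):
        self.points = points
        self.count = len(points)              # cached sufficient statistic
        self.center = tuple(sum(c) / len(points) for c in zip(*points))
        self.radius = max(dist(self.center, p) for p in points)
        self.children = []
        if len(points) > 4:                   # naive split on the first coordinate
            pts = sorted(points, key=lambda p: p[0])
            mid = len(pts) // 2
            self.children = [Node(pts[:mid]), Node(pts[mid:])]

def two_point_count(a, b, r):
    """Count ordered pairs (p in a, q in b) with dist(p, q) <= r."""
    d = dist(a.center, b.center)
    if d - a.radius - b.radius > r:
        return 0                              # exclusion: no pair can qualify
    if d + a.radius + b.radius <= r:
        return a.count * b.count              # inclusion: every pair qualifies
    if not a.children and not b.children:     # base case: brute-force the leaves
        return sum(1 for p in a.points for q in b.points if dist(p, q) <= r)
    # Recurse on the node with the larger radius to tighten the bounds faster.
    if a.children and (a.radius >= b.radius or not b.children):
        return sum(two_point_count(c, b, r) for c in a.children)
    return sum(two_point_count(a, c, r) for c in b.children)

data = [(x * 0.37 % 1.0, x * 0.61 % 1.0) for x in range(200)]
root = Node(data)
print(two_point_count(root, root, 0.1))  # includes self-pairs when a == b
```

Higher-order (3- and 4-point) counts follow the same pattern with more tree nodes per recursion, which is where the cached counts and pruning matter most.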
Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets
This paper introduces new algorithms and data structures for quick counting
over machine learning datasets. We focus on the counting task of constructing
contingency tables, but our approach is also applicable to counting the number
of records in a dataset that match conjunctive queries. Subject to certain
assumptions, the costs of these operations can be shown to be independent of
the number of records in the dataset and loglinear in the number of non-zero
entries in the contingency table. We provide a very sparse data structure, the
ADtree, to minimize memory use. We provide analytical worst-case bounds for
this structure for several models of data distribution. We empirically
demonstrate that tractably-sized data structures can be produced for large
real-world datasets by (a) using a sparse tree structure that never allocates
memory for counts of zero, (b) never allocating memory for counts that can be
deduced from other counts, and (c) not bothering to expand the tree fully near
its leaves. We show how the ADtree can be used to accelerate Bayes net
structure finding algorithms, rule learning algorithms, and feature selection
algorithms, and we provide a number of empirical results comparing ADtree
methods against traditional direct counting approaches. We also discuss the
possible uses of ADtrees in other machine learning methods, and discuss the
merits of ADtrees in comparison with alternative representations such as
kd-trees, R-trees, and Frequent Sets.
Comment: See http://www.jair.org/ for any accompanying files.
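A hedged sketch of the sparse-counting idea, covering only point (a) above: a tree of counts keyed by conjunctions of (attribute, value) that never allocates a node for a count of zero, queried by walking down the matching branches. It deliberately omits the ADtree's most-common-value and leaf-expansion tricks from points (b) and (c), and the CountNode, build_count_tree, and count names are illustrative, not the paper's API.

```python
class CountNode:
    def __init__(self, rows, data, start_attr, n_attrs):
        self.count = len(rows)
        self.children = {}  # (attr, value) -> CountNode; zero counts never stored
        for attr in range(start_attr, n_attrs):
            by_value = {}
            for r in rows:
                by_value.setdefault(data[r][attr], []).append(r)
            for value, sub in by_value.items():
                self.children[(attr, value)] = CountNode(sub, data, attr + 1, n_attrs)

def build_count_tree(data):
    return CountNode(list(range(len(data))), data, 0, len(data[0]))

def count(node, query):
    """Count records matching a conjunctive query, e.g. {0: 'a', 1: 'x'}."""
    if not query:
        return node.count
    attr = min(query)                     # expand attributes in a fixed order
    child = node.children.get((attr, query[attr]))
    if child is None:
        return 0                          # a missing branch means a zero count
    rest = {a: v for a, v in query.items() if a != attr}
    return count(child, rest)

data = [('a', 'x', 1), ('a', 'y', 0), ('b', 'x', 1), ('a', 'x', 0)]
tree = build_count_tree(data)
print(count(tree, {0: 'a', 1: 'x'}))  # -> 2
```

Each cell of a contingency table is then one such conjunctive count; the full ADtree keeps the same query interface while using the deduced-count and leaf tricks to keep memory tractable.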