13 research outputs found
Tree-Independent Dual-Tree Algorithms
Dual-tree algorithms are a widely used class of branch-and-bound algorithms.
Unfortunately, developing dual-tree algorithms for use with different trees and
problems is often complex and burdensome. We introduce a four-part logical
split: the tree, the traversal, the point-to-point base case, and the pruning
rule. We provide a meta-algorithm which allows development of dual-tree
algorithms in a tree-independent manner and easy extension to entirely new
types of trees. Representations are provided for five common algorithms; for
k-nearest neighbor search, this leads to a novel, tighter pruning bound. The
meta-algorithm also allows straightforward extensions to massively parallel
settings. Comment: accepted in ICML 201
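The four-part split described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual code: the tree type, the traversal, the point-to-point base case, and the pruning (Score) rule are each swappable components, which is the tree-independent idea. All names here are illustrative.

```python
# Minimal sketch of the four-part dual-tree split: tree, traversal,
# point-to-point BaseCase, and a Score rule that returns infinity
# to prune a node combination. Illustrative only.

class Node:
    """Toy tree node: leaves hold points, internal nodes hold children."""
    def __init__(self, points, children=None):
        self.points = points
        self.children = children or []

def dual_tree_dfs(q_node, r_node, base_case, score):
    """Generic dual depth-first traversal over a query tree and a
    reference tree; the tree, base_case, and score are all plug-ins."""
    if score(q_node, r_node) == float('inf'):
        return  # pruned: no pair under this node combination can matter
    if not q_node.children and not r_node.children:
        for q in q_node.points:
            for r in r_node.points:
                base_case(q, r)  # the point-to-point work
        return
    # Recurse into children (or keep the node itself if it is a leaf).
    for qc in (q_node.children or [q_node]):
        for rc in (r_node.children or [r_node]):
            dual_tree_dfs(qc, rc, base_case, score)
```

Plugging in a distance-tracking base case and a bound-based score gives nearest-neighbor search; different base case/score pairs give range search, kernel summation, and so on, without changing the traversal or the tree.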
Single-tree GMM training
Research area: Machine learning
Improving Dual-Encoder Training through Dynamic Indexes for Negative Mining
Dual encoder models are ubiquitous in modern classification and retrieval.
Crucial for training such dual encoders is an accurate estimation of gradients
from the partition function of the softmax over the large output space; this
requires finding negative targets that contribute most significantly ("hard
negatives"). Since dual encoder model parameters change during training, the
use of traditional static nearest neighbor indexes can be sub-optimal. These
static indexes (1) periodically require expensive re-building of the index,
which in turn requires (2) expensive re-encoding of all targets using updated
model parameters. This paper addresses both of these challenges. First, we
introduce an algorithm that uses a tree structure to approximate the softmax
with provable bounds and that dynamically maintains the tree. Second, we
approximate the effect of a gradient update on target encodings with an
efficient Nystrom low-rank approximation. In our empirical study on datasets
with over twenty million targets, our approach cuts error by half in relation
to oracle brute-force negative mining. Furthermore, our method surpasses prior
state-of-the-art while using 150x less accelerator memory. Comment: To appear at AISTATS 202
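The core of hard-negative mining for a dual encoder can be illustrated in a few lines. This sketch is not the paper's method (it omits the dynamic tree index and the Nystrom update approximation): it scores all targets by brute force as a stand-in for an index lookup, keeps the top-k non-matching targets as hard negatives, and estimates the softmax loss over the positive plus those negatives. All names are illustrative.

```python
# Hedged sketch of sampled-softmax training with hard negatives.
# Brute-force scoring here stands in for a (dynamic) nearest neighbor
# index over the target encodings; names are illustrative.

import numpy as np

def hard_negative_softmax_loss(q, targets, pos_idx, k=2):
    """NLL of the positive target under a softmax restricted to the
    positive and the k highest-scoring ("hard") negatives."""
    scores = targets @ q                           # inner-product relevance
    order = np.argsort(-scores)                    # stand-in for index lookup
    negs = [i for i in order if i != pos_idx][:k]  # top-k hard negatives
    logits = scores[[pos_idx] + negs]
    logits = logits - logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                       # loss for the positive
```

Because the target encodings drift as the model trains, a static index computed once grows stale; the paper's contribution is keeping this lookup accurate without full re-encoding and re-building.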
Faster Cover Trees
Abstract: The cover tree data structure speeds up exact nearest neighbor queries over arbitrary metric spaces. On standard benchmark datasets, we reduce the number of distance computations by 10-50%. On a large-scale bioinformatics dataset, we reduce the number of distance computations by 71%. On a large-scale image dataset, our parallel algorithm with 16 cores reduces tree construction time from 3.5 hours to 12 minutes.
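The pruning that drives these distance-computation savings can be shown with a generic metric-tree sketch. This is not the paper's cover tree implementation; it is the shared idea: skip an entire subtree when the triangle inequality guarantees no point inside it can beat the current best. All names are illustrative.

```python
# Generic metric-tree nearest-neighbor sketch (1-D for brevity) showing
# the triangle-inequality pruning that cover-tree queries rely on.
# Illustrative only; a real cover tree maintains per-level invariants.

import math

class BallNode:
    """Node covering all points within `radius` of `center`."""
    def __init__(self, center, radius, points=None, children=None):
        self.center = center
        self.radius = radius
        self.points = points or []
        self.children = children or []

def nn_search(q, node, best=math.inf):
    """Return the distance from q to its nearest point under `node`."""
    # Prune: nothing in this ball can be closer than `best`.
    if abs(q - node.center) - node.radius >= best:
        return best
    for p in node.points:
        best = min(best, abs(q - p))           # the distance computations
    # Visit nearer children first so `best` tightens early.
    for child in sorted(node.children, key=lambda c: abs(q - c.center)):
        best = nn_search(q, child, best)
    return best
```

Visiting children in order of proximity tightens the bound quickly, so later subtrees are pruned without computing any point distances; that ordering is where most of the 10-50% savings come from in practice.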