Scaling Active Search using Linear Similarity Functions
Active Search has become an increasingly useful tool in information retrieval
problems where the goal is to discover as many target elements as possible
using only limited label queries. With the advent of big data, there is a
growing emphasis on the scalability of such techniques to handle very large and
very complex datasets.
In this paper, we consider the problem of Active Search where we are given a
similarity function between data points. We look at an algorithm introduced by
Wang et al. [2013] for Active Search over graphs and propose crucial
modifications which allow it to scale significantly. Their approach selects
points by minimizing an energy function over the graph induced by the
similarity function on the data. Our modifications require the similarity
function to be a dot-product between feature vectors of data points, equivalent
to having a linear kernel for the adjacency matrix. With this, we are able to
scale tremendously: for n data points, the original algorithm runs in O(n^2)
time per iteration while ours runs in only O(nr + r^2) given r-dimensional
features.
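As a rough illustration of why the dot-product assumption helps (our own sketch, not the paper's code): when the similarity matrix is A = X Xᵀ for an n × r feature matrix X, any matrix-vector product A v can be computed as X (Xᵀ v) in O(nr) time, without ever forming the n × n matrix A.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 2000, 8                     # hypothetical sizes, for illustration only
X = rng.standard_normal((n, r))    # feature matrix; similarity A = X @ X.T
v = rng.standard_normal(n)

# Naive route: materialize A (n x n), then multiply -- O(n^2) time and memory.
naive = (X @ X.T) @ v

# Linear-kernel route: associate the other way -- O(nr) time, A never formed.
fast = X @ (X.T @ v)

assert np.allclose(naive, fast)
```

The same associativity trick applies to any quantity the algorithm needs that is expressible as products of A with vectors, which is what makes the per-iteration cost independent of n².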
We also describe a simple alternate approach using a weighted-neighbor
predictor which also scales well. In our experiments, we show that our method
is competitive with existing semi-supervised approaches. We also briefly
discuss conditions under which our algorithm performs well.

Comment: To be published as a conference paper at IJCAI 2017, 11 pages, 2
figures.
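The weighted-neighbor idea mentioned in the abstract can be sketched roughly as follows (a minimal illustration under the dot-product similarity assumption; `weighted_neighbor_scores` is a hypothetical helper, not the authors' implementation): each point is scored by a similarity-weighted average of the known labels.

```python
import numpy as np

def weighted_neighbor_scores(X, labeled_idx, labels):
    """Score each point as a similarity-weighted average of known labels.

    Similarity is the linear kernel x_i . x_j, matching the abstract's
    dot-product assumption. Illustrative sketch only.
    """
    sims = X @ X[labeled_idx].T        # (n, L): similarity to labeled points
    w = np.maximum(sims, 0.0)          # clip negatives so weights stay valid
    return (w @ labels) / (w.sum(axis=1) + 1e-12)

# Toy usage: two labeled points with opposite labels.
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
scores = weighted_neighbor_scores(X, [0, 1], np.array([1.0, 0.0]))
# Point 2 is far more similar to point 0, so its score lands near 1.
```

Because scoring only requires products against the r-dimensional feature vectors of the labeled set, this predictor also avoids any n × n computation.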