A Memory-Efficient Sketch Method for Estimating High Similarities in Streaming Sets
Estimating set similarity and detecting highly similar sets are fundamental
problems in areas such as databases, machine learning, and information
retrieval. MinHash is a well-known technique for approximating Jaccard
similarity of sets and has been successfully used for many applications such as
similarity search and large scale learning. Its two compressed versions, b-bit
MinHash and Odd Sketch, can significantly reduce the memory usage of the
original MinHash method, especially for estimating high similarities (i.e.,
similarities around 1). Although MinHash can be applied both to static sets
and to streaming sets, whose elements arrive one at a time and whose
cardinality is unknown or even unbounded, b-bit MinHash and Odd Sketch
unfortunately cannot handle streaming data. To solve this problem, we design
a memory-efficient sketch method, MaxLogHash, that accurately estimates Jaccard
similarities in streaming sets. Compared to MinHash, our method uses smaller
registers (fewer than 7 bits each) to build a compact
sketch for each set. We also provide a simple yet accurate estimator for
inferring Jaccard similarity from MaxLogHash sketches. In addition, we derive
formulas for bounding the estimation error and determine the smallest necessary
memory usage (i.e., the number of registers used for a MaxLogHash sketch) for
the desired accuracy. We conduct experiments on a variety of datasets, and
experimental results show that our method, MaxLogHash, is about 5 times more
memory-efficient than MinHash at the same accuracy and computational cost for
estimating high similarities.
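
To make the baseline concrete, here is a minimal Python sketch (ours, purely
illustrative) of the classic MinHash estimator that MaxLogHash compresses;
the sub-7-bit register layout and estimator of MaxLogHash itself are not
reproduced here.

    # Minimal MinHash sketch for Jaccard similarity: illustrates the baseline
    # only, not the MaxLogHash register layout or estimator.
    import random

    class MinHash:
        def __init__(self, num_registers=128, seed=1):
            rng = random.Random(seed)
            # One simulated hash function per register, via a random salt.
            self.salts = [rng.getrandbits(64) for _ in range(num_registers)]
            self.registers = [float("inf")] * num_registers

        def add(self, element):
            # Streaming update: each register keeps the minimum hash seen.
            for i, salt in enumerate(self.salts):
                h = hash((salt, element)) & 0xFFFFFFFFFFFFFFFF
                if h < self.registers[i]:
                    self.registers[i] = h

        def jaccard(self, other):
            # Pr[min-hash collision] equals the Jaccard similarity, so the
            # fraction of matching registers is an unbiased estimate.
            matches = sum(a == b for a, b in
                          zip(self.registers, other.registers))
            return matches / len(self.registers)

    a, b = MinHash(), MinHash()
    for x in range(1000):
        a.add(x)
    for x in range(100, 1100):
        b.add(x)
    print(a.jaccard(b))  # true Jaccard is 900/1100, about 0.82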
BagMinHash - Minwise Hashing Algorithm for Weighted Sets
Minwise hashing has become a standard tool to calculate signatures which
allow direct estimation of Jaccard similarities. While very efficient
algorithms already exist for the unweighted case, the calculation of signatures
for weighted sets is still a time consuming task. BagMinHash is a new algorithm
that can be orders of magnitude faster than the current state of the art
without
any particular restrictions or assumptions on weights or data dimensionality.
Applied to the special case of unweighted sets, it represents the first
efficient algorithm producing independent signature components. A series of
tests finally verifies the new algorithm and also reveals limitations of other
approaches published in the recent past.
Comment: 10 pages, KDD 2018
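
For reference, the quantity that weighted minwise schemes such as BagMinHash
estimate from signatures is the generalized (weighted) Jaccard similarity;
the short Python function below (our illustration, not the BagMinHash
algorithm itself) computes it exactly.

    # Exact generalized (weighted) Jaccard similarity: the target quantity
    # that weighted minwise signatures approximate.
    def weighted_jaccard(a, b):
        keys = set(a) | set(b)
        num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
        den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
        return num / den if den > 0 else 1.0

    print(weighted_jaccard({"x": 2.0, "y": 1.0}, {"x": 1.0, "z": 3.0}))
    # (min(2,1)+min(1,0)+min(0,3)) / (max(2,1)+max(1,0)+max(0,3)) = 1/6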
Consistent Weighted Sampling Made Fast, Small, and Easy
Document sketching using Jaccard similarity has proven an effective technique
for reducing near-duplicates in Web page and image search results, and has
also been useful in file system synchronization, compression, and learning
applications.
Min-wise sampling can be used to derive an unbiased estimator for Jaccard
similarity and taking a few hundred independent consistent samples leads to
compact sketches which provide good estimates of pairwise-similarity.
Subsequent works extended this technique to weighted sets and showed how to
produce samples with only a constant number of hash evaluations for any
element, independent of its weight. Another improvement, by Li et al., shows
how to speed up sketch computations by computing many (near-)independent
samples in one shot. Unfortunately, this latter improvement works only for
the unweighted case.
In this paper we give a simple, fast and accurate procedure which reduces
weighted sets to unweighted sets with small impact on the Jaccard similarity.
This leads to compact sketches consisting of many (near-)independent weighted
samples which can be computed with just a small constant number of hash
function evaluations per weighted element. The size of the produced unweighted
set is furthermore a tunable parameter which enables us to run the unweighted
scheme of Li et al. in the regime where it is most efficient. Even when the
sets involved are unweighted, our approach gives a simple solution to the
densification problem that other works attempted to address.
Unlike previously known schemes, ours does not result in an unbiased
estimator. However, we prove that the bias introduced by our reduction is
negligible and that the standard deviation is comparable to the unweighted
case. We also empirically evaluate our scheme and show that it gives
significant gains in computational efficiency, without any measurable loss in
accuracy.
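
A hedged sketch of what such a weighted-to-unweighted reduction can look
like: each element is replicated in proportion to its weight at a tunable
granularity (here a hypothetical parameter delta), after which any unweighted
MinHash scheme applies. The paper's actual rounding is randomized to control
bias; this Python fragment illustrates only the idea.

    # Illustrative weighted-to-unweighted reduction, NOT the paper's exact
    # procedure: replicate each element once per full quantum of weight.
    def to_unweighted(weighted, delta=0.1):
        unweighted = set()
        for elem, w in weighted.items():
            for i in range(int(w / delta)):
                unweighted.add((elem, i))
        return unweighted

    s = to_unweighted({"x": 0.35, "y": 1.0}, delta=0.1)
    print(sorted(s))  # 3 copies of "x", 10 copies of "y"

Shrinking delta grows the produced unweighted set, which is exactly the
tunable size parameter the abstract mentions.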
Fast Locality-Sensitive Hashing Frameworks for Approximate Near Neighbor Search
The Indyk-Motwani Locality-Sensitive Hashing (LSH) framework (STOC 1998) is a
general technique for constructing a data structure to answer approximate
near neighbor queries by using a distribution $\mathcal{H}$ over
locality-sensitive hash functions that partition space. For a collection of
$n$ points, after preprocessing, the query time is dominated by
$O(n^{\rho}\log n)$ evaluations of hash functions from $\mathcal{H}$ and
$O(n^{\rho})$ hash table lookups and distance computations, where
$\rho \in (0,1)$ is determined by the locality-sensitivity properties of
$\mathcal{H}$. It follows from a recent result by Dahlgaard et al. (FOCS
2017) that the number of locality-sensitive hash functions can be reduced to
$O(\log^{2} n)$, leaving the query time to be dominated by $O(n^{\rho})$
distance computations and $O(n^{\rho}\log n)$ additional word-RAM operations.
We state this result as a general framework and provide a simpler analysis
showing that the number of lookups and distance computations closely matches
the Indyk-Motwani framework, making it a viable replacement in practice.
Using ideas from another locality-sensitive hashing framework by Andoni and
Indyk (SODA 2006) we are able to reduce the number of additional word-RAM
operations to $O(n^{\rho})$.
Comment: 15 pages, 3 figures
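
As a concrete toy instance of the Indyk-Motwani framework, the Python sketch
below (ours, with illustrative rather than tuned parameters) uses bit
sampling over Hamming space as the locality-sensitive family: it builds L
hash tables keyed on k sampled coordinates and answers a query with one
lookup per table followed by distance computations.

    # Toy Indyk-Motwani LSH for Hamming space via bit sampling; k and L are
    # illustrative, not the tuned k = O(log n), L = O(n^rho) of the paper.
    import random

    def build_lsh(points, k=8, L=16, seed=1):
        rng = random.Random(seed)
        dim = len(points[0])
        # Each of the L tables hashes a point by k sampled bit positions.
        samples = [[rng.randrange(dim) for _ in range(k)] for _ in range(L)]
        tables = [{} for _ in range(L)]
        for idx, p in enumerate(points):
            for t, bits in enumerate(samples):
                key = tuple(p[j] for j in bits)
                tables[t].setdefault(key, []).append(idx)
        return samples, tables

    def query(q, points, samples, tables, radius):
        # Probe one bucket per table; distance computations dominate cost.
        for bits, table in zip(samples, tables):
            for idx in table.get(tuple(q[j] for j in bits), []):
                if sum(x != y for x, y in zip(q, points[idx])) <= radius:
                    return idx
        return None

    rng = random.Random(0)
    pts = [tuple(rng.randrange(2) for _ in range(32)) for _ in range(100)]
    samples, tables = build_lsh(pts)
    print(query(pts[7], pts, samples, tables, radius=0))  # finds point 7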