Practical Hash Functions for Similarity Estimation and Dimensionality Reduction
Hashing is a basic tool for dimensionality reduction employed in several
aspects of machine learning. However, the performance analysis is often carried
out under the abstract assumption that a truly random unit-cost hash function
is used, without concern for which concrete hash function is employed. The
concrete hash function may work fine on sufficiently random input. The question
is whether it can be trusted in the real world when faced with more structured
input.
In this paper we focus on two prominent applications of hashing, namely
similarity estimation with the one permutation hashing (OPH) scheme of Li et
al. [NIPS'12] and feature hashing (FH) of Weinberger et al. [ICML'09], both of
which have found numerous applications, e.g. in approximate near-neighbour
search with LSH and large-scale classification with SVM.
We consider the mixed tabulation hashing of Dahlgaard et al. [FOCS'15], which
was proved to perform like a truly random hash function in many applications,
including OPH. Here we first show improved concentration bounds for FH with
truly random hashing and then argue that mixed tabulation performs similarly
for sparse input. Our main contribution, however, is an experimental comparison
of different hashing schemes when used inside FH, OPH, and LSH.
We find that mixed tabulation hashing is almost as fast as the
multiply-mod-prime scheme ax+b mod p. Multiply-mod-prime is guaranteed to work
well on sufficiently random data, but we demonstrate that in the above
applications it can lead to bias and poor concentration on both real-world and
synthetic data. We also compare with the popular MurmurHash3, which has no
proven guarantees. Mixed tabulation and MurmurHash3 both perform similarly to
truly random hashing in our experiments. However, mixed tabulation is 40%
faster than MurmurHash3, and it has the proven guarantee of good performance on
all possible input.
Comment: Preliminary version of this paper will appear at NIPS 201
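The multiply-mod-prime scheme ax+b mod p and the feature hashing it feeds into can be sketched briefly. The sketch below is illustrative, not the paper's implementation: the choice of prime, the seeding, and all function names are assumptions, and the sign hash follows the standard signed variant of the hashing trick.

```python
import random

# Illustrative Mersenne prime larger than the key universe (a common choice;
# not necessarily the one used in the paper's experiments).
P = (1 << 61) - 1

def make_multiply_mod_prime(m, seed=0):
    """Return a hash x -> ((a*x + b) mod p) mod m with random odd-ish a, b."""
    rng = random.Random(seed)
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

def feature_hash(sparse_vec, m, h_bucket, h_sign):
    """Project a sparse vector {index: value} into m dimensions.

    h_bucket picks the output coordinate, h_sign (into {0, 1}) picks a
    random sign so collisions cancel in expectation.
    """
    out = [0.0] * m
    for idx, val in sparse_vec.items():
        sign = 1.0 if h_sign(idx) == 0 else -1.0
        out[h_bucket(idx)] += sign * val
    return out
```

Swapping `make_multiply_mod_prime` for a mixed-tabulation or MurmurHash3 wrapper with the same signature is exactly the comparison the experiments perform.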
Optimizing Ranking Measures for Compact Binary Code Learning
Hashing has proven a valuable tool for large-scale information retrieval.
Despite much success, existing hashing methods optimize over simple objectives
such as the reconstruction error or graph Laplacian related loss functions,
instead of the performance evaluation criteria of interest, namely multivariate
performance measures such as the AUC and NDCG. Here we present a general
framework (termed StructHash) that allows one to directly optimize multivariate
performance measures. The resulting optimization problem can involve
exponentially or infinitely many variables and constraints, which is more
challenging than standard structured output learning. To solve the StructHash
optimization problem, we use a combination of column generation and
cutting-plane techniques. We demonstrate the generality of StructHash by
applying it to ranking prediction and image retrieval, and show that it
outperforms a few state-of-the-art hashing methods.
Comment: Appearing in Proc. European Conference on Computer Vision 201
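The cutting-plane component of such solvers can be sketched generically. Everything below is a hedged stand-in, not the paper's StructHash algorithm: the separation oracle, the averaged-subgradient inner solver, and all names are illustrative assumptions about how a 1-slack-style cutting-plane loop is typically structured.

```python
import numpy as np

def refit(constraints, n_dims, C=1.0, steps=2000):
    """Approximately minimize 0.5*||w||^2 + C * max_i (loss_i - w.phi_i)_+
    over the cuts gathered so far, by averaged subgradient descent
    (an illustrative stand-in for a proper QP solver)."""
    w = np.zeros(n_dims)
    avg = np.zeros(n_dims)
    for t in range(steps):
        grad = w.copy()                      # gradient of the regularizer
        phi, loss = max(constraints, key=lambda c: c[1] - w @ c[0])
        if loss - w @ phi > 0:               # most violated cut is active
            grad -= C * phi
        w -= grad / (t + 1)                  # 1/t step suits strong convexity
        avg += w
    return avg / steps

def cutting_plane(find_most_violated, n_dims, C=1.0, tol=1e-3, max_iter=50):
    """Alternate between a separation oracle returning the most violated
    constraint (phi, loss) and re-solving the restricted problem."""
    constraints = []
    w = np.zeros(n_dims)
    for _ in range(max_iter):
        phi, loss = find_most_violated(w)
        worst_kept = max((l - w @ p for p, l in constraints), default=0.0)
        if loss - w @ phi <= worst_kept + tol:
            break                            # new cut is not meaningfully worse
        constraints.append((phi, loss))
        w = refit(constraints, n_dims, C)
    return w
```

In the StructHash setting the oracle would search an exponentially large space of rankings for the most violated one; here any callable with the same (phi, loss) interface will do.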