Image hashing is a principled approximate nearest neighbor approach for finding items similar to a query in a large collection of images. Hashing aims to learn a function that maps an image to a compact binary vector (hash code). For optimal retrieval performance, it is important to produce balanced hash codes with low quantization error, bridging the gap between the continuous relaxation used at the learning stage and the discrete quantization applied at inference. However, in existing deep supervised hashing methods, code balance and low quantization error are difficult to achieve and typically require several loss terms. We argue that this is because the quantization schemes in these methods are heuristically constructed and not effective at achieving these objectives. This paper considers an alternative approach to enforcing the quantization constraints. The task of learning balanced codes with low quantization error is
re-formulated as matching the learned distribution of the continuous codes to a
pre-defined discrete uniform distribution. This is equivalent to minimizing the distance between the two distributions. We then propose a computationally
efficient distributional distance by leveraging the discrete property of the
hash functions. This distributional distance is a valid distance and enjoys
lower time and sample complexity. The proposed single-loss quantization objective can be integrated into any existing supervised hashing method to improve code balance and reduce quantization error. Experiments confirm that the proposed approach substantially improves the performance of several
representative hashing methods.
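As a concrete illustration of the distribution-matching formulation, the sketch below implements a simple single-loss quantization term: per bit, the empirical distribution of the continuous codes is matched to a uniform target over {-1, +1} via the closed-form one-dimensional Wasserstein-1 distance (sorted values compared with a half-negative, half-positive target). This is only a minimal instance of the general idea, not the paper's proposed distributional distance; the names `quantization_matching_loss`, `similarity_loss`, and `lambda_q` are illustrative.

```python
import torch


def quantization_matching_loss(h: torch.Tensor) -> torch.Tensor:
    """Match each bit's empirical distribution to the uniform law over {-1, +1}.

    h: (batch, K) continuous codes, e.g. tanh outputs in (-1, 1).
    Sorting both samples and averaging absolute differences gives the
    one-dimensional Wasserstein-1 distance per bit; it is zero only when each
    bit is both binary (values at +/-1) and balanced across the batch.
    """
    n, k = h.shape
    # Target sample: half the entries -1, half +1 (already sorted ascending).
    target = torch.cat([-torch.ones(n // 2, k), torch.ones(n - n // 2, k)], dim=0).to(h)
    h_sorted, _ = torch.sort(h, dim=0)  # empirical quantiles of each bit
    return (h_sorted - target).abs().mean()


# Hypothetical usage alongside any supervised hashing objective:
#   total_loss = similarity_loss + lambda_q * quantization_matching_loss(codes)
```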