Vector Quantization (VQ) is an appealing model compression method for obtaining
compact models with little accuracy loss. While methods for obtaining better
codebooks and codes under a fixed clustering dimensionality have been extensively
studied, the optimization of the vectors themselves in favour of clustering
performance has not been carefully considered, especially via the reduction of
vector dimensionality.
This paper reports our recent progress on combining dimensionality
compression with vector quantization, proposing a Low-Rank Representation Vector
Quantization (LR$^2$VQ) method that outperforms previous VQ
algorithms across various tasks and architectures. LR$^2$VQ combines
low-rank representation with subvector clustering to construct a new kind of
building block that is directly optimized through end-to-end training over the
task loss.
Our proposed design introduces three hyper-parameters: the
number of clusters $k$, the size of subvectors $m$, and the clustering
dimensionality $\tilde{d}$. In our method, the compression ratio is
directly controlled by $m$, and the final accuracy is solely determined by
$\tilde{d}$. We recognize $\tilde{d}$ as a trade-off between low-rank
approximation error and clustering error, and carry out both theoretical
analysis and experimental observations that enable the estimation of a
proper $\tilde{d}$ before fine-tuning. With a proper $\tilde{d}$, we evaluate
LR$^2$VQ with ResNet-18/ResNet-50 on the ImageNet classification
dataset, achieving 2.8\%/1.0\% top-1 accuracy improvements over the current
state-of-the-art VQ-based compression algorithms at 43$\times$/31$\times$
compression factors.
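
As a rough illustration of the quantities named above, the sketch below shows how a weight matrix could be projected to clustering dimensionality $\tilde{d}$, split into subvectors of size $m$, and represented by indices into a codebook of $k$ entries, so that each $m$-dimensional subvector costs one $\log_2 k$-bit index. All function and variable names are hypothetical, and a truncated SVD plus plain Lloyd iterations stand in for the end-to-end trained low-rank representation and clustering of the actual method; this is a minimal sketch, not the authors' implementation.

```python
# Minimal sketch of the LR^2VQ idea (illustrative assumptions throughout;
# the real method learns the low-rank basis and codebook end-to-end over
# the task loss, whereas this toy uses SVD + k-means).
import numpy as np

def lr2vq_sketch(W, d_tilde, m, k, iters=20, seed=0):
    """Quantize W (n x d): low-rank projection to d_tilde, then k-means
    over subvectors of size m (requires d_tilde % m == 0)."""
    rng = np.random.default_rng(seed)
    # Low-rank representation: keep the top-d_tilde SVD basis.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    Z = U[:, :d_tilde] * S[:d_tilde]        # n x d_tilde low-rank codes
    B = Vt[:d_tilde]                        # d_tilde x d decoding basis
    # Subvector clustering: split each code into chunks of size m.
    sub = Z.reshape(-1, m)                  # (n * d_tilde / m) x m
    C = sub[rng.choice(len(sub), k, replace=False)]  # init k centroids
    for _ in range(iters):                  # plain Lloyd iterations
        d2 = ((sub[:, None, :] - C[None]) ** 2).sum(-1)
        idx = d2.argmin(1)
        for j in range(k):                  # update non-empty clusters
            if (idx == j).any():
                C[j] = sub[idx == j].mean(0)
    Z_hat = C[idx].reshape(Z.shape)         # quantized low-rank codes
    W_hat = Z_hat @ B                       # decode back to n x d
    # Storage cost: one log2(k)-bit index per m-dim subvector, so the
    # compression ratio scales with m (codebook/basis overhead aside).
    bits = len(sub) * np.log2(k)
    return W_hat, idx, C, bits

W = np.random.default_rng(1).normal(size=(256, 64)).astype(np.float32)
W_hat, idx, C, bits = lr2vq_sketch(W, d_tilde=32, m=4, k=256)
print("reconstruction MSE:", float(((W - W_hat) ** 2).mean()))
```

In this toy setting, raising $m$ shrinks the index storage (fewer, coarser subvectors) at the cost of harder clustering, while $\tilde{d}$ trades low-rank approximation error against clustering error, mirroring the trade-off discussed in the abstract.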