Quantized Feature Distillation for Network Quantization
Neural network quantization aims to accelerate and trim full-precision neural
network models by using low-bit approximations. Methods adopting the
quantization-aware training (QAT) paradigm have recently seen rapid growth,
but they are often conceptually complicated. This paper proposes a novel and highly
effective QAT method, quantized feature distillation (QFD). QFD first trains a
quantized (or binarized) representation as the teacher, then quantizes the
network using knowledge distillation (KD). Quantitative results show that QFD
is more flexible and effective (i.e., quantization friendly) than previous
quantization methods. QFD surpasses existing methods by a noticeable margin
not only on image classification but also on object detection, while being much
simpler. Furthermore, QFD quantizes ViT and Swin-Transformer on MS-COCO
detection and segmentation, which verifies its potential in real-world
deployment. To the best of our knowledge, this is the first time that vision
transformers have been quantized for object detection and image segmentation
tasks.
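
The abstract describes a two-stage recipe: first train a quantized (or binarized) feature representation as the teacher, then quantize the network with a distillation objective. A minimal PyTorch-style sketch of what such a feature-distillation objective could look like is given below; the names fake_quantize and qfd_style_loss, the uniform symmetric quantizer, the MSE feature term, and the alpha weighting are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def fake_quantize(x, num_bits=4):
        # Illustrative uniform symmetric fake quantizer; the straight-through
        # gradient estimator used in real QAT is omitted for brevity.
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.detach().abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax, qmax) * scale

    def qfd_style_loss(student_feat, teacher_feat, student_logits, labels, alpha=1.0):
        # Ordinary task loss plus a feature-distillation term that pulls the
        # student's intermediate features toward the (frozen) quantized
        # teacher features. alpha is an assumed trade-off weight.
        task_loss = F.cross_entropy(student_logits, labels)
        distill_loss = F.mse_loss(student_feat, teacher_feat.detach())
        return task_loss + alpha * distill_loss

In this sketch the teacher features would themselves come from a quantized forward pass, e.g. fake_quantize(teacher_backbone(x)), so the student learns to match a representation that already lives on the low-bit grid.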
Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks
The quantization of deep neural networks (QDNNs) has been actively studied
for deployment on edge devices. Recent studies employ the knowledge
distillation (KD) method to improve the performance of quantized networks. In
this study, we propose stochastic precision ensemble training for QDNNs (SPEQ).
SPEQ is a knowledge distillation training scheme; however, the teacher is
formed by sharing the model parameters of the student network. We obtain the
teacher's soft labels by stochastically changing the activation bit precision
at each layer of the forward-pass computation. The student model
is trained with these soft labels to reduce activation quantization noise.
Cosine similarity loss is employed instead of KL divergence for KD
training. Because the teacher model changes continually through random
bit-precision assignment, SPEQ exploits the effect of stochastic ensemble KD.
SPEQ outperforms existing quantization training methods on various tasks,
such as image classification, question answering, and transfer learning,
without the need for cumbersome teacher networks.
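
The mechanism described above, a teacher that shares the student's weights but runs with randomly drawn per-layer activation precisions and is distilled with a cosine-similarity loss, can be sketched roughly as follows. This is a hedged illustration, not the authors' code: speq_style_step, set_activation_bits, quant_layers, the bit_choices tuple, and the alpha weighting are hypothetical names assumed for the sake of the example.

    import random
    import torch
    import torch.nn.functional as F

    def cosine_distill_loss(student_logits, teacher_logits):
        # Cosine-similarity KD loss: 1 - cos(student, teacher), batch-averaged.
        return (1.0 - F.cosine_similarity(student_logits, teacher_logits.detach(), dim=1)).mean()

    def speq_style_step(model, x, labels, bit_choices=(2, 4, 8), alpha=1.0):
        # Teacher pass: same parameters, random activation bit width per layer.
        # set_activation_bits / quant_layers are hypothetical hooks the
        # quantized model is assumed to expose.
        with torch.no_grad():
            model.set_activation_bits([random.choice(bit_choices) for _ in model.quant_layers])
            teacher_logits = model(x)
        # Student pass at the target (lowest) deployment precision.
        model.set_activation_bits([min(bit_choices)] * len(model.quant_layers))
        student_logits = model(x)
        task_loss = F.cross_entropy(student_logits, labels)
        return task_loss + alpha * cosine_distill_loss(student_logits, teacher_logits)

Because the teacher's precision assignment is redrawn at every step, averaging over many training steps behaves like distilling from an ensemble of teachers at no extra parameter cost, which is the intuition behind the stochastic ensemble effect the abstract refers to.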