From Quantized DNNs to Quantizable DNNs
This paper proposes Quantizable DNNs, a special type of DNN that can
flexibly quantize its bit-width (denoted as `bit modes' hereafter) during
execution without further re-training. To optimize for all bit modes
simultaneously, a combinational loss over all bit modes is proposed, which
enforces consistent predictions from the lowest-bit mode up to the 32-bit
mode. This Consistency-based Loss can also be viewed as a form of
regularization during training. Because the outputs of matrix multiplication
in different bit modes follow different distributions, we introduce
Bit-Specific Batch Normalization to reduce conflicts among bit modes.
Experiments on CIFAR100 and ImageNet show that, compared to quantized DNNs,
Quantizable DNNs are not only far more flexible but also achieve even higher
classification accuracy. Ablation studies further verify that the
regularization induced by the Consistency-based Loss indeed improves the
model's generalization performance.
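
The abstract does not spell out the exact form of the combinational loss; one plausible reading is a per-mode cross-entropy combined with a consistency term that pulls each low-bit mode's predictions toward the 32-bit mode's predictions. The sketch below follows that assumption; the bit modes, the `consistency_weight` parameter, and the KL-based consistency term are illustrative choices, not necessarily the paper's formulation.

```python
# A minimal sketch of a consistency-style combinational loss, assuming the
# 32-bit mode serves as the reference and lower-bit modes are pulled toward
# its predictions. The weighting and exact terms are assumptions.
import torch
import torch.nn.functional as F

def combinational_loss(logits_per_mode, targets, consistency_weight=1.0):
    """logits_per_mode: dict mapping bit-width (e.g. 2, 4, 8, 32) to logits."""
    full_logits = logits_per_mode[32]
    # Standard cross-entropy for the full-precision (32-bit) mode.
    loss = F.cross_entropy(full_logits, targets)
    # Detach the 32-bit predictions so they act as a fixed reference.
    teacher = F.softmax(full_logits.detach(), dim=1)
    for bits, logits in logits_per_mode.items():
        if bits == 32:
            continue
        # Supervised term for the low-bit mode, plus a consistency term
        # that matches its prediction to the 32-bit prediction.
        loss = loss + F.cross_entropy(logits, targets)
        loss = loss + consistency_weight * F.kl_div(
            F.log_softmax(logits, dim=1), teacher, reduction="batchmean")
    return loss
```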
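Bit-Specific Batch Normalization is described only at a high level; a natural implementation keeps separate normalization statistics and affine parameters for each bit mode. The class below is a sketch under that assumption, with an illustrative name and bit-mode set rather than the authors' actual code.

```python
# A minimal sketch of bit-specific batch normalization, assuming one
# independent BatchNorm layer per supported bit mode.
import torch.nn as nn

class BitSpecificBatchNorm2d(nn.Module):
    def __init__(self, num_features, bit_modes=(2, 4, 8, 32)):
        super().__init__()
        # Separate running statistics and affine parameters per bit mode,
        # since activations produced under different bit-widths follow
        # different distributions.
        self.bns = nn.ModuleDict(
            {str(b): nn.BatchNorm2d(num_features) for b in bit_modes})

    def forward(self, x, bits):
        # Select the normalization branch matching the active bit mode.
        return self.bns[str(bits)](x)
```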