Oscillation-free Quantization for Low-bit Vision Transformers
Weight oscillation is an undesirable side effect of quantization-aware
training, in which quantized weights frequently jump between two quantized
levels, resulting in training instability and a sub-optimal final model. We
discover that the learnable scaling factor, a widely used setting in
quantization, aggravates weight oscillation. In this study, we
investigate the connection between the learnable scaling factor and quantized
weight oscillation and use ViT as a case driver to illustrate the findings and
remedies. In addition, we also found that the interdependence between quantized
weights in and of a self-attention layer makes
ViT vulnerable to oscillation. We, therefore, propose three techniques
accordingly: statistical weight quantization (StatsQ) to improve
quantization robustness compared to the prevalent learnable-scale-based method;
confidence-guided annealing (CGA) that freezes the weights with high
confidence and calms the oscillating weights; and query-key
reparameterization (QKR) to resolve the
query-key intertwined oscillation and mitigate the resulting gradient
misestimation. Extensive experiments demonstrate that these proposed techniques
successfully abate weight oscillation and consistently achieve substantial
accuracy improvement on ImageNet. Specifically, our 2-bit DeiT-T/DeiT-S
algorithms outperform the previous state-of-the-art by 9.8% and 7.7%,
respectively. Code and models are available at: https://github.com/nbasyl/OFQ.
Comment: Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023.
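To make the oscillation phenomenon concrete, below is a minimal, self-contained PyTorch sketch. It is not the paper's StatsQ/CGA/QKR implementation; the simplified straight-through quantizer, the dummy regression objective, and all hyperparameters are illustrative assumptions. It trains 2-bit weights with a learnable scaling factor and measures how often latent weights still flip between adjacent quantized levels late in training.

```python
# Toy illustration of weight oscillation under a learnable scaling factor.
# The simplified straight-through fake-quantizer is an assumption, not the paper's method.
import torch

def fake_quant(w, scale, n_bits=2):
    qmin, qmax = -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1
    v = w / scale
    v = v + (torch.round(v) - v).detach()          # straight-through rounding
    return torch.clamp(v, qmin, qmax) * scale

torch.manual_seed(0)
w = torch.randn(256, requires_grad=True)           # latent full-precision weights
scale = torch.tensor(0.7, requires_grad=True)      # learnable scaling factor
opt = torch.optim.SGD([w, scale], lr=0.01)
target = torch.randn(256)                          # dummy regression targets

flips, checks, prev = 0.0, 0, None
for step in range(300):
    loss = ((fake_quant(w, scale) - target) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
    # integer level (within the 2-bit range [-2, 1]) currently assigned to each weight
    levels = torch.clamp(torch.round(w.detach() / scale.detach()), -2, 1)
    if prev is not None and step >= 150:           # only inspect the second half of training
        flips += (levels != prev).float().mean().item()
        checks += 1
    prev = levels
print(f"average late-training flip rate: {flips / checks:.1%}")
```

In this toy setup, latent weights whose targets fall between two quantized levels end up hovering at a rounding boundary, so the late-training flip rate stays above zero; that persistent flipping is the oscillation the abstract attributes to the learnable scale.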
Efficient Quantization-aware Training with Adaptive Coreset Selection
The expanding model size and computation of deep neural networks (DNNs) have
increased the demand for efficient model deployment methods. Quantization-aware
training (QAT) is a representative model compression method to leverage
redundancy in weights and activations. However, most existing QAT methods
require end-to-end training on the entire dataset, which suffers from long
training time and high energy costs. Coreset selection, which aims to improve data
efficiency by exploiting the redundancy of training data, has also been widely used
for efficient training. In this work, we take a new angle and use coreset
selection to improve the training efficiency of quantization-aware
training. Based on the characteristics of QAT, we propose two metrics: error
vector score and disagreement score, to quantify the importance of each sample
during training. Guided by these two importance metrics, we propose a
quantization-aware adaptive coreset selection (ACS) method to select the data
for the current training epoch. We evaluate our method on various networks
(ResNet-18, MobileNetV2) and datasets (CIFAR-100, ImageNet-1K) under different
quantization settings. Compared with previous coreset selection methods, our
method significantly improves QAT performance with different dataset fractions.
Our method achieves an accuracy of 68.39% for 4-bit quantized ResNet-18 on
the ImageNet-1K dataset with only a 10% subset, an absolute gain of 4.24%
over the baseline.
Comment: Code: https://github.com/HuangOwen/QAT-AC
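As a rough illustration of the two importance metrics, here is a hedged PyTorch sketch: the error vector score is taken as the norm of the quantized model's softmax prediction minus the one-hot label, the disagreement score as the KL divergence between the quantized and full-precision predictions, and the two are simply added. These exact definitions, the mixing rule, and the function names are assumptions, not the authors' implementation.

```python
# Assumed scoring and selection sketch for QAT coreset selection.
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_batch(quant_model, fp_model, x, y, num_classes):
    p_q = F.softmax(quant_model(x), dim=1)         # quantized model's predictions
    p_fp = F.softmax(fp_model(x), dim=1)           # full-precision model's predictions
    one_hot = F.one_hot(y, num_classes).float()
    error_vector_score = (p_q - one_hot).norm(dim=1)
    # KL(p_fp || p_q) as a per-sample disagreement measure (an assumption)
    disagreement_score = F.kl_div(p_q.log(), p_fp, reduction="none").sum(dim=1)
    return error_vector_score + disagreement_score  # simple additive mix (assumption)

def select_coreset(scores, fraction=0.1):
    # keep the highest-scoring fraction of samples for the current epoch
    k = max(1, int(fraction * scores.numel()))
    return torch.topk(scores, k).indices

if __name__ == "__main__":
    # tiny smoke test with hypothetical stand-in models, for illustration only
    quant_model = torch.nn.Linear(32, 10)          # stands in for the quantized network
    fp_model = torch.nn.Linear(32, 10)             # stands in for the full-precision network
    x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
    idx = select_coreset(score_batch(quant_model, fp_model, x, y, num_classes=10))
    print(idx.shape)                               # 6 of 64 samples selected
```

In practice the scores would be recomputed each epoch over the training set and the selected indices wrapped in, e.g., torch.utils.data.Subset to build that epoch's loader, matching the adaptive per-epoch selection the abstract describes.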
1-Phenyl-3-(2,4,6-trimethoxyphenyl)prop-2-en-1-one
In the title compound, C18H18O4, the dihedral angle between the mean planes of the aromatic rings is 7.39 (6)°. The dihedral angles between the linking C—C=C—C plane and the phenyl and benzene rings are 11.27 (5) and 4.20 (5)°, respectively.
Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm
In this work, we study the 1-bit convolutional neural networks (CNNs), of
which both the weights and activations are binary. While being efficient, the
classification accuracy of current 1-bit CNNs is much lower than that of
their real-valued counterparts on large-scale datasets such as
ImageNet. To minimize the performance gap between 1-bit and real-valued CNN
models, we propose a novel model, dubbed Bi-Real net, which connects the real
activations (after the 1-bit convolution and/or BatchNorm layer, before the
sign function) to activations of the consecutive block, through an identity
shortcut. Consequently, compared to the standard 1-bit CNN, the
representational capability of the Bi-Real net is significantly enhanced and
the additional computational cost is negligible. Moreover, we develop a
specific training algorithm including three technical novelties for 1-bit
CNNs. Firstly, we derive a tight approximation to the derivative of the
non-differentiable sign function with respect to activation. Secondly, we
propose a magnitude-aware gradient with respect to the weight for updating the
weight parameters. Thirdly, we pre-train the real-valued CNN model with a clip
function, rather than the ReLU function, to better initialize the Bi-Real net.
Experiments on ImageNet show that the Bi-Real net with the proposed training
algorithm achieves 56.4% and 62.2% top-1 accuracy with 18 layers and 34 layers,
respectively. Compared to the state of the art (e.g., XNOR-Net), Bi-Real net
achieves up to 10% higher top-1 accuracy with greater memory savings and lower
computational cost. Keywords: binary neural network, 1-bit CNNs, 1-layer-per-block
Comment: Accepted to European Conference on Computer Vision (ECCV) 2018. Code is available at: https://github.com/liuzechun/Bi-Real-ne
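The block structure the abstract describes can be sketched as follows. This is an illustration assembled from the abstract, not the released code; the surrogate gradient, the 3x3 single-convolution layout, and the omission of the magnitude-aware weight scaling are simplifying assumptions.

```python
# Sketch of a Bi-Real-style 1-bit block (illustrative assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ApproxSign(torch.autograd.Function):
    """Binarize activations; backward uses a piecewise-polynomial surrogate
    instead of the zero-almost-everywhere derivative of sign."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                       # zeros stay 0; acceptable for a sketch

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        surrogate = torch.where(x.abs() < 1, 2 - 2 * x.abs(), torch.zeros_like(x))
        return grad_out * surrogate

class BiRealBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        b = ApproxSign.apply(x)                    # 1-bit activations
        w = self.conv.weight
        bw = w + (torch.sign(w) - w).detach()      # 1-bit weights via straight-through
        out = self.bn(F.conv2d(b, bw, padding=1))
        return out + x                             # identity shortcut keeps real activations

x = torch.randn(2, 16, 8, 8)
print(BiRealBlock(16)(x).shape)                    # torch.Size([2, 16, 8, 8])
```

The two points the abstract highlights are visible here: the shortcut adds the real-valued input to the block output, so the representational capability is not limited to binary values, and ApproxSign replaces the derivative of the sign function with a tighter piecewise-polynomial surrogate.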
LLM-FP4: 4-Bit Floating-Point Quantized Transformers
We propose LLM-FP4 for quantizing both weights and activations in large
language models (LLMs) down to 4-bit floating-point values, in a post-training
manner. Existing post-training quantization (PTQ) solutions are primarily
integer-based and struggle with bit widths below 8 bits. Compared to integer
quantization, floating-point (FP) quantization is more flexible and can better
handle long-tail or bell-shaped distributions, and it has emerged as a default
choice in many hardware platforms. One characteristic of FP quantization is
that its performance largely depends on the choice of exponent bits and
clipping range. In this regard, we construct a strong FP-PTQ baseline by
searching for the optimal quantization parameters. Furthermore, we observe a
high inter-channel variance and low intra-channel variance pattern in
activation distributions, which makes activation quantization difficult. We
recognize this pattern to be consistent across a spectrum of transformer models
designed for diverse tasks, such as LLMs, BERT, and Vision Transformer models.
To tackle this, we propose per-channel activation quantization and show that
these additional scaling factors can be reparameterized as exponential biases
of weights, incurring a negligible cost. Our method, for the first time, can
quantize both the weights and activations of LLaMA-13B to only 4 bits and
achieves an average score of 63.1 on the common sense zero-shot reasoning
tasks, which is only 5.8 points lower than the full-precision model, significantly
outperforming the previous state-of-the-art by 12.7 points. Code is available
at: https://github.com/nbasyl/LLM-FP4.
Comment: EMNLP 2023 Main Conference.
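The abstract's point that FP quantization quality hinges on the exponent/mantissa split and the clipping range can be illustrated with a toy search. The grid construction, the search space, and the MSE criterion below are assumptions for illustration, not the LLM-FP4 code.

```python
# Toy simulation of low-bit floating-point (FP4-style) quantization and a
# small search over its parameters (illustrative assumptions noted above).
import torch

def fp_quantize(x, exp_bits, man_bits, max_val):
    """Project x onto a sign + exp_bits + man_bits floating-point grid clipped at max_val."""
    x = torch.clamp(x, -max_val, max_val)
    sign = torch.sign(x)
    mag = x.abs().clamp(min=1e-12)
    e = torch.floor(torch.log2(mag))                       # per-element exponent
    e_min = (e.max() - (2 ** exp_bits - 1)).item()         # limited exponent range
    e = torch.clamp(e, min=e_min)
    step = 2.0 ** (e - man_bits)                           # mantissa resolution at this exponent
    return sign * torch.round(mag / step) * step

def search_fp_params(x, bit_budget=4):
    """Grid-search the exponent/mantissa split and clipping range by reconstruction MSE."""
    best = None
    for exp_bits in range(1, bit_budget):                  # one bit reserved for the sign
        man_bits = bit_budget - 1 - exp_bits
        for frac in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
            clip = frac * x.abs().max().item()
            err = ((fp_quantize(x, exp_bits, man_bits, clip) - x) ** 2).mean().item()
            if best is None or err < best[0]:
                best = (err, exp_bits, man_bits, clip)
    return best

w = torch.randn(4096) * 0.05                               # synthetic bell-shaped weights
mse, exp_bits, man_bits, clip = search_fp_params(w)
print(f"best FP4 split: E{exp_bits}M{man_bits}, clip={clip:.4f}, mse={mse:.2e}")
```

The per-channel activation trick the abstract mentions then folds each channel's scaling factor into the corresponding weights as an exponent bias, so inference incurs no extra multiplications; that reparameterization step is not shown in this sketch.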