Large language models (LLMs) have revolutionized natural language processing
tasks. However, their practical deployment is hindered by their immense memory
and computation requirements. Although recent post-training quantization (PTQ)
methods are effective in reducing the memory footprint and improving the
computational efficiency of LLMs, they rely on hand-crafted quantization
parameters, which leads to suboptimal performance and fails in extremely
low-bit quantization.
To tackle this issue, we introduce an Omnidirectionally calibrated Quantization
(OmniQuant) technique for LLMs, which achieves good performance in diverse
quantization settings while maintaining the computational efficiency of PTQ by
efficiently optimizing various quantization parameters. OmniQuant comprises two
innovative components: Learnable Weight Clipping (LWC) and Learnable
Equivalent Transformation (LET). LWC modulates the extreme values of weights by
optimizing a learnable clipping threshold. Meanwhile, LET tackles activation
outliers by shifting the difficulty of quantization from activations to weights
through a learnable equivalent transformation.
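To make the two components concrete, the sketch below shows, in PyTorch-style
code, one way LWC and LET could be realized for a single linear layer; the
asymmetric uniform quantizer, tensor shapes, and parameter names (gamma, beta,
s, d) are illustrative assumptions rather than the paper's exact
implementation.

\begin{verbatim}
import torch
import torch.nn as nn

def uniform_quant(w, n_bits, w_min, w_max):
    """Asymmetric uniform fake-quantization of w to n_bits within [w_min, w_max].
    (In training, round() would be paired with a straight-through estimator.)"""
    levels = 2 ** n_bits - 1
    scale = (w_max - w_min).clamp(min=1e-8) / levels
    zero = torch.round(-w_min / scale)
    w_int = torch.clamp(torch.round(w / scale) + zero, 0, levels)
    return (w_int - zero) * scale

class LearnableWeightClipping(nn.Module):
    """LWC: learnable clipping strengths shrink each output channel's weight
    range before uniform quantization, instead of a hand-crafted threshold."""
    def __init__(self, out_features, n_bits=4):
        super().__init__()
        self.n_bits = n_bits
        # start near the full weight range (sigmoid(4.0) is about 0.98)
        self.gamma = nn.Parameter(torch.full((out_features, 1), 4.0))
        self.beta = nn.Parameter(torch.full((out_features, 1), 4.0))

    def forward(self, w):  # w: [out_features, in_features]
        w_max = torch.sigmoid(self.gamma) * w.amax(dim=1, keepdim=True)
        w_min = torch.sigmoid(self.beta) * w.amin(dim=1, keepdim=True)
        return uniform_quant(w, self.n_bits, w_min, w_max)

class LearnableEquivalentTransform(nn.Module):
    """LET: per-channel scale s and shift d make the activations easier to
    quantize while the layer output stays mathematically unchanged, because
    s and d are folded into the (easier-to-quantize) weights and bias."""
    def __init__(self, in_features):
        super().__init__()
        self.s = nn.Parameter(torch.ones(in_features))
        self.d = nn.Parameter(torch.zeros(in_features))

    def transform(self, x, w, b):
        x_t = (x - self.d) / self.s   # outlier-suppressed activations
        w_t = w * self.s              # scale folded into weights [out, in]
        b_t = b + self.d @ w.t()      # shift folded into the bias [out]
        return x_t, w_t, b_t          # x_t @ w_t.t() + b_t == x @ w.t() + b
\end{verbatim}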
Operating within a differentiable framework using block-wise error
minimization, OmniQuant can optimize the quantization process efficiently for
both weight-only and weight-activation quantization.
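The block-wise error minimization can be pictured as the following hypothetical
calibration loop, in which each transformer block's quantization parameters are
tuned by gradient descent to match the block's full-precision outputs on a small
calibration set; the function and variable names are assumptions for
illustration only.

\begin{verbatim}
import torch
import torch.nn.functional as F

def calibrate_block(fp_block, quant_block, calib_inputs, epochs=20, lr=1e-2):
    """Hypothetical block-wise calibration: only the learnable quantization
    parameters (LWC clipping strengths, LET scales/shifts) are optimized so
    that the quantized block reproduces the full-precision block's outputs."""
    # assume the original weights are frozen (requires_grad=False) and only
    # the LWC/LET parameters remain trainable
    params = [p for p in quant_block.parameters() if p.requires_grad]
    opt = torch.optim.AdamW(params, lr=lr)
    for _ in range(epochs):
        for x in calib_inputs:                 # e.g. 128 calibration samples
            with torch.no_grad():
                target = fp_block(x)           # full-precision block output
            loss = F.mse_loss(quant_block(x), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return quant_block

# Blocks are calibrated one at a time, so only a single block's parameters
# and optimizer state need to sit in GPU memory:
# for fp_blk, q_blk in zip(fp_model.blocks, quant_model.blocks):
#     calibrate_block(fp_blk, q_blk, calib_inputs)
\end{verbatim}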
For instance, the LLaMA-2 model family (7B-70B) can be processed with OmniQuant
on a single A100-40G GPU within 1-16 hours using 128 calibration samples.
Extensive experiments validate OmniQuant's superior performance
across diverse quantization configurations such as W4A4, W6A6, W4A16, W3A16,
and W2A16. Additionally, OmniQuant demonstrates effectiveness in
instruction-tuned models and delivers notable improvements in inference speed
and memory usage on real devices. Code and models are available at
\url{https://github.com/OpenGVLab/OmniQuant}.