Scaled Quantization for the Vision Transformer
Quantization using a small number of bits shows promise for reducing latency
and memory usage in deep neural networks. However, most quantization methods
cannot readily handle complicated functions such as the exponential and square
root, and prior approaches involve complex training processes that must
interact with floating-point values. This paper proposes a robust method for
the full integer quantization of vision transformer networks without requiring
any intermediate floating-point computations. The quantization techniques can
be applied in various hardware or software implementations, including
processor/memory architectures and FPGAs.
Comment: 9 pages, 0 figures
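To make the idea of integer-only evaluation concrete, the sketch below quantizes a tensor using a power-of-two scale and evaluates a square root with integer arithmetic only (Newton's method). The function names, the int8 target, and the power-of-two scaling are illustrative assumptions, not the scheme proposed in the paper.

```python
import numpy as np

def quantize_pow2(x, num_bits=8):
    """Quantize a float tensor to signed integers with a power-of-two scale.

    Returns (q, shift) with x ~= q * 2**(-shift). A power-of-two scale keeps
    later rescaling as bit shifts; this is an assumed choice, not necessarily
    the scaling used in the paper.
    """
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = max(float(np.max(np.abs(x))), 1e-12)  # guard against all-zero input
    shift = int(np.floor(np.log2(qmax / max_abs)))  # largest shift that avoids overflow
    q = np.clip(np.round(x * 2.0 ** shift), -qmax - 1, qmax).astype(np.int8)
    return q, shift

def int_isqrt(n: int) -> int:
    """Integer square root by Newton's method, using integer arithmetic only."""
    if n < 2:
        return n
    x, y = n, (n + 1) // 2
    while y < x:
        x, y = y, (y + n // y) // 2
    return x

# Usage: quantize a random tensor and check the round-trip error.
x = np.random.randn(4, 4).astype(np.float32)
q, shift = quantize_pow2(x)
print(np.max(np.abs(x - q / 2.0 ** shift)))  # error bounded by 2**(-shift - 1)
print(int_isqrt(1 << 20))                    # 1024, exact for perfect squares
```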