Many papers in the academic literature address quantizing the weight
tensors of deep learning models to reduce inference latency and memory
footprint. TVM also has the ability to quantize weights and supports low-bit
computation. Although quantization is generally expected to improve inference
time, the performance of 8-bit quantization in TVM does not meet this
expectation. Applying 8-bit quantization to a deep learning model is usually
expected to reduce inference time to around 50% of the full-precision
baseline. In this case, however, the quantized version not only fails to
achieve the expected speedup but actually performs worse, with an inference
time roughly twice that of the non-quantized version. In this project, we
thoroughly investigate the reasons
behind the underperformance and assess the compatibility and optimization
opportunities of 8-bit quantization in TVM. We discuss the optimization of two
types of tasks, compute-bound and memory-bound, and provide a detailed
comparison of TVM's various optimization techniques for each. After identifying
the performance issues, we improve quantization by fixing a bug in graph
building. We further analyze multiple optimization strategies to reach the best
quantization result.
Our best experiment achieves a 163.88% improvement in inference time over the
TVM-compiled baseline for the compute-bound task and a 194.98% improvement for
the memory-bound task.
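For reference, the quantization flow under study follows TVM's Relay quantization API. The sketch below is a minimal, illustrative version of that flow, assuming a ResNet-18 testing workload and a CPU target; the qconfig values are assumptions for demonstration, not our exact experimental setup:

```python
import tvm
from tvm import relay
from tvm.relay import testing
from tvm.contrib import graph_executor

# Illustrative workload: ResNet-18 from TVM's testing utilities, standing in
# for the actual models used in the experiments (an assumption of this sketch).
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Quantize weights to 8 bits. These qconfig values are illustrative defaults;
# a real run would choose the calibration mode and scales to fit the model.
with relay.quantize.qconfig(calibrate_mode="global_scale",
                            global_scale=8.0,
                            nbit_weight=8,
                            dtype_weight="int8"):
    qmod = relay.quantize.quantize(mod, params=params)

# Compile the quantized graph (params are bound during quantization).
target = "llvm"  # assumed CPU target
with tvm.transform.PassContext(opt_level=3):
    qlib = relay.build(qmod, target=target)

# Time the compiled module to compare against the full-precision baseline.
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(qlib["default"](dev))
print(module.benchmark(dev, repeat=10))
```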