Searching for Low-Bit Weights in Quantized Neural Networks
Quantized neural networks with low-bit weights and activations are attractive
for developing AI accelerators. However, the quantization functions used in
most conventional quantization methods are non-differentiable, which increases
the optimization difficulty of quantized networks. Compared with full-precision
parameters (i.e., 32-bit floating-point numbers), low-bit values are selected
from a much smaller set. For example, there are only 16 possibilities in a
4-bit space.
Thus, we propose to regard the discrete weights in an arbitrary quantized
neural network as searchable variables, and we use a differentiable method to
search for them accurately. In particular, each weight is represented as a
probability distribution over the discrete value set. The probabilities are
optimized during training, and the values with the highest probability are
selected to establish the desired quantized network. Experimental results on
benchmarks demonstrate that the proposed method produces quantized neural
networks that outperform state-of-the-art methods on both image
classification and super-resolution tasks.
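
To make the idea concrete, below is a minimal PyTorch sketch of a linear
layer whose weights are distributions over a discrete value set. It assumes a
uniformly spaced value grid and an expected-weight surrogate during training;
the class name SearchableQuantLinear, the grid, and the training-time
surrogate are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableQuantLinear(nn.Module):
    # Hypothetical sketch: each weight is a probability distribution over a
    # small set of discrete low-bit values, optimized end-to-end.
    def __init__(self, in_features, out_features, bits=4):
        super().__init__()
        levels = 2 ** bits  # e.g. 16 candidate values for 4-bit weights
        # Assumed value set: uniformly spaced levels in [-1, 1].
        self.register_buffer("values", torch.linspace(-1.0, 1.0, levels))
        # One learnable logit per weight per discrete candidate value.
        self.logits = nn.Parameter(
            torch.randn(out_features, in_features, levels) * 0.01
        )

    def forward(self, x):
        probs = torch.softmax(self.logits, dim=-1)
        if self.training:
            # Differentiable surrogate (an assumption of this sketch):
            # the expected weight under the distribution.
            weight = (probs * self.values).sum(dim=-1)
        else:
            # Final quantized network: select the most probable value.
            idx = probs.argmax(dim=-1)
            weight = self.values[idx]
        return F.linear(x, weight)

# Usage: train with expected weights, then evaluate with argmax selection.
layer = SearchableQuantLinear(8, 4, bits=4)
y = layer(torch.randn(2, 8))

Because the softmax probabilities are continuous, gradients flow through the
weight search, sidestepping the non-differentiable rounding step of
conventional quantization.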