Scalar Quantization as Sparse Least Square Optimization
Quantization can be used to form new vectors/matrices whose shared values are
close to the original. In recent years, the popularity of scalar quantization
for value-sharing applications has soared, as it has proved highly useful in
reducing the complexity of neural networks. Existing clustering-based
quantization techniques, while well developed, have multiple drawbacks,
including dependency on the random seed, empty or out-of-range clusters, and
high time complexity for a large number of clusters. To overcome these
problems, this paper examines the problem of scalar quantization from a new
perspective, namely sparse least square optimization. Specifically, inspired
by the properties of sparse least square regression, several quantization
algorithms based on least square are
proposed. In addition, similar schemes with ℓ0 and ℓ1
regularization are proposed. Furthermore, to compute quantization results with
a given number of values/clusters, this paper designs an iterative method and
a clustering-based method, both of which are built on sparse least square.
The paper shows that the latter method is mathematically equivalent to an
improved version of the k-means clustering-based quantization algorithm,
although the two algorithms originate from different intuitions. The proposed
algorithms were tested on three types of data, and their computational
performance, including information loss, time consumption, and the
distribution of the values of the sparse vectors, was compared and analyzed.
The paper offers a new perspective from which to probe the area of
quantization, and the proposed algorithms can outperform existing methods,
especially under bit-width reduction scenarios in which the required
post-quantization resolution (number of values) is not significantly lower
than the original number of values.
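
For concreteness, the clustering-based baseline that this abstract criticizes
can be sketched as a 1-D Lloyd/k-means quantizer. The sketch below is a
generic illustration under that assumption, not the paper's proposed
sparse-least-square algorithm, and all names (kmeans_scalar_quantize,
n_levels) are hypothetical:

    import numpy as np

    def kmeans_scalar_quantize(x, n_levels, n_iter=50, seed=0):
        # Baseline clustering-based scalar quantization (1-D Lloyd / k-means):
        # alternate nearest-level assignment with a least-square refit of
        # each level (the mean of its assigned values).
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float).ravel()
        # Random initialization: the seed dependency the abstract criticizes.
        levels = rng.choice(x, size=n_levels, replace=False)
        for _ in range(n_iter):
            # Assignment step: nearest codebook level for every scalar.
            assign = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
            # Update step: the mean is the least-square-optimal level for a
            # fixed assignment; empty clusters (another drawback noted in
            # the abstract) simply keep their previous value here.
            for k in range(n_levels):
                members = x[assign == k]
                if members.size:
                    levels[k] = members.mean()
        return levels[assign], levels

    # Example: quantize 1000 samples to 8 shared values (3-bit resolution).
    x = np.random.default_rng(1).normal(size=1000)
    xq, codebook = kmeans_scalar_quantize(x, n_levels=8)
    print(codebook, np.mean((x - xq) ** 2))

The mean squared error printed at the end corresponds to the information-loss
metric compared in the paper's experiments.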
Weight Quantization for Multi-Layer Perceptrons using Soft-Weight Sharing
We propose a novel approach for quantizing the weights of a multi-layer perceptron for efficient VLSI implementation. Our approach uses soft-weight sharing, previously proposed for improved generalization, and considers the weights not as constant numbers but as random variables drawn from a Gaussian mixture distribution, which includes k-means clustering and uniform quantization as its special cases. This approach couples the training of weights for reduced error with their quantization. Simulations on synthetic and real regression and classification data sets compare various quantization schemes and demonstrate the advantage of the coupled training of distribution parameters.
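
As a rough illustration of the soft-weight-sharing idea described above, the
penalty below is the negative log-likelihood of the weights under a Gaussian
mixture prior; adding it to the task loss pulls the weights toward a small
set of shared centers, and after training each weight is hard-assigned to its
most responsible center. This is a minimal sketch with hypothetical names
(gmm_weight_penalty, quantize_weights), not the authors' implementation:

    import numpy as np

    def gmm_weight_penalty(w, pi, mu, sigma):
        # Negative log-likelihood of the weights w under a Gaussian mixture
        # prior with mixing weights pi, means mu, and std devs sigma. Adding
        # this term to the task loss couples training with quantization; the
        # mixture parameters are trained jointly with the weights.
        diff = w[:, None] - mu[None, :]
        log_comp = (np.log(pi)[None, :]
                    - 0.5 * (diff / sigma[None, :]) ** 2
                    - np.log(sigma)[None, :]
                    - 0.5 * np.log(2.0 * np.pi))
        return -np.logaddexp.reduce(log_comp, axis=1).sum()

    def quantize_weights(w, pi, mu, sigma):
        # After training, hard-assign each weight to its most responsible
        # mixture component; the component means become the shared values.
        diff = w[:, None] - mu[None, :]
        log_comp = (np.log(pi)[None, :]
                    - 0.5 * (diff / sigma[None, :]) ** 2
                    - np.log(sigma)[None, :])
        return mu[np.argmax(log_comp, axis=1)]

    # Example: 100 stand-in weights under a 3-component mixture prior.
    w = np.random.default_rng(0).normal(size=100)
    pi = np.array([0.5, 0.3, 0.2])
    mu = np.array([-0.5, 0.0, 0.5])
    sigma = np.full(3, 0.1)
    print(gmm_weight_penalty(w, pi, mu, sigma))
    print(np.unique(quantize_weights(w, pi, mu, sigma)))  # at most 3 values

With equal, vanishing variances the assignment reduces to nearest-center
clustering, and fixed, equally spaced means give uniform quantization, the
two special cases the abstract mentions.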