Network Sketching: Exploiting Binary Structure in Deep CNNs
Convolutional neural networks (CNNs) with deep architectures have
substantially advanced the state-of-the-art in computer vision tasks. However,
deep networks are typically resource-intensive and thus difficult to deploy
on mobile devices. Recently, CNNs with binary weights have demonstrated
compelling efficiency, but the accuracy of such models is usually
unsatisfactory in practice. In this paper, we introduce network sketching as
a novel technique for pursuing binary-weight CNNs, targeting more faithful
inference and a better accuracy-efficiency trade-off for practical
applications. Our
basic idea is to exploit binary structure directly in pre-trained filter banks
and produce binary-weight models via tensor expansion. The whole process can be
treated as a coarse-to-fine model approximation, akin to the pencil drawing
steps of outlining and shading. To further speed up the generated models, namely
the sketches, we also propose an associative implementation of binary tensor
convolutions. Experimental results demonstrate that a proper sketch of AlexNet
(or ResNet) outperforms existing binary-weight models by large margins on
the ImageNet large-scale classification task, while requiring only slightly
more memory for network parameters.
Comment: To appear in CVPR 2017
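The greedy expansion behind network sketching can be illustrated in a few lines. Below is a minimal sketch, assuming the simplest direct-approximation variant: each binary tensor is the sign of the current residual, and its scale is the least-squares optimum for that sign pattern. The function name `sketch_filter` and the choice of three terms are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sketch_filter(w: np.ndarray, num_terms: int = 3):
    """Greedily approximate w as sum_i a_i * b_i with each b_i in {-1, +1}."""
    residual = w.astype(np.float64).copy()
    binaries, scales = [], []
    for _ in range(num_terms):
        b = np.sign(residual)
        b[b == 0] = 1.0  # break ties toward +1
        # Least-squares scale for a fixed sign pattern: a = <b, r> / <b, b>
        a = float(np.sum(b * residual)) / b.size
        binaries.append(b)
        scales.append(a)
        residual -= a * b  # coarse-to-fine: the next term fits what is left
    return binaries, scales

# Usage: reconstruct a random 3x3x64x64 filter bank from three binary terms.
w = np.random.randn(3, 3, 64, 64)
bs, sc = sketch_filter(w, num_terms=3)
approx = sum(a * b for a, b in zip(sc, bs))
print("relative error:", np.linalg.norm(w - approx) / np.linalg.norm(w))
```

Each added term shrinks the reconstruction error, mirroring the outlining-then-shading analogy: the first binary tensor captures the coarse sign structure, and later terms refine the residual.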
Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks
Quantized Neural Networks (QNNs), which use low bitwidth numbers for
representing parameters and performing computations, have been proposed to
reduce the computation complexity, storage size and memory usage. In QNNs,
parameters and activations are uniformly quantized, such that the
multiplications and additions can be accelerated by bitwise operations.
However, the distributions of parameters in Neural Networks are often
imbalanced, so uniform quantization determined from extremal values may
underutilize the available bitwidth. In this paper, we propose a novel quantization
method that can ensure the balance of distributions of quantized values. Our
method first recursively partitions the parameters by percentiles into balanced
bins, and then applies uniform quantization. We also introduce computationally
cheaper approximations of percentiles to reduce the overhead they introduce.
Overall, our method improves the prediction accuracy of QNNs
without introducing extra computation during inference, has negligible impact
on training speed, and is applicable to both Convolutional Neural Networks and
Recurrent Neural Networks. Experiments on standard datasets including ImageNet
and Penn Treebank confirm the effectiveness of our method. On ImageNet, the
top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, which
surpasses the previous state of the art for QNNs.
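To make the percentile binning concrete, here is a minimal sketch of balanced quantization, assuming a simplified, non-recursive variant: bin edges sit at equally spaced percentiles (so every bin holds roughly the same number of parameters), and each bin is mapped to one of 2^k uniformly spaced code values. The function `balanced_quantize` and its details are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def balanced_quantize(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Percentile-balanced quantization to 2**bits uniformly spaced levels."""
    n_bins = 2 ** bits
    # Equally spaced percentiles -> bins with (roughly) equal population,
    # regardless of how imbalanced the value distribution is.
    edges = np.percentile(x, np.linspace(0.0, 100.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    # Uniformly spaced code values keep the arithmetic bitwise-friendly.
    codes = np.linspace(x.min(), x.max(), n_bins)
    return codes[idx]

# Usage: a heavy-tailed (imbalanced) weight distribution.
rng = np.random.default_rng(0)
w = rng.laplace(size=10_000)
q = balanced_quantize(w, bits=4)
print("levels used:", np.unique(q).size)  # at most 2**4 = 16
```

Unlike plain uniform quantization, whose edges come from the extremal values and can leave most codes nearly empty under a heavy-tailed distribution, the percentile edges guarantee that every quantization level is actually exercised.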