Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)
Deep neural networks (DNNs) have shown remarkable success in a variety of
machine learning applications. The capacity of these models (i.e., their number
of parameters) endows them with expressive power and allows them to reach the
desired performance. In recent years, there has been increasing interest in
deploying DNNs on resource-constrained devices (e.g., mobile devices) with
limited energy, memory, and computational budgets. To address this problem, we
propose Entropy-Constrained Trained Ternarization (EC2T), a general framework
to create sparse and ternary neural networks which are efficient in terms of
storage (e.g., at most two binary masks and two full-precision values are
required to store a weight matrix) and computation (e.g., MAC operations are
reduced to a few accumulations plus two multiplications). This approach
consists of two steps. First, a super-network is created by scaling the
dimensions of a pre-trained model (i.e., its width and depth). Subsequently,
this super-network is simultaneously pruned (using an entropy constraint) and
quantized (that is, ternary values are assigned layer-wise) in a training
process, resulting in a sparse and ternary network representation. We validate
the proposed approach on the CIFAR-10, CIFAR-100, and ImageNet datasets,
showing its effectiveness in image classification tasks.
Comment: Proceedings of the CVPR'20 Joint Workshop on Efficient Deep Learning
in Computer Vision. Code is available at
https://github.com/d-becking/efficientCNN
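The storage and computation savings claimed above follow directly from the sparse-ternary form of the weights. A minimal NumPy sketch of the idea, assuming each layer's weights take values in {-w_n, 0, +w_p} (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def encode_ternary(W):
    """Store a ternary weight matrix as two binary masks plus two scalars.

    Assumes W contains only three values {-w_n, 0, +w_p}; the masks mark
    the positive and negative entries, and the two full-precision scalars
    are shared across the whole matrix.
    """
    w_p = W[W > 0].max() if (W > 0).any() else 0.0
    w_n = -W[W < 0].min() if (W < 0).any() else 0.0
    pos_mask = W > 0
    neg_mask = W < 0
    return pos_mask, neg_mask, w_p, w_n

def ternary_dot(x, pos_mask, neg_mask, w_p, w_n):
    """Dot product with a ternary weight vector.

    Only additions occur inside the two sums; the multiplications are
    reduced to exactly two, one per shared scalar.
    """
    return w_p * x[pos_mask].sum() - w_n * x[neg_mask].sum()

# Sanity check against a plain dot product.
w_p, w_n = 0.5, 0.3
W = np.array([w_p, 0.0, -w_n, w_p, 0.0])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pos, neg, w_p, w_n = encode_ternary(W)
assert np.isclose(ternary_dot(x, pos, neg, w_p, w_n), W @ x)
```

The two boolean masks cost one bit per weight each, and zeros (which dominate after entropy-constrained pruning) need no stored value at all, which is where the storage advantage comes from.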
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
The field of video compression has developed some of the most sophisticated
and efficient compression algorithms known in the literature, enabling very
high compressibility for little loss of information. Whilst some of these
techniques are domain specific, many of their underlying principles are
universal in that they can be adapted and applied for compressing different
types of data. In this work we present DeepCABAC, a compression algorithm for
deep neural networks that is based on one of the state-of-the-art video coding
techniques. Concretely, it applies the Context-based Adaptive Binary Arithmetic
Coder (CABAC), originally designed for the H.264/AVC video coding standard and
since established as the state of the art for lossless compression, to the
network's parameters. Moreover, DeepCABAC employs a novel quantization scheme
that minimizes the rate-distortion function while simultaneously taking the
impact of quantization on the accuracy of the network into account.
Experimental results show that DeepCABAC consistently attains higher
compression rates than previously proposed coding techniques for neural network
compression. For instance, it is able to compress the VGG16 ImageNet model by
a factor of 63.6 with no loss of accuracy, thus representing the entire network
with merely 8.7 MB. The source code for encoding and decoding can be found at
https://github.com/fraunhoferhhi/DeepCABAC
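The core of any rate-distortion-optimized quantizer is to pick, for each weight, the reconstruction value minimizing distortion plus a Lagrangian rate penalty. The sketch below illustrates that general principle only; the grid, the probability model, and the lambda trade-off are simplifying assumptions, not DeepCABAC's actual algorithm:

```python
import numpy as np

def rd_quantize(w, grid, probs, lam):
    """Quantize scalar w to the grid point minimizing D + lambda * R.

    Distortion D is the squared error; the rate R of each grid point is
    approximated by its ideal code length -log2(p) under an assumed
    probability model (a real coder like CABAC adapts these probabilities).
    """
    rates = -np.log2(probs)                 # bits per symbol
    costs = (w - grid) ** 2 + lam * rates   # Lagrangian cost per candidate
    return grid[np.argmin(costs)]

grid = np.array([-0.1, 0.0, 0.1])           # candidate reconstruction values
probs = np.array([0.2, 0.6, 0.2])           # assumed symbol probabilities

# With a nonzero lambda, a weight near 0.06 snaps to the cheap-to-code 0.0;
# with lambda = 0 it simply goes to the nearest grid point, 0.1.
print(rd_quantize(0.06, grid, probs, lam=0.01))  # -> 0.0
print(rd_quantize(0.06, grid, probs, lam=0.0))   # -> 0.1
```

Raising lambda pushes more weights toward the most probable (cheapest) symbol, typically zero, which is how rate-aware quantization trades a little distortion for a much smaller bitstream.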