Automated Pruning for Deep Neural Network Compression
In this work we present a method to improve the pruning step of the current
state-of-the-art methodology for compressing neural networks. The novelty of the
proposed pruning technique lies in its differentiability, which allows pruning to
be performed during the backpropagation phase of network training. This
enables end-to-end learning and strongly reduces the training time. The
technique is based on a family of differentiable pruning functions and a new
regularizer specifically designed to enforce pruning. The experimental results
show that jointly optimizing the thresholds and the network weights achieves
a higher compression rate, reducing the number of weights of the pruned network
by a further 14% to 33% compared to the current state-of-the-art. Furthermore,
we believe this is the first study to analyze the generalization capabilities,
in transfer learning tasks, of the features extracted by a pruned network. To
this end, we show that the representations learned with the proposed pruning
methodology maintain the same effectiveness and generality as those learned by
the corresponding non-compressed network on a set of different recognition tasks.
Comment: 8 pages, 5 figures. Published as a conference paper at ICPR 201
ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activations
Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs) are
useful for many practical tasks in machine learning. Synaptic weights, as well
as neuron activation functions within the deep network, are typically stored
in high-precision formats, e.g. 32-bit floating point. However, since storage
capacity is limited and each memory access consumes power, both storage
capacity and memory access are crucial factors in these networks. Here we
present the ADaPTION toolbox, which extends the popular deep learning library
Caffe to support training of deep CNNs with reduced numerical precision of
weights and activations using fixed-point notation. ADaPTION includes tools to
measure the dynamic range of weights and activations. Using the ADaPTION tools,
we quantized several CNNs, including VGG16, down to 16-bit weights and
activations with only a 0.8% drop in Top-1 accuracy. The quantization,
especially of the activations, increases sparsity by up to 50%, particularly in
early and intermediate layers, which we exploit to skip multiplications with
zero, thus performing faster and computationally cheaper inference.
Comment: 10 pages, 5 figures
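The following NumPy sketch illustrates the kind of fixed-point quantization described in this abstract: measure a tensor's dynamic range, choose an integer/fractional bit split, and round to the 16-bit grid. It is a generic illustration under these assumptions, not the ADaPTION/Caffe implementation; the function names are hypothetical.

```python
# Sketch of dynamic-range-driven fixed-point (Qm.f) quantization of a tensor.
import numpy as np

def choose_q_format(x, total_bits=16):
    # Integer bits needed to cover the measured dynamic range (one sign bit).
    max_abs = np.max(np.abs(x)) + 1e-12
    int_bits = max(0, int(np.ceil(np.log2(max_abs))))
    frac_bits = total_bits - 1 - int_bits
    return int_bits, frac_bits

def quantize_fixed_point(x, total_bits=16):
    int_bits, frac_bits = choose_q_format(x, total_bits)
    scale = 2.0 ** frac_bits
    # Round to the fixed-point grid and clip to the representable range.
    q = np.round(x * scale)
    q_max = 2.0 ** (total_bits - 1) - 1
    q = np.clip(q, -q_max - 1, q_max)
    return q / scale, (int_bits, frac_bits)

weights = np.random.randn(64, 128).astype(np.float32)
w_q, q_fmt = quantize_fixed_point(weights, total_bits=16)
print(q_fmt, np.max(np.abs(weights - w_q)))  # chosen format and max error
```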