Fast ConvNets Using Group-wise Brain Damage
We revisit the idea of brain damage, i.e. the pruning of the coefficients of
a neural network, and suggest how brain damage can be modified and used to
speed up convolutional layers. The approach uses the fact that many efficient
implementations reduce generalized convolutions to matrix multiplications. The
suggested brain damage process prunes the convolutional kernel tensor in a
group-wise fashion by adding group-sparsity regularization to the standard
training process. After such group-wise pruning, convolutions can be reduced to
multiplications of thinned dense matrices, which leads to a speedup. In a
comparison on AlexNet, the method achieves very competitive performance.
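As a rough illustration of the group-sparsity regularization described in this abstract (not the authors' exact implementation), here is a minimal PyTorch-style sketch. The grouping of the kernel tensor into columns of the lowered (im2col-style) weight matrix, the function name group_sparsity_penalty, and the usage snippet are assumptions based only on the abstract.

```python
import torch

def group_sparsity_penalty(conv_weight, eps=1e-8):
    """Group-lasso penalty over a conv kernel tensor (sketch, assumed grouping).

    conv_weight: (out_channels, in_channels, kH, kW)
    Each group is one column of the lowered (im2col-style) weight matrix,
    i.e. one (input channel, kernel position) slice across all output filters.
    """
    out_ch = conv_weight.shape[0]
    lowered = conv_weight.reshape(out_ch, -1)                   # (out_ch, in_ch*kH*kW)
    group_norms = torch.sqrt((lowered ** 2).sum(dim=0) + eps)   # L2 norm per column
    return group_norms.sum()                                    # sum of group norms (group lasso)

# Assumed usage inside a training step:
# loss = task_loss + lam * sum(group_sparsity_penalty(m.weight)
#                              for m in model.modules()
#                              if isinstance(m, torch.nn.Conv2d))
```

Columns whose norm is driven to near zero by this penalty can be dropped from the lowered weight matrix after training, which is what shrinks the per-layer matrix multiplication.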
Online Filter Clustering and Pruning for Efficient Convnets
Pruning filters is an effective method for accelerating deep neural networks (DNNs), but most existing approaches prune filters directly on a pre-trained network, which limits the achievable acceleration. Although each filter has its own effect in a DNN, if two filters are identical, one of them can be pruned safely. In this paper, we add an extra cluster loss term to the loss function that forces the filters in each cluster to become similar online. After training, we keep one filter in each cluster, prune the others, and fine-tune the pruned network to compensate for the loss. In particular, the clusters in every layer can be defined in advance, which is effective for pruning DNNs with residual blocks. Extensive experiments on the CIFAR10 and CIFAR100 benchmarks demonstrate the competitive performance of our proposed filter pruning method.
Comment: 5 pages, 4 figures
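As a rough sketch of the cluster loss idea (the exact form used in the paper is not given in the abstract), the following PyTorch-style function pulls the filters of each pre-defined cluster toward their cluster mean. The function name filter_cluster_loss, the squared-distance form, and the cluster assignment format are assumptions for illustration only.

```python
import torch

def filter_cluster_loss(conv_weight, cluster_ids):
    """Penalty that pulls filters in the same cluster toward each other (sketch).

    conv_weight: (out_channels, in_channels, kH, kW)
    cluster_ids: LongTensor of shape (out_channels,), pre-defined cluster per filter
    """
    filters = conv_weight.reshape(conv_weight.shape[0], -1)     # one row per filter
    loss = filters.new_zeros(())
    for c in cluster_ids.unique():
        members = filters[cluster_ids == c]
        centroid = members.mean(dim=0, keepdim=True)
        loss = loss + ((members - centroid) ** 2).sum()         # squared distance to cluster mean
    return loss

# Assumed post-training step: keep one filter per cluster (e.g. the one closest
# to the cluster mean), prune the rest, then fine-tune the pruned network.
```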