Learning Sparse Neural Networks via Sensitivity-Driven Regularization
The ever-increasing number of parameters in deep neural networks poses
challenges for memory-limited applications. Regularize-and-prune methods aim at
meeting these challenges by sparsifying the network weights. In this context, we
quantify the output sensitivity to the parameters (i.e., their relevance to the
network output) and introduce a regularization term that gradually lowers the
absolute value of parameters with low sensitivity. Thus, a very large fraction
of the parameters approach zero and are eventually set to zero by simple
thresholding. Our method surpasses most recent techniques in terms of both
sparsity and error rate; in some cases, it reaches twice the sparsity obtained
by other techniques at equal error rates.
- …
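To make the mechanism concrete, below is a minimal PyTorch sketch of one sensitivity-driven regularization step followed by thresholding. The sensitivity estimate (the magnitude of the output gradient with respect to each parameter, normalized per tensor), the shrinkage rule, and the names sensitivity_regularize, hard_threshold, lam, and tau are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn

    def sensitivity_regularize(model, output, lam=1e-4, eps=1e-12):
        # Approximate each parameter's sensitivity as |d(sum of outputs)/d(parameter)|,
        # normalized to [0, 1] per tensor (an assumption; the paper defines its own measure).
        params = [p for p in model.parameters() if p.requires_grad]
        grads = torch.autograd.grad(output.sum(), params, retain_graph=True)
        with torch.no_grad():
            for p, g in zip(params, grads):
                s = g.abs() / (g.abs().max() + eps)
                # Shrink low-sensitivity parameters toward zero;
                # high-sensitivity parameters are left mostly intact.
                p -= lam * (1.0 - s) * torch.sign(p)

    def hard_threshold(model, tau=1e-3):
        # Set near-zero parameters exactly to zero by simple thresholding.
        with torch.no_grad():
            for p in model.parameters():
                p[p.abs() < tau] = 0.0

    # Toy usage: one regularization step on a small linear layer.
    model = nn.Linear(10, 2)
    out = model(torch.randn(8, 10))
    sensitivity_regularize(model, out)
    hard_threshold(model)

In practice this shrinkage step would interleave with the usual task-loss updates over many epochs, so that insensitive weights drift toward zero gradually before the final thresholding pass.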