Efficient Neural Network Compression
Network compression reduces the computational complexity and memory
consumption of deep neural networks by reducing the number of parameters. In
SVD-based network compression, the right rank needs to be decided for every
layer of the network. In this paper, we propose an efficient method for
obtaining the rank configuration of the whole network. Unlike previous methods
which consider each layer separately, our method considers the whole network to
choose the right rank configuration. We propose novel metrics that capture the
relationship between accuracy and complexity for a given neural network.
We use these metrics in a non-iterative fashion to obtain the right rank
configuration which satisfies the constraints on FLOPs and memory while
maintaining sufficient accuracy. Experiments show that our method provides a
better compromise between accuracy and computational complexity/memory
consumption while performing compression at much higher speed. For VGG-16, our
method can reduce the FLOPs by 25% and improve accuracy by 0.7% compared to
the baseline, while requiring only 3 minutes on a CPU to search for the right
rank configuration; previously, similar results were achieved in 4 hours with 8
GPUs. The proposed method can be used for lossless compression of a neural
network as well. The better accuracy-complexity compromise, together with the
extremely fast speed of our method, makes it well suited for neural network
compression.
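The core operation the abstract builds on, truncated SVD of a layer's weight matrix, can be sketched as follows. This is a minimal illustration of compressing a single fully connected layer to a fixed rank; the paper's contribution is the fast, whole-network search for the rank of every layer, which is not reproduced here, and the function name and shapes are illustrative.

```python
import numpy as np

def compress_layer(W, r):
    """Approximate W (m x n) by two factors A (m x r) and B (r x n),
    so that W @ x is replaced by A @ (B @ x) with fewer parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]   # absorb singular values into the left factor
    B = Vt[:r, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))
A, B = compress_layer(W, r=16)

original_params = W.size           # 256 * 128 = 32768
compressed_params = A.size + B.size  # 256*16 + 16*128 = 6144
# Rank 16 keeps under a fifth of the parameters; the approximation
# error is governed by the discarded singular values.
print(original_params, compressed_params)
```

Applying the two factors costs O(mr + rn) multiply-accumulates per input instead of O(mn), which is where the FLOP savings come from once r is small enough.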
Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
This paper demonstrates a method for tensorizing neural networks based upon
an efficient way of approximating scale invariant quantum states, the
Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a
replacement for the fully connected layers in a convolutional neural network
and test this implementation on the CIFAR-10 and CIFAR-100 datasets. The
proposed method outperforms factorization using tensor trains, providing
greater compression for the same level of accuracy and greater accuracy for the
same level of compression. We demonstrate MERA layers with 14000 times fewer
parameters and a reduction in accuracy of less than 1% compared to the
equivalent fully connected layers, scaling like O(N).
Comment: 8 pages, 2 figures
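The tensor-train factorization that the MERA layers are compared against can be sketched in a few lines. This is a hedged illustration of the baseline only, not of MERA itself (a MERA layer adds disentangler tensors on top of a tree-like contraction); the function name, mode sizes, and ranks below are illustrative choices, not taken from the paper.

```python
import numpy as np

def tt_matvec(cores, x):
    """Apply a tensor-train factorized matrix to a vector x.

    Each core G has shape (r_in, m, n, r_out): it maps an input mode of
    size n to an output mode of size m, threaded on bond ranks r."""
    t = x.reshape(1, 1, x.size)            # (out_done, rank, in_remaining)
    for G in cores:
        r_in, m, n, r_out = G.shape
        M, r, N = t.shape
        t = t.reshape(M, r, n, N // n)
        # contract the current input mode and bond rank with the core
        t = np.einsum('Mrna,rmns->Mmsa', t, G)
        t = t.reshape(M * m, r_out, N // n)
    return t.reshape(-1)

rng = np.random.default_rng(1)
# three cores factorizing a 64x64 matrix (modes 4x4x4, bond rank 2)
G1 = rng.standard_normal((1, 4, 4, 2))
G2 = rng.standard_normal((2, 4, 4, 2))
G3 = rng.standard_normal((2, 4, 4, 1))
x = rng.standard_normal(64)

# dense equivalent, for comparison: 4096 parameters vs 32+64+32 = 128
W_dense = np.einsum('aijb,bklc,cmnd->ikmjln', G1, G2, G3).reshape(64, 64)
y = tt_matvec([G1, G2, G3], x)
```

Both MERA and tensor trains replace the O(N^2) dense weight matrix with a network of small tensors; the abstract's claim is that MERA's extra structure buys a better compression-accuracy trade-off than this chain layout.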