Neural network compression via learnable wavelet transforms
Wavelets are well known for data compression, yet they have rarely been applied to
compressing neural networks. This paper shows how the fast wavelet transform can
be used to compress the linear layers of neural networks. Linear layers still
account for a significant share of the parameters in recurrent neural networks
(RNNs). With our method, we learn both the wavelet bases and the corresponding
coefficients to represent the linear layers of RNNs efficiently. Our
wavelet-compressed RNNs have significantly fewer parameters yet remain
competitive with the state of the art on synthetic and real-world RNN
benchmarks. Optimizing the wavelet basis adds flexibility without requiring a
large number of extra weights. Source code is available at
https://github.com/v0lta/Wavelet-network-compression.
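As a rough illustration of the general idea, here is a minimal sketch of a square linear layer parameterised in the wavelet domain as y = Aᵀ diag(d) A x, where A is a single-level fast wavelet transform with a learnable, Haar-initialised filter pair. The class and variable names are hypothetical, not taken from the repository, and the constraints that keep the learned filters valid wavelets (which the paper adds during training) are omitted; this is not the paper's exact parameterisation.

```python
import torch
import torch.nn.functional as F
from torch import nn


class WaveletLayer(nn.Module):
    """Sketch of a wavelet-compressed square linear layer:
    y = A^T diag(d) A x, with A a single-level fast wavelet transform
    built from learnable filters (Haar initialisation). The layer stores
    O(n) parameters instead of the n^2 of a dense weight matrix."""

    def __init__(self, n: int):
        super().__init__()
        assert n % 2 == 0, "single-level FWT needs an even dimension"
        s = 2 ** -0.5
        self.lo = nn.Parameter(torch.tensor([s, s]))   # learnable low-pass filter
        self.hi = nn.Parameter(torch.tensor([s, -s]))  # learnable high-pass filter
        self.d = nn.Parameter(torch.ones(n))           # diagonal in the wavelet domain
        self.bias = nn.Parameter(torch.zeros(n))

    def fwt(self, x):
        # Analysis transform: strided correlations with both filters.
        x = x.unsqueeze(1)                                  # (batch, 1, n)
        a = F.conv1d(x, self.lo.view(1, 1, 2), stride=2)    # approximation coeffs
        b = F.conv1d(x, self.hi.view(1, 1, 2), stride=2)    # detail coeffs
        return torch.cat([a, b], dim=-1).squeeze(1)         # (batch, n)

    def ifwt(self, c):
        # Synthesis transform: transposed strided convolutions, which invert
        # the analysis step exactly as long as the filters stay orthogonal.
        half = c.shape[-1] // 2
        a, b = c[:, None, :half], c[:, None, half:]
        x = (F.conv_transpose1d(a, self.lo.view(1, 1, 2), stride=2)
             + F.conv_transpose1d(b, self.hi.view(1, 1, 2), stride=2))
        return x.squeeze(1)

    def forward(self, x):
        # Transform, reweight per wavelet coefficient, transform back.
        return self.ifwt(self.d * self.fwt(x)) + self.bias


# Usage: drop in place of an n-by-n dense layer, e.g. a recurrent weight matrix.
layer = WaveletLayer(512)
y = layer(torch.randn(8, 512))  # (8, 512)
```

Making the filter pair itself a trainable parameter is what the abstract means by "learning the wavelet bases": the transform can adapt to the layer being compressed instead of being fixed to Haar or another standard wavelet, at the cost of only a handful of extra weights per filter.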