Neural Networks Compression for Language Modeling
In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). Conventional RNNs, e.g., LSTM-based networks used in language modeling, are known to suffer from either high space complexity or substantial inference time. This problem is especially acute for mobile applications, where constant interaction with a remote server is impractical. Using the Penn Treebank (PTB) dataset, we compare pruning, quantization, low-rank factorization, and tensor train decomposition for LSTM networks in terms of model size and suitability for fast inference.

Comment: Keywords: LSTM, RNN, language modeling, low-rank factorization, pruning, quantization. Published by Springer in the LNCS series, 7th International Conference on Pattern Recognition and Machine Intelligence, 201
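
To make two of the listed techniques concrete, the short Python/NumPy sketch below illustrates magnitude pruning and low-rank factorization via truncated SVD applied to a single weight matrix. This is a hypothetical illustration, not the paper's implementation; the matrix size (650) and rank (64) are assumptions chosen only for the example.

import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) by A @ B with A (m x rank), B (rank x n)
    using truncated SVD, cutting storage from m*n to rank*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude entries so that roughly the
    given fraction of weights is removed."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((650, 650))   # stand-in for an LSTM gate weight matrix
    A, B = low_rank_factorize(W, rank=64)
    print("original params:", W.size, "factorized params:", A.size + B.size)
    W_sparse = magnitude_prune(W, sparsity=0.9)
    print("nonzero weights after pruning:", np.count_nonzero(W_sparse))

In practice such compression is applied to the trained LSTM weight matrices (followed by fine-tuning), and quantization or tensor train decomposition can be combined with these steps, which is the trade-off the paper evaluates on PTB.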