Neural Networks Compression for Language Modeling
In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). It is known that conventional RNNs, e.g., LSTM-based networks for language modeling, are characterized by either high space complexity or substantial inference time. This problem is especially acute for mobile applications, where constant interaction with a remote server is impractical. Using the Penn Treebank (PTB) dataset, we compare pruning, quantization, low-rank factorization, and tensor-train decomposition for LSTM networks in terms of model size and suitability for fast inference.
Comment: Keywords: LSTM, RNN, language modeling, low-rank factorization, pruning, quantization. Published by Springer in the LNCS series, 7th International Conference on Pattern Recognition and Machine Intelligence,
2017
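
To make one of the compared techniques concrete, the following is a minimal sketch of low-rank factorization via truncated SVD, applied to a single dense weight matrix of the kind that parameterizes an LSTM gate. This is an illustration under assumptions of our own (NumPy, an arbitrary 1024x1024 matrix, rank 64), not the code or configuration used in the paper.

    # Low-rank factorization sketch: replace a dense m x n weight matrix W
    # with two thin factors A (m x r) and B (r x n), cutting parameters
    # from m*n to r*(m + n). Matrix size and rank here are arbitrary.
    import numpy as np

    def low_rank_factorize(W: np.ndarray, rank: int):
        """Approximate W as A @ B using a truncated SVD."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        A = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
        B = Vt[:rank, :]
        return A, B

    rng = np.random.default_rng(0)
    W = rng.standard_normal((1024, 1024)).astype(np.float32)  # stand-in weight matrix
    A, B = low_rank_factorize(W, rank=64)

    ratio = W.size / (A.size + B.size)                        # 8.0x fewer parameters
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"parameter reduction: {ratio:.1f}x, relative error: {rel_err:.3f}")

Note that a random matrix is close to full rank, so the reconstruction error above is large; trained weight matrices tend to have faster-decaying spectra and compress far better, which is what makes the technique practical.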
Deep Quaternion Networks
The field of deep learning has seen significant advancement in recent years.
However, much of the existing work has focused on real-valued numbers. Recent work has shown that a deep learning system using complex numbers can be deeper for a fixed parameter budget than its real-valued counterpart.
In this work, we explore the benefits of generalizing one step further into the
hyper-complex numbers, quaternions specifically, and provide the architecture
components needed to build deep quaternion networks. We develop the theoretical basis by reviewing quaternion convolutions, introducing a novel quaternion weight-initialization scheme, and deriving novel algorithms for quaternion batch normalization. These components are tested in a classification model by
end-to-end training on the CIFAR-10 and CIFAR-100 data sets and a segmentation
model by end-to-end training on the KITTI Road Segmentation data set. These
quaternion networks show improved convergence compared to real-valued and
complex-valued networks, especially on the segmentation task, while having
fewer parameters.
Comment: IJCNN 2018, 8 pages, 1 figure
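
As a rough illustration of the algebra behind quaternion layers, here is a minimal sketch of the Hamilton product, the operation that lets a single quaternion weight mix the four components of a quaternion input with shared parameters. This is a generic illustration with made-up example values, not the authors' implementation.

    # Hamilton product of quaternions q = a + bi + cj + dk and
    # p = w + xi + yj + zk, the core operation in quaternion layers.
    import numpy as np

    def hamilton_product(q, p):
        a, b, c, d = q
        w, x, y, z = p
        return np.array([
            a*w - b*x - c*y - d*z,   # real part
            a*x + b*w + c*z - d*y,   # i component
            a*y - b*z + c*w + d*x,   # j component
            a*z + b*y - c*x + d*w,   # k component
        ])

    q = np.array([0.5, -0.1, 0.3, 0.2])   # weight quaternion (example values)
    p = np.array([1.0, 2.0, -1.0, 0.5])   # input quaternion (example values)
    print(hamilton_product(q, p))

Because the product realizes a 4-to-4 linear map with only 4 real parameters (versus 16 for an unconstrained real map), quaternion layers obtain the parameter savings the abstract refers to.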