Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
Recurrent Neural Networks (RNNs) are powerful sequence modeling tools.
However, when dealing with high-dimensional inputs, the training of RNNs
becomes computationally expensive due to the large number of model parameters.
This hinders RNNs from solving many important computer vision tasks, such as
Action Recognition in Videos and Image Captioning. To overcome this problem, we
propose a compact and flexible structure, namely Block-Term tensor
decomposition, which greatly reduces the parameters of RNNs and improves their
training efficiency. Compared with alternative low-rank approximations, such as
tensor-train RNN (TT-RNN), our method, Block-Term RNN (BT-RNN), is not only
more concise (when using the same rank), but also able to attain a better
approximation to the original RNNs with far fewer parameters. On three
challenging tasks, including Action Recognition in Videos, Image Captioning and
Image Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of
both prediction accuracy and convergence rate. Specifically, BT-LSTM uses
17,388 times fewer parameters than the standard LSTM while achieving an accuracy
improvement of more than 15.6% on the Action Recognition task on the UCF11 dataset.
Comment: CVPR201
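The core idea is to reshape the dense input-to-hidden weight matrix into a high-order tensor and represent it as a sum of low-rank Tucker terms (the block-term format), so the matrix-vector product is computed from small cores and factor matrices instead of the full weight matrix. Below is a minimal NumPy sketch of such a factorised product; the mode sizes, rank, number of block terms, and the helper name bt_matvec are illustrative assumptions, not the paper's actual configuration or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mode sizes: input dim I = I1*I2*I3 = 512, output dim J = J1*J2*J3 = 64.
in_modes, out_modes = (8, 8, 8), (4, 4, 4)
N, R = 2, 3              # number of block terms and Tucker rank (illustrative)

# One Tucker term per block: a small core plus one factor matrix per mode,
# where mode k combines the input and output sizes (Ik * Jk).
cores = [rng.standard_normal((R, R, R)) for _ in range(N)]
factors = [[rng.standard_normal((i * j, R)) for i, j in zip(in_modes, out_modes)]
           for _ in range(N)]

def bt_matvec(x):
    """Apply the block-term factorised input-to-hidden map to a flat input x."""
    x = x.reshape(in_modes)
    y = np.zeros(out_modes)
    for G, (A1, A2, A3) in zip(cores, factors):
        # Reconstruct this Tucker term, then fold its combined (Ik*Jk) modes
        # back into separate input/output modes.
        W = np.einsum('abc,pa,qb,rc->pqr', G, A1, A2, A3)
        W = W.reshape(in_modes[0], out_modes[0],
                      in_modes[1], out_modes[1],
                      in_modes[2], out_modes[2])
        y += np.einsum('iajbkc,ijk->abc', W, x)
    return y.reshape(-1)

x = rng.standard_normal(int(np.prod(in_modes)))
y = bt_matvec(x)

dense_params = np.prod(in_modes) * np.prod(out_modes)
bt_params = sum(G.size + sum(A.size for A in As) for G, As in zip(cores, factors))
print(y.shape, dense_params, bt_params)   # (64,) 32768 630
```

Even at this toy scale the factorised map stores a few hundred parameters instead of tens of thousands, which is the source of the compression the abstract reports.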
Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition
Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term
Memory (LSTM) networks and Gated Recurrent Unit (GRU) networks, have achieved
promising performance in sequential data modeling. The hidden layers in RNNs
can be regarded as the memory units, which are helpful in storing information
in sequential contexts. However, when dealing with high-dimensional input data,
such as video and text, the input-to-hidden linear transformation in RNNs
incurs high memory usage and a huge computational cost, which makes the training
of RNNs hard to scale. To address this challenge, we propose a novel
compact LSTM model, named TR-LSTM, which utilizes the low-rank tensor ring
decomposition (TRD) to reformulate the input-to-hidden transformation. Compared
with other tensor decomposition methods, TR-LSTM is more stable. In addition,
TR-LSTM supports end-to-end training and also provides a fundamental
building block for RNNs handling large input data. Experiments on real-world
action recognition datasets have demonstrated the promising performance of the
proposed TR-LSTM compared with the tensor train LSTM and other state-of-the-art
competitors.
Comment: 9 page
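Tensor ring decomposition represents a d-way tensor as a cyclic chain of 3-way cores, so the reshaped input-to-hidden weight tensor is recovered by tracing a product of per-mode core slices. The following is a minimal NumPy sketch of that idea, assuming illustrative mode sizes, a uniform TR-rank, and a hypothetical helper tr_matvec; an actual TR-LSTM implementation would contract the cores with the input directly rather than reconstructing the full weight tensor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Combined modes of the input-to-hidden weight tensor: mode k has size Ik*Jk.
in_modes, out_modes = (8, 8, 8), (4, 4, 4)
rank = 3                                  # uniform TR-rank (illustrative)

# One 3-way core per mode; the ring closes because the last rank index
# of the final core is contracted with the first rank index of the first core.
cores = [rng.standard_normal((rank, i * j, rank))
         for i, j in zip(in_modes, out_modes)]

def tr_matvec(x):
    """Apply the tensor-ring factorised input-to-hidden map to a flat input x."""
    x = x.reshape(in_modes)
    # Contract the ring of cores into the full weight tensor:
    # W(m1, m2, m3) = Trace(G1[:, m1, :] @ G2[:, m2, :] @ G3[:, m3, :]).
    W = np.einsum('apb,bqc,cra->pqr', *cores)
    W = W.reshape(in_modes[0], out_modes[0],
                  in_modes[1], out_modes[1],
                  in_modes[2], out_modes[2])
    return np.einsum('iajbkc,ijk->abc', W, x).reshape(-1)

x = rng.standard_normal(int(np.prod(in_modes)))
y = tr_matvec(x)

dense_params = np.prod(in_modes) * np.prod(out_modes)
tr_params = sum(G.size for G in cores)
print(y.shape, dense_params, tr_params)   # (64,) 32768 864
```

Because every core connects to two neighbours in a closed loop, the ranks can be distributed more evenly across modes than in a tensor-train chain, which is one informal reading of the stability advantage the abstract claims.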