Tensor Regression Networks
Convolutional neural networks typically consist of many convolutional layers
followed by one or more fully connected layers. While convolutional layers map
between high-order activation tensors, the fully connected layers operate on
flattened activation vectors. Despite empirical success, this approach has
notable drawbacks. Flattening followed by fully connected layers discards
multilinear structure in the activations and requires many parameters. We
address these problems by incorporating tensor algebraic operations that
preserve multilinear structure at every layer. First, we introduce Tensor
Contraction Layers (TCLs) that reduce the dimensionality of their input while
preserving their multilinear structure using tensor contraction. Next, we
introduce Tensor Regression Layers (TRLs), which express outputs through a
low-rank multilinear mapping from a high-order activation tensor to an output
tensor of arbitrary order. We learn the contraction and regression factors
end-to-end, and produce accurate nets with fewer parameters. Additionally, our
layers regularize networks by imposing low-rank constraints on the activations
(TCL) and regression weights (TRL). Experiments on ImageNet show that, applied
to VGG and ResNet architectures, TCLs and TRLs reduce the number of parameters
compared to fully connected layers by more than 65% while maintaining or
increasing accuracy. In addition to the space savings, our approach's ability
to leverage topological structure can be crucial for structured data such as
MRI. In particular, we demonstrate significant performance improvements over
comparable architectures on three tasks associated with the UK Biobank dataset.
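
To make the idea concrete, here is a minimal PyTorch sketch of a tensor contraction layer followed by a Tucker-structured tensor regression layer. The shapes, ranks, and initialization below are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch only: shapes, ranks and initialization are assumed,
# not taken from the paper's reference code.
import torch
import torch.nn as nn


class TensorContractionLayer(nn.Module):
    """Contract each non-batch mode of the activation with a small factor matrix."""

    def __init__(self, in_shape=(256, 7, 7), ranks=(64, 5, 5)):
        super().__init__()
        self.factors = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(d, r)) for d, r in zip(in_shape, ranks)]
        )

    def forward(self, x):  # x: (batch, C, H, W)
        # Contract C, H and W with their factors; the batch mode is untouched,
        # so the multilinear structure of the activation is preserved.
        return torch.einsum('bchw,cp,hq,wr->bpqr', x, *self.factors)


class TensorRegressionLayer(nn.Module):
    """Low-rank (Tucker-structured) multilinear map from activations to outputs."""

    def __init__(self, in_shape=(64, 5, 5), core_ranks=(16, 3, 3), n_outputs=1000):
        super().__init__()
        self.core = nn.Parameter(0.01 * torch.randn(*core_ranks, n_outputs))
        self.factors = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(d, r)) for d, r in zip(in_shape, core_ranks)]
        )
        self.bias = nn.Parameter(torch.zeros(n_outputs))

    def forward(self, x):  # x: (batch, p, q, r)
        # Project each mode onto the core ranks, then contract with the core.
        x = torch.einsum('bpqr,pi,qj,rk->bijk', x, *self.factors)
        return torch.einsum('bijk,ijko->bo', x, self.core) + self.bias


# Usage: replace flatten + fully connected head with TCL + TRL.
head = nn.Sequential(TensorContractionLayer(), TensorRegressionLayer())
logits = head(torch.randn(8, 256, 7, 7))  # -> (8, 1000)
```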
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
Recurrent Neural Networks (RNNs) are powerful sequence modeling tools.
However, when dealing with high-dimensional inputs, training RNNs becomes
computationally expensive due to the large number of model parameters.
This hinders RNNs from solving many important computer vision tasks, such as
Action Recognition in Videos and Image Captioning. To overcome this problem, we
propose a compact and flexible structure, namely Block-Term tensor
decomposition, which greatly reduces the parameters of RNNs and improves their
training efficiency. Compared with alternative low-rank approximations, such as
tensor-train RNN (TT-RNN), our method, Block-Term RNN (BT-RNN), is not only
more concise (when using the same rank), but also able to attain a better
approximation to the original RNNs with far fewer parameters. On three
challenging tasks, including Action Recognition in Videos, Image Captioning and
Image Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of
both prediction accuracy and convergence rate. Specifically, BT-LSTM uses
17,388 times fewer parameters than the standard LSTM while achieving an accuracy
improvement of more than 15.6% on the Action Recognition task on the UCF11 dataset.
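
As a rough illustration of where the parameter saving comes from, the sketch below replaces a dense input-to-hidden weight matrix with a block-term (sum-of-Tucker) factorization. The mode sizes, rank, and block count are assumptions chosen for the example; this is not the paper's code.

```python
# Illustrative sketch: mode sizes, rank and number of blocks are assumptions.
import torch
import torch.nn as nn


class BlockTermLinear(nn.Module):
    """y = W x with W stored as a sum of N low-rank Tucker terms.

    The input dim I = I1*I2 and output dim J = J1*J2 are factorized, so the
    weight becomes a 4-way tensor of shape (I1, I2, J1, J2).
    """

    def __init__(self, in_modes=(32, 32), out_modes=(16, 16), rank=4, n_blocks=2):
        super().__init__()
        self.in_modes = in_modes

        def p(*shape):
            return nn.Parameter(0.05 * torch.randn(*shape))

        self.cores = nn.ParameterList([p(rank, rank, rank, rank) for _ in range(n_blocks)])
        self.a1 = nn.ParameterList([p(in_modes[0], rank) for _ in range(n_blocks)])
        self.a2 = nn.ParameterList([p(in_modes[1], rank) for _ in range(n_blocks)])
        self.b1 = nn.ParameterList([p(out_modes[0], rank) for _ in range(n_blocks)])
        self.b2 = nn.ParameterList([p(out_modes[1], rank) for _ in range(n_blocks)])

    def forward(self, x):  # x: (batch, I1*I2)
        x = x.view(-1, *self.in_modes)  # (batch, I1, I2)
        y = 0
        for g, a1, a2, b1, b2 in zip(self.cores, self.a1, self.a2, self.b1, self.b2):
            # Contract the input modes with their factors, pass through the core,
            # then expand the two output modes.
            y = y + torch.einsum('bij,ip,jq,pqrs,kr,ls->bkl', x, a1, a2, g, b1, b2)
        return y.reshape(x.shape[0], -1)  # (batch, J1*J2)


# Usage: a dense 1024 -> 256 map needs 262,144 weights; this block-term version
# uses 2 * (4**4 + 2*32*4 + 2*16*4) = 1,280 parameters.
layer = BlockTermLinear()
y = layer(torch.randn(8, 1024))  # -> (8, 256)
```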
Applying Deep Learning To Airbnb Search
The application to search ranking is one of the biggest machine learning
success stories at Airbnb. Much of the initial gain was driven by a gradient
boosted decision tree model. The gains, however, plateaued over time. This
paper discusses the work done in applying neural networks in an attempt to
break out of that plateau. We present our perspective not with the intention of
pushing the frontier of new modeling techniques. Instead, ours is a story of
the elements we found useful in applying neural networks to a real life
product. Deep learning was steep learning for us. To other teams embarking on
similar journeys, we hope an account of our struggles and triumphs will provide
some useful pointers. Bon voyage!