Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks
Adaptive Computation Time (ACT) is one of the most promising architectures for
variable computation in recurrent neural networks. ACT adapts to the input
sequence by processing each sample more than once and learning how many times
it should do so. In this paper, we compare ACT to Repeat-RNN, a novel
architecture that repeats each sample a fixed number of times. Surprisingly,
we found that Repeat-RNN performs as well as ACT on the selected tasks. Source
code in TensorFlow and PyTorch is publicly available at
https://imatge-upc.github.io/danifojo-2018-repeatrnn/
Comment: Accepted as workshop paper at ICLR 2018
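The fixed-repetition idea behind Repeat-RNN can be sketched in a few lines of PyTorch: each timestep of the input is fed to the recurrent cell a fixed number of times before advancing to the next timestep. This is a minimal illustration of the concept, not the authors' implementation; the class name, constructor parameters, and `repeats` hyperparameter are assumptions for the sketch.

```python
import torch
import torch.nn as nn


class RepeatRNN(nn.Module):
    """Sketch of the Repeat-RNN idea (hypothetical names/signature):
    each input step is processed by the RNN cell a fixed number of
    times, in contrast to ACT, which learns the repetition count."""

    def __init__(self, input_size: int, hidden_size: int, repeats: int = 3):
        super().__init__()
        self.cell = nn.RNNCell(input_size, hidden_size)
        self.hidden_size = hidden_size
        self.repeats = repeats  # fixed, not learned as in ACT

    def forward(self, x: torch.Tensor):
        # x: (seq_len, batch, input_size)
        h = torch.zeros(x.size(1), self.hidden_size)
        outputs = []
        for t in range(x.size(0)):
            for _ in range(self.repeats):  # re-feed the same sample
                h = self.cell(x[t], h)
            outputs.append(h)
        # (seq_len, batch, hidden_size), final hidden state
        return torch.stack(outputs), h
```

With `repeats=1` this reduces to a standard RNN; the paper's finding is that a well-chosen fixed repeat count can match ACT's learned, input-dependent one on the tasks considered.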