Vision Transformers (ViTs) outperform Convolutional Neural Networks in
processing incomplete inputs because they do not require the imputation of
missing values. Therefore, ViTs are well suited for sequential decision-making,
e.g. in the Active Visual Exploration problem. However, they are
computationally inefficient because they perform a full forward pass each time
a piece of new sequential information arrives.
To reduce this computational inefficiency, we introduce the TOken REcycling
(TORE) modification of ViT inference, which can be used with any ViT
architecture. TORE divides a ViT into two parts: an iterator and an
aggregator. The iterator processes each piece of sequential information
separately into midway tokens, which are cached. The aggregator then processes
the midway tokens jointly to obtain the prediction. This way, the results of
computations made by the iterator can be reused.
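Below is a minimal PyTorch sketch of this inference scheme, assuming a plain transformer-encoder stack split at an arbitrary depth; the class and method names (ToreViT, observe, predict), the split point, and the layer sizes are illustrative assumptions, not the paper's exact implementation (patch embedding and positional encoding are omitted for brevity).

```python
import torch
import torch.nn as nn

class ToreViT(nn.Module):
    """Sketch: the first `split` blocks act as the iterator,
    the remaining blocks as the aggregator."""

    def __init__(self, dim=192, depth=8, split=4, num_heads=3, num_classes=10):
        super().__init__()
        def block():
            return nn.TransformerEncoderLayer(
                d_model=dim, nhead=num_heads, batch_first=True)
        self.iterator = nn.ModuleList(block() for _ in range(split))
        self.aggregator = nn.ModuleList(block() for _ in range(depth - split))
        self.head = nn.Linear(dim, num_classes)
        self.cache = []  # midway tokens from glimpses seen so far

    def observe(self, glimpse_tokens):
        # Run only the iterator on the new glimpse and cache the
        # midway tokens, so earlier computation is never repeated.
        x = glimpse_tokens
        for blk in self.iterator:
            x = blk(x)
        self.cache.append(x)

    def predict(self):
        # Aggregate all cached midway tokens jointly for the prediction.
        x = torch.cat(self.cache, dim=1)
        for blk in self.aggregator:
            x = blk(x)
        return self.head(x.mean(dim=1))

model = ToreViT()
for t in range(3):                          # three sequential glimpses
    model.observe(torch.randn(1, 16, 192))  # 16 patch tokens per glimpse
logits = model.predict()
```

Note that only the aggregator is rerun as new glimpses arrive; the cache would be cleared between episodes.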
In addition to efficient sequential inference, we propose a complementary
training policy, which significantly reduces the computational burden
associated with sequential decision-making while achieving state-of-the-art
accuracy.