Despite progress across a broad range of applications, Transformers have
achieved only limited success in systematic generalization. The situation is especially
frustrating in the case of algorithmic tasks, where they often fail to find
intuitive solutions that route relevant information to the right node/operation
at the right time in the grid represented by Transformer columns. To facilitate
the learning of useful control flow, we propose two modifications to the
Transformer architecture, copy gate and geometric attention. Our novel Neural
Data Router (NDR) achieves 100% length generalization accuracy on the classic
compositional table lookup task, as well as near-perfect accuracy on the simple
arithmetic task and a new variant of ListOps testing for generalization across
computational depths. NDR's attention and gating patterns tend to be
interpretable as an intuitive form of neural routing. Our code is public.

Comment: Accepted to ICLR 2022
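To make the copy-gate idea concrete, here is a minimal PyTorch-style sketch of a Transformer layer whose output is gated between an update and a verbatim copy of the layer input. The class name `CopyGateLayer`, the sigmoid gate head, and all hyperparameters are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class CopyGateLayer(nn.Module):
    """Illustrative Transformer layer with a copy gate (hypothetical sketch).

    When the gate output is near zero, the column copies its input
    unchanged to the next layer instead of writing an update.
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm_attn = nn.LayerNorm(d_model)
        self.norm_ffn = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        # Gate head: sigmoid over a linear projection (an assumed parameterization).
        self.gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention over all columns, pre-norm residual form.
        h = self.norm_attn(x)
        a, _ = self.attn(h, h, h, need_weights=False)
        h = x + a
        # Candidate update and copy gate computed from the attended state.
        u = self.ffn(self.norm_ffn(h))
        g = self.gate(self.norm_ffn(h))
        # g -> 1 writes the update; g -> 0 copies the layer input unchanged.
        return g * u + (1.0 - g) * x


# Usage: one layer applied to a batch of 2 sequences of length 10.
layer = CopyGateLayer(d_model=64, n_heads=4)
y = layer(torch.randn(2, 10, 64))  # shape: (2, 10, 64)
```

The design intent of such gating is that a column can keep its state untouched across many layers and only apply an operation once the relevant operands have been routed to it; geometric attention (not sketched here) complements this by biasing each column toward the closest position whose content matches its query.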