Progress Extrapolating Algorithmic Learning to Arbitrary Sequence Lengths
Recent neural network models for algorithmic tasks have substantially improved
extrapolation to sequences much longer than those seen in training, but
performance still degrades on very long or adversarial sequences, and this
remains an outstanding problem. We present alternative architectures and loss
terms to address these issues; in our testing, these approaches produced no
remaining extrapolation errors within memory constraints. We focus
on linear time algorithmic tasks including copy, parentheses parsing, and
binary addition. First, activation binning was used to discretize the trained
network, avoiding computational drift from continuous operations, and a
binning-based digital loss term was added to encourage discretizable
representations. In addition, a localized differentiable memory (LDM)
architecture, in contrast to distributed memory access, addressed the remaining
extrapolation errors and avoided unbounded growth of internal computational
states. Previous work has found that algorithmic extrapolation issues can also
be alleviated with approaches relying on program traces, but the current effort
does not rely on such traces.

Comment: 7 pages, 1 figure, 1 table, minor edits to clarify explanation
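As a rough illustration of the binning idea, the sketch below snaps activations
to evenly spaced bin centers and penalizes their distance from those centers.
The bin spacing, the squared-error penalty, the loss weight, and the PyTorch
framing are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def bin_activations(x: torch.Tensor, num_bins: int = 2) -> torch.Tensor:
    """Snap activations (assumed to lie in [0, 1]) to the nearest of
    `num_bins` evenly spaced bin centers (hypothetical helper)."""
    scale = num_bins - 1
    return torch.round(x * scale) / scale

def digital_loss(x: torch.Tensor, num_bins: int = 2) -> torch.Tensor:
    """Binning-based penalty: mean squared distance of each activation from
    its nearest bin center, encouraging discretizable representations."""
    return ((x - bin_activations(x, num_bins)) ** 2).mean()

# During training, the penalty is added to the task loss (weight is assumed);
# after training, the network is discretized by routing activations through
# bin_activations to avoid drift from continuous operations.
hidden = torch.sigmoid(torch.randn(4, 8))    # stand-in for hidden activations
task_loss = torch.tensor(0.0)                # placeholder for the real task loss
total_loss = task_loss + 0.1 * digital_loss(hidden)  # 0.1 is an assumed weight
hidden_discrete = bin_activations(hidden)    # discretized values at inference
```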
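Similarly, the following minimal sketch shows what localized, as opposed to
distributed, memory access could look like. The single-slot read/write window,
the gated overwrite, and the head shift over {-1, 0, +1} are illustrative
assumptions and do not reproduce the paper's LDM design.

```python
import torch
import torch.nn.functional as F

class LocalizedMemory:
    """Toy external memory whose head touches only one slot at a time,
    in contrast to attention distributed over the whole tape."""

    def __init__(self, num_slots: int = 64, slot_dim: int = 8):
        self.mem = torch.zeros(num_slots, slot_dim)  # the memory tape
        self.pos = 0                                 # discrete head position

    def read(self) -> torch.Tensor:
        # Localized read: only the slot under the head is visible.
        return self.mem[self.pos]

    def write(self, value: torch.Tensor, gate: torch.Tensor) -> None:
        # Gated overwrite of the slot under the head; gate lies in [0, 1].
        self.mem[self.pos] = gate * value + (1 - gate) * self.mem[self.pos]

    def move(self, shift_logits: torch.Tensor) -> None:
        # Pick a shift in {-1, 0, +1}; a soft mixture over the three shifts
        # would keep this step differentiable during training.
        shift = int(torch.argmax(F.softmax(shift_logits, dim=-1))) - 1
        self.pos = (self.pos + shift) % self.mem.shape[0]
```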