Transformers Learn Shortcuts to Automata
Algorithmic reasoning requires capabilities which are most naturally
understood through recurrent models of computation, like the Turing machine.
However, Transformer models, while lacking recurrence, are able to perform such
reasoning using far fewer layers than the number of reasoning steps. This
raises the question: what solutions are learned by these shallow and
non-recurrent models? We find that a low-depth Transformer can represent the
computations of any finite-state automaton (thus, any bounded-memory
algorithm), by hierarchically reparameterizing its recurrent dynamics. Our
theoretical results characterize shortcut solutions, whereby a Transformer with
o(T) layers can exactly replicate the computation of an automaton on an input
sequence of length T. We find that polynomial-sized O(log T)-depth
solutions always exist; furthermore, O(1)-depth simulators are surprisingly
common, and can be understood using tools from Krohn-Rhodes theory and circuit
complexity. Empirically, we perform synthetic experiments by training
Transformers to simulate a wide variety of automata, and show that shortcut
solutions can be learned via standard training. We further investigate the
brittleness of these solutions and propose potential mitigations.
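To make the log-depth claim concrete, here is a minimal Python sketch of the parallel-prefix intuition behind such shortcuts (an illustration of the idea only, not the paper's Transformer construction): because composing state-transition maps is associative, the states reached after every prefix of a length-T input can be computed in roughly log2(T) parallel rounds instead of T sequential steps.

def compose(f, g):
    # State maps are tuples indexed by state; compose(f, g) applies f, then g.
    return tuple(g[f[s]] for s in range(len(f)))

def prefix_states(delta, inputs, q0=0):
    # delta[a] is the transition map of symbol a; returns the state after each prefix.
    prefix = [delta[a] for a in inputs]
    T, step = len(inputs), 1
    while step < T:  # ~log2(T) rounds; every update within a round is independent
        prefix = [prefix[i] if i < step else compose(prefix[i - step], prefix[i])
                  for i in range(T)]
        step *= 2
    return [p[q0] for p in prefix]

# Example: the 2-state parity automaton, where symbol 1 flips the state.
delta = {0: (0, 1), 1: (1, 0)}
print(prefix_states(delta, [1, 0, 1, 1, 0, 1]))  # -> [1, 1, 0, 1, 1, 0]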
Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks
Transformer networks have seen great success in natural language processing
and machine vision, where task objectives such as next word prediction and
image classification benefit from nuanced context sensitivity across
high-dimensional inputs. However, there is an ongoing debate about how and when
transformers can acquire highly structured behavior and achieve systematic
generalization. Here, we explore how well a causal transformer can perform a
set of algorithmic tasks, including copying, sorting, and hierarchical
compositions of these operations. We demonstrate strong generalization to
sequences longer than those used in training by replacing the standard
positional encoding typically used in transformers with labels arbitrarily
paired with items in the sequence. We search for the layer and head
configuration sufficient to solve these tasks, then probe for signs of
systematic processing in latent representations and attention patterns. We show
that two-layer transformers learn reliable solutions to multi-level problems,
develop signs of task decomposition, and encode input items in a way that
encourages the exploitation of shared computation across related tasks. These
results provide key insights into how attention layers support structured
computation both within a task and across multiple tasks.
Comment: 18 pages
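The label-based positional scheme mentioned above can be sketched as follows; the specifics (label range, sampling) are assumptions for illustration rather than the authors' exact recipe. Each sequence gets position labels sampled from a range larger than any training length and then sorted, so the model learns relative order without binding behaviour to absolute positions.

import torch

def random_position_labels(seq_len, max_label=100):
    # Sample seq_len distinct labels from [0, max_label) and sort them in increasing order.
    return torch.sort(torch.randperm(max_label)[:seq_len]).values

d_model, seq_len, max_label = 32, 8, 100
label_embedding = torch.nn.Embedding(max_label, d_model)

tokens = torch.randn(seq_len, d_model)               # placeholder token embeddings
labels = random_position_labels(seq_len, max_label)  # increasing labels stand in for positions
inputs = tokens + label_embedding(labels)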
Engineering A Large Language Model From Scratch
The proliferation of deep learning in natural language processing (NLP) has
led to the development and release of innovative technologies capable of
understanding and generating human language with remarkable proficiency.
Atinuke, a Transformer-based neural network, optimises performance across
various language tasks by utilising a unique configuration. The architecture
interweaves layers for processing sequential data with attention mechanisms to
draw meaningful affinities between inputs and outputs. Due to the configuration
of its topology and hyperparameter tuning, it can emulate human-like language
by extracting features and learning complex mappings. Atinuke is modular,
extensible, and integrates seamlessly with existing machine learning pipelines.
Advanced matrix operations like softmax, embeddings, and multi-head attention
enable nuanced handling of textual, acoustic, and visual signals. By unifying
modern deep learning techniques with software design principles and
mathematical theory, the system achieves state-of-the-art results on natural
language tasks whilst remaining interpretable and robust.
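Since the abstract gives no implementation details beyond naming softmax, embeddings, and multi-head attention, the following is a generic scaled dot-product multi-head attention sketch, not Atinuke's actual code or configuration.

import math
import torch

def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads):
    # x: (seq, d_model); each w_* is a (d_model, d_model) projection matrix.
    seq, d_model = x.shape
    d_head = d_model // num_heads
    q = (x @ w_q).view(seq, num_heads, d_head).transpose(0, 1)  # (heads, seq, d_head)
    k = (x @ w_k).view(seq, num_heads, d_head).transpose(0, 1)
    v = (x @ w_v).view(seq, num_heads, d_head).transpose(0, 1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_head)        # pairwise affinities
    attn = torch.softmax(scores, dim=-1)                        # normalise per query
    out = (attn @ v).transpose(0, 1).reshape(seq, d_model)      # concatenate heads
    return out @ w_o

d_model, seq = 64, 10
w = [torch.randn(d_model, d_model) / math.sqrt(d_model) for _ in range(4)]
y = multi_head_attention(torch.randn(seq, d_model), *w, num_heads=8)  # (10, 64)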
Neural Sequence-to-grid Module for Learning Symbolic Rules
Logical reasoning tasks over symbols, such as learning arithmetic operations
and computer program evaluations, have become challenges to deep learning. In
particular, even state-of-the-art neural networks fail to achieve
\textit{out-of-distribution} (OOD) generalization of symbolic reasoning tasks,
whereas humans can easily extend learned symbolic rules. To resolve this
difficulty, we propose a neural sequence-to-grid (seq2grid) module, an input
preprocessor that automatically segments and aligns an input sequence into a
grid. As our module outputs a grid via a novel differentiable mapping, any
neural network structure taking a grid input, such as ResNet or TextCNN, can be
jointly trained with our module in an end-to-end fashion. Extensive experiments
show that neural networks having our module as an input preprocessor achieve
OOD generalization on various arithmetic and algorithmic problems including
number sequence prediction problems, algebraic word problems, and computer
program evaluation problems while other state-of-the-art sequence transduction
models cannot. Moreover, we verify that our module enhances TextCNN to solve
the bAbI QA tasks without external memory.
Comment: 9 pages, 9 figures, AAAI 2021
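As a rough, hard-decision sketch of what such a preprocessor produces (the actual seq2grid module makes these placement decisions through a differentiable soft mapping, so this greedy version only conveys the interface): each item either extends the current grid row or opens a new one, and the resulting grid can be fed to any 2-D model such as a TextCNN or ResNet.

import torch

def seq_to_grid(items, new_row_flags, height, width):
    # items: (T, d) embeddings; new_row_flags[t] starts a new grid row before item t.
    d = items.shape[1]
    grid = torch.zeros(height, width, d)
    row, col = 0, 0
    for x, new_row in zip(items, new_row_flags):
        if new_row and col > 0:
            row, col = row + 1, 0            # open the next row of the grid
        if row < height and col < width:
            grid[row, col] = x               # place the item; overflow is dropped
            col += 1
    return grid                              # (height, width, d), ready for a 2-D CNN

items = torch.randn(6, 8)
grid = seq_to_grid(items, [False, False, True, False, True, False], height=3, width=4)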
With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition
In egocentric videos, actions occur in quick succession. We capitalise on the
action's temporal context and propose a method that learns to attend to
surrounding actions in order to improve recognition performance. To incorporate
the temporal context, we propose a transformer-based multimodal model that
ingests video and audio as input modalities, with an explicit language model
providing action sequence context to enhance the predictions. We test our
approach on EPIC-KITCHENS and EGTEA datasets reporting state-of-the-art
performance. Our ablations showcase the advantage of utilising temporal context
as well as incorporating audio input modality and language model to rescore
predictions. Code and models at: https://github.com/ekazakos/MTCN
Comment: Accepted at BMVC 2021
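The rescoring step can be pictured with the sketch below (the function names and the linear interpolation rule are illustrative assumptions, not the released MTCN code): per-action scores from the audio-visual model are combined with scores from an action language model over the surrounding sequence, so temporally coherent action sequences are favoured.

import torch

def rescore(av_log_probs, lm_log_probs, alpha=0.5):
    # Both inputs: (num_actions, num_classes) log-probabilities; alpha weights the language model.
    return (1 - alpha) * av_log_probs + alpha * lm_log_probs

num_actions, num_classes = 5, 97   # e.g. the 97 verb classes of EPIC-KITCHENS-100
av = torch.log_softmax(torch.randn(num_actions, num_classes), dim=-1)
lm = torch.log_softmax(torch.randn(num_actions, num_classes), dim=-1)
preds = rescore(av, lm).argmax(dim=-1)   # one rescored prediction per action in the window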