Scaling Transformer to 1M tokens and beyond with RMT
This technical report presents the application of a recurrent memory to
extend the context length of BERT, one of the most effective Transformer-based
models in natural language processing. By leveraging the Recurrent Memory
Transformer architecture, we have successfully increased the model's effective
context length to an unprecedented two million tokens, while maintaining high
memory retrieval accuracy. Our method allows for the storage and processing of
both local and global information and enables information flow between segments
of the input sequence through the use of recurrence. Our experiments
demonstrate the effectiveness of our approach, which holds significant
potential to enhance long-term dependency handling in natural language
understanding and generation tasks and to enable large-scale context
processing for memory-intensive applications.
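As a rough illustration of the segment-level recurrence described above, the sketch below splits a long embedded sequence into segments and carries a small set of memory vectors from one segment to the next. It is a minimal sketch under assumed interfaces: the wrapper class, its parameters, and the backbone call signature are illustrative, not the authors' implementation.

import torch
import torch.nn as nn

class SegmentRecurrentWrapper(nn.Module):
    """Feeds a backbone encoder one segment at a time, prepending the memory
    state produced while processing the previous segment."""
    def __init__(self, backbone: nn.Module, hidden_size: int, num_memory_tokens: int = 10):
        super().__init__()
        self.backbone = backbone                      # any module mapping [B, L, H] -> [B, L, H]
        self.num_mem = num_memory_tokens
        self.init_memory = nn.Parameter(torch.randn(num_memory_tokens, hidden_size) * 0.02)

    def forward(self, segments):
        # segments: list of [batch, seg_len, hidden] embedded inputs
        batch = segments[0].size(0)
        mem = self.init_memory.unsqueeze(0).expand(batch, -1, -1)  # initial memory state
        outputs = []
        for seg in segments:
            x = torch.cat([mem, seg], dim=1)          # memory tokens + segment tokens
            h = self.backbone(x)                      # full attention within this segment only
            mem = h[:, :self.num_mem]                 # updated memory carried to the next segment
            outputs.append(h[:, self.num_mem:])       # per-token outputs for this segment
        return outputs, mem

With, for example, nn.TransformerEncoder(..., batch_first=True) as the backbone, the attention cost of each step depends only on the segment length, while information still propagates across segments through the memory tokens.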
Recurrent Memory Transformer
Transformer-based models show their effectiveness across multiple domains and
tasks. Self-attention allows information from all sequence elements to be
combined into context-aware representations. However, global and local
information has to be stored mostly in the same element-wise representations.
Moreover, the length of the input sequence is limited by the quadratic
computational complexity of self-attention.
In this work, we propose and study a memory-augmented segment-level recurrent
Transformer (Recurrent Memory Transformer). Memory allows the model to store and
process local and global information and to pass information between segments of
a long sequence with the help of recurrence. We implement the memory mechanism
with no changes to the Transformer model by adding special memory tokens to the
input or output sequence. The Transformer is then trained to control both memory
operations and sequence representation processing.
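Since the memory is just extra tokens concatenated to each segment, the mechanism can be sketched without touching the Transformer itself. The sketch below shows one plausible layout for a single segment, with read memory placed before the segment tokens and write memory after them; the function and variable names are illustrative and not taken from the paper's code.

import torch

def process_segment(transformer, segment_emb, read_mem):
    """segment_emb: [batch, seg_len, hidden]; read_mem: [batch, m, hidden].
    Returns per-token outputs for the segment and the memory written for the
    next segment."""
    m = read_mem.size(1)
    # [read memory | segment tokens | write memory]; the write slots start as a
    # copy of the read memory and are updated by ordinary self-attention
    x = torch.cat([read_mem, segment_emb, read_mem], dim=1)
    h = transformer(x)                    # unmodified Transformer forward pass
    seg_out   = h[:, m:-m]                # representations of the segment tokens
    write_mem = h[:, -m:]                 # memory passed to the next segment
    return seg_out, write_mem

# Usage over a long sequence split into segments (mem starts from a learned
# or zero initial state):
# for seg in segments:
#     seg_out, mem = process_segment(model, seg, mem)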
Experimental results show that our model performs on par with Transformer-XL on
language modeling for smaller memory sizes and outperforms it on tasks that
require processing of longer sequences. We show that adding memory tokens to
Transformer-XL improves its performance. This makes the Recurrent Memory
Transformer a promising architecture for applications that require learning
long-term dependencies and for general-purpose processing in memory, such as
algorithmic tasks and reasoning.
Project Animat Brain: Designing the animat control system on the basis of the functional systems theory
Abstract. The paper describes the design of an animat control system (the Animat Brain) based on Petr K. Anokhin's theory of functional systems. We propose an animat control system that consists of a set of functional systems (FSs) and enables predictive and purposeful behavior. Each FS consists of two neural networks: the Actor and the Model. The Actors are intended to form chains of actions, and the Models are intended to predict future events. There are primary and secondary repertoires of behaviors: the primary repertoire is formed by evolution; the secondary repertoire is formed by learning. The paper describes both the principles of the Animat Brain's operation and a particular model of predictive behavior in a cellular landmark environment.
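Purely as an illustration of the Actor/Model pairing inside one functional system, the sketch below couples an Actor that maps the current situation to an action with a Model that predicts the next situation from that situation and action. The class, layer sizes, and method names are hypothetical; the paper does not specify an implementation.

import torch
import torch.nn as nn

class FunctionalSystem(nn.Module):
    """One FS: an Actor proposing the next action and a Model predicting the
    resulting situation (a future event)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 32):
        super().__init__()
        # Actor: current situation -> action (chains of such actions form behavior)
        self.actor = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, action_dim))
        # Model: (situation, action) -> predicted next situation
        self.model = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, state_dim))

    def step(self, state: torch.Tensor):
        action = self.actor(state)
        predicted_next_state = self.model(torch.cat([state, action], dim=-1))
        return action, predicted_next_state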