We propose a novel perspective on the attention mechanism by reinventing it
as a memory architecture for neural networks, namely Neural Attention Memory
(NAM). NAM is a memory structure that is both readable and writable via
differentiable linear algebra operations. We explore three use cases of NAM:
memory-augmented neural network (MANN), few-shot learning, and efficient
long-range attention. First, we design two NAM-based MANNs, Long Short-term
Attention Memory (LSAM) and the NAM Turing Machine (NAM-TM), which show greater
computational power on algorithmic zero-shot generalization tasks than
baselines such as the Differentiable Neural Computer (DNC). Next, we apply NAM to
the N-way K-shot learning task and show that it is more effective at reducing
false positives than the baseline cosine classifier. Finally, we
implement an efficient Transformer with NAM and evaluate it on the Long Range
Arena tasks, showing that NAM can be an efficient and effective alternative to
scaled dot-product attention.
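
As a rough illustration of a memory that is readable and writable purely through differentiable linear algebra, the sketch below accumulates key-value associations as outer products and reads them back with a matrix-vector product against a query. This is a minimal sketch of one common way to realize such a memory; the class name OuterProductMemory, the unit-normalization of keys and queries, and the absence of erase or gating terms are assumptions for illustration and are not taken from the paper's exact NAM formulation.

# Minimal sketch of a differentiable read/write key-value memory
# (illustrative only; not the paper's exact update rules).
import torch

class OuterProductMemory(torch.nn.Module):
    def __init__(self, key_dim: int, value_dim: int):
        super().__init__()
        self.key_dim = key_dim
        self.value_dim = value_dim
        self.reset()

    def reset(self):
        # Memory is a value_dim x key_dim matrix, initialized to zero.
        self.M = torch.zeros(self.value_dim, self.key_dim)

    def write(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Differentiable write: add the outer product of the value and a
        # unit-normalized key to the memory matrix.
        k = key / (key.norm() + 1e-8)
        self.M = self.M + torch.outer(value, k)

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Differentiable read: matrix-vector product of the memory with a
        # unit-normalized query, i.e. a sum of stored values weighted by
        # key-query similarity.
        q = query / (query.norm() + 1e-8)
        return self.M @ q

# Usage: store two associations and retrieve one of them.
mem = OuterProductMemory(key_dim=8, value_dim=4)
k1, v1 = torch.randn(8), torch.randn(4)
k2, v2 = torch.randn(8), torch.randn(4)
mem.write(k1, v1)
mem.write(k2, v2)
out = mem.read(k1)  # roughly v1, plus interference from the (k2, v2) entry

Because both operations are compositions of differentiable tensor operations, gradients flow through reads and writes, which is what allows such a memory to be trained end-to-end inside a larger network.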