Large and sparse feed-forward layers (S-FFN) such as Mixture-of-Experts (MoE)
have proven effective in scaling up Transformer model size for
\textit{pretraining} large language models. By activating only part of the FFN
parameters conditioned on the input, S-FFN improves generalization performance
while keeping training and inference costs (in FLOPs) fixed. In this work, we
analyze two major design choices of S-FFN: the memory block (a.k.a. expert)
size and the memory block selection method under a general conceptual framework
of sparse neural memory. Using this unified framework, we compare several S-FFN
architectures for language modeling and provide insights into their relative
efficacy and efficiency. We find that a simpler selection method,
\textbf{\texttt{Avg-K}}, which selects blocks through their mean aggregated
hidden states, achieves lower perplexity in language model pretraining than
existing MoE architectures, including the Switch Transformer (Fedus et al.,
2021) and HashLayer (Roller et al., 2021).
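One reading of this selection rule, given here only as an illustrative sketch rather than the paper's exact formulation, is that each memory block $b$ is summarized by the mean of its hidden (key) vectors and scored against the input representation $x$, with the top-$k$ scoring blocks activated:
\[
\bar{h}_b = \frac{1}{|B_b|} \sum_{i \in B_b} h_i,
\qquad
\mathcal{E}(x) = \operatorname{top-}k_b \left( x^\top \bar{h}_b \right),
\]
where $B_b$ denotes the index set of hidden vectors belonging to block $b$; the symbols $h_i$, $\bar{h}_b$, and $\mathcal{E}(x)$ are notation introduced here for illustration, not taken from the paper.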