Sparsely activated neural networks with conditional computation learn to
route their inputs through different "expert" subnetworks, providing a form of
modularity that densely activated models lack. Despite their possible benefits,
models with learned routing often underperform their parameter-matched densely
activated counterparts as well as models that use non-learned heuristic routing
strategies. In this paper, we hypothesize that these shortcomings stem from the
gradient estimation techniques used to train sparsely activated models that
make non-differentiable discrete routing decisions.
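As a concrete illustration (not taken from the paper), the straight-through
estimator is one such gradient estimation technique: it forwards a hard,
discrete routing decision but backpropagates through the soft router
probabilities, so the gradients only approximate the true objective. The
function below is a hypothetical PyTorch sketch of this trick.

```python
import torch
import torch.nn.functional as F

def straight_through_route(router_logits: torch.Tensor) -> torch.Tensor:
    """One-hot routing weights in the forward pass; soft gradients in the backward pass."""
    probs = F.softmax(router_logits, dim=-1)     # differentiable router probabilities
    index = probs.argmax(dim=-1, keepdim=True)   # non-differentiable discrete choice
    hard = torch.zeros_like(probs).scatter_(-1, index, 1.0)
    # Value equals `hard` (probs - probs.detach() is zero in the forward pass),
    # but gradients flow through `probs`, mismatching the discrete computation.
    return hard + probs - probs.detach()
```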
To address this issue, we introduce Soft Merging of Experts with Adaptive
Routing (SMEAR), which avoids discrete routing entirely by constructing a
single "merged" expert as a weighted average of all of the experts'
parameters. By routing activations through this single merged expert, SMEAR
incurs no significant increase in computational cost and enables standard
gradient-based training.
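A minimal sketch of this merging scheme, assuming each expert is a two-layer
feed-forward module with identically shaped parameters; the module name,
router input, and shapes below are illustrative assumptions, not the paper's
implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SMEARLayer(nn.Module):
    """Hypothetical layer: merge experts' parameters, then apply one merged expert."""

    def __init__(self, dim: int, hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        # Stack expert parameters so a weighted average is a single einsum.
        self.w1 = nn.Parameter(torch.randn(num_experts, dim, hidden) * 0.02)
        self.w2 = nn.Parameter(torch.randn(num_experts, hidden, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Routing weights are a softmax over experts,
        # here computed from mean-pooled activations (one plausible choice).
        weights = F.softmax(self.router(x.mean(dim=1)), dim=-1)  # (batch, E)
        # Build one merged expert per example as a weighted average of all
        # experts' parameters; every step stays differentiable.
        merged_w1 = torch.einsum("be,edh->bdh", weights, self.w1)
        merged_w2 = torch.einsum("be,ehd->bhd", weights, self.w2)
        # Route activations through the single merged expert.
        h = F.relu(torch.einsum("bsd,bdh->bsh", x, merged_w1))
        return torch.einsum("bsh,bhd->bsd", h, merged_w2)
```

Because the routing weights enter only through a differentiable parameter
average, the layer trains with ordinary backpropagation, at roughly the cost
of a single expert's forward pass plus the merge itself.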
We empirically validate that models using SMEAR outperform models that route
based on metadata or learn sparse routing through gradient estimation. Furthermore,
we provide qualitative analysis demonstrating that the experts learned via
SMEAR exhibit a significant amount of specialization. All of the code used in
our experiments is publicly available.