Modular Networks: Learning to Decompose Neural Computation
Scaling model capacity has been vital in the success of deep learning. For a
typical network, necessary compute resources and training time grow
dramatically with model size. Conditional computation is a promising way to
increase the number of parameters with a relatively small increase in
resources. We propose a training algorithm that flexibly chooses neural modules
based on the data to be processed. Both the decomposition and modules are
learned end-to-end. In contrast to existing approaches, training does not rely
on regularization to enforce diversity in module use. We apply modular networks
both to image recognition and language modeling tasks, where we achieve
superior performance compared to several baselines. Introspection reveals that
modules specialize in interpretable contexts.
Comment: NIPS 2018
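To make the idea of data-dependent module choice concrete, here is a minimal PyTorch sketch of a layer that routes each input through one of several small modules. The `ModularLayer` class, its linear controller, and the hard argmax routing are illustrative assumptions, not the paper's method; the paper learns the decomposition end-to-end with its own training algorithm rather than the plain gating shown here.

```python
import torch
import torch.nn as nn

class ModularLayer(nn.Module):
    """Routes each example to one of `num_modules` small MLPs.

    Simplified sketch: a controller scores the modules per example
    and the choice is made by argmax. Note the hard choice is not
    differentiable, so the controller would need a dedicated training
    procedure (the paper learns module selection end-to-end).
    """
    def __init__(self, dim: int, num_modules: int, hidden: int = 64):
        super().__init__()
        self.controller = nn.Linear(dim, num_modules)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_modules)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.controller(x)        # (batch, num_modules)
        choice = logits.argmax(dim=-1)     # hard module choice per example
        out = torch.zeros_like(x)
        for k, module in enumerate(self.experts):
            mask = choice == k
            if mask.any():                 # run each module only on its inputs
                out[mask] = module(x[mask])
        return out

layer = ModularLayer(dim=32, num_modules=4)
y = layer(torch.randn(8, 32))              # each row processed by one module
```

Because only the selected module runs per example, compute grows with the module size rather than with the total parameter count, which is the conditional-computation benefit the abstract describes.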