Hypernetworks, neural networks that predict the parameters of another neural
network, are powerful models that have been successfully used in diverse
applications from image generation to multi-task learning. Unfortunately,
existing hypernetworks are often challenging to train. Training typically
converges far more slowly than for non-hypernetwork models, and the rate of
convergence can be very sensitive to hyperparameter choices. In this work, we
identify a fundamental and previously unrecognized problem that contributes to
the challenge of training hypernetworks: a magnitude proportionality between
the inputs and outputs of the hypernetwork. We demonstrate both analytically
and empirically that this can lead to unstable optimization, thereby slowing
down convergence, and sometimes even preventing any learning. We present a
simple solution to this problem using a revised hypernetwork formulation that
we call Magnitude Invariant Parametrizations (MIP). We demonstrate the proposed
solution on several hypernetwork tasks, where it consistently stabilizes
training and achieves faster convergence. Furthermore, we perform a
comprehensive ablation study across choices of activation function,
normalization strategy, input dimensionality, and hypernetwork architecture,
and find that MIP improves training in all scenarios. We provide easy-to-use
code that can turn existing networks into MIP-based hypernetworks.

Comment: Source code at https://github.com/JJGO/hyperligh
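
Below is a minimal, purely illustrative sketch (in PyTorch) of the issue the abstract describes: a naive hypernetwork whose predicted parameters scale roughly in proportion to the magnitude of its input, alongside one possible magnitude-invariant style reparametrization that bounds the input and predicts a bounded offset around a learned base weight. This is an assumption-laden toy example, not the paper's exact MIP formulation and not the hyperlight library's API; all class, function, and parameter names here are hypothetical.

# Toy illustration only: not the authors' MIP method or the hyperlight API.
import torch
import torch.nn as nn

class NaiveHypernetwork(nn.Module):
    """Predicts a target layer's weight matrix directly from the input x."""
    def __init__(self, input_dim, target_in, target_out):
        super().__init__()
        self.head = nn.Linear(input_dim, target_in * target_out)
        self.target_shape = (target_out, target_in)

    def forward(self, x):
        # Output magnitude grows roughly with ||x||, which can make
        # optimization of the target network unstable.
        return self.head(x).view(self.target_shape)

class MagnitudeInvariantHypernetwork(nn.Module):
    """Bounded-input, offset-style parametrization (conceptual sketch)."""
    def __init__(self, input_dim, target_in, target_out):
        super().__init__()
        self.head = nn.Linear(input_dim, target_in * target_out)
        # Learned base weight; the hypernetwork predicts a bounded offset.
        self.base = nn.Parameter(torch.randn(target_out, target_in) * 0.02)
        self.scale = nn.Parameter(torch.tensor(0.1))

    def forward(self, x):
        bounded = torch.tanh(x)  # input magnitude no longer leaks into outputs
        offset = torch.tanh(self.head(bounded)).view(self.base.shape)
        return self.base + self.scale * offset

if __name__ == "__main__":
    x = torch.randn(4)
    for net in (NaiveHypernetwork(4, 8, 8), MagnitudeInvariantHypernetwork(4, 8, 8)):
        w_small, w_large = net(x), net(10 * x)
        # The naive variant's weight norm grows ~10x; the bounded variant's does not.
        print(type(net).__name__, w_small.norm().item(), w_large.norm().item())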