8 research outputs found
Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
We develop an approach for efficiently growing neural networks in which the
parameterization and optimization strategies are designed by considering
their effects on the training dynamics. Unlike existing growing methods, which follow
simple replication heuristics or utilize auxiliary gradient-based local
optimization, we craft a parameterization scheme which dynamically stabilizes
weight, activation, and gradient scaling as the architecture evolves, and
maintains the inference functionality of the network. To address the
optimization difficulty caused by the imbalanced training effort received by
subnetworks that fade in at different growth phases, we propose a learning rate
adaptation mechanism that rebalances the gradient contributions of these separate
subcomponents. Experimental results show that our method achieves comparable or
better accuracy than training large fixed-size models, while saving a
substantial portion of the original computation budget for training. We
demonstrate that these gains translate into real wall-clock training speedups.
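
The abstract describes both mechanisms only at a high level. As a rough illustration, the sketch below shows, under assumed details, a function-preserving widening step between two linear layers (new hidden units are initialized at a fan-in-scaled variance while their outgoing weights start at zero, so inference is unchanged) and a per-growth-phase learning-rate split implemented with optimizer parameter groups. The initialization rule and the concrete learning-rate values are placeholders, not the paper's actual variance-transfer or adaptation formulas.

```python
import torch
import torch.nn as nn


def grow_hidden(fc1: nn.Linear, fc2: nn.Linear, new_hidden: int):
    """Widen the hidden layer between fc1 and fc2 to `new_hidden` units
    while keeping the two-layer network's outputs unchanged."""
    old_hidden = fc1.out_features
    assert new_hidden >= old_hidden
    new_fc1 = nn.Linear(fc1.in_features, new_hidden)
    new_fc2 = nn.Linear(new_hidden, fc2.out_features)
    with torch.no_grad():
        # New hidden units receive fresh weights at a fan-in-scaled std so
        # their activations match the scale of the existing units
        # (an illustrative He-style choice, not the paper's exact rule).
        nn.init.normal_(new_fc1.weight, std=(2.0 / fc1.in_features) ** 0.5)
        nn.init.zeros_(new_fc1.bias)
        new_fc1.weight[:old_hidden] = fc1.weight  # keep trained rows
        new_fc1.bias[:old_hidden] = fc1.bias
        # Outgoing weights of the new units start at zero, so the network
        # computes exactly the same function immediately after growing.
        nn.init.zeros_(new_fc2.weight)
        new_fc2.weight[:, :old_hidden] = fc2.weight
        new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2


# Example: grow the hidden width of a small MLP from 64 to 96 units.
fc1, fc2 = nn.Linear(32, 64), nn.Linear(64, 10)
fc1, fc2 = grow_hidden(fc1, fc2, 96)

# Parameters added at a later growth phase have seen less training, so they
# get a larger step size through a separate optimizer parameter group; the
# concrete values below are placeholders for the paper's adaptation rule.
old_block = nn.Linear(128, 128)  # present since the first growth phase
new_block = nn.Linear(128, 128)  # faded in at the latest growth phase
optimizer = torch.optim.SGD(
    [
        {"params": old_block.parameters(), "lr": 0.05},
        {"params": new_block.parameters(), "lr": 0.2},
    ],
    momentum=0.9,
)
```

In this sketch, the function-preserving copy and the variance-scaled initialization stand in for the paper's parameterization scheme, while the two parameter groups stand in for its learning rate adaptation across subnetworks grown at different phases.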