We study two factors in neural network training: data parallelism and
sparsity. Here, data parallelism means processing training data in parallel
using distributed systems (or, equivalently, increasing the batch size) so
that training can be accelerated; by sparsity, we refer to pruning the
parameters of a neural network model so as to reduce computational and
memory costs.
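As a concrete illustration of this notion of sparsity, a minimal magnitude-pruning sketch is given below; the function name and the keep/drop threshold rule are generic assumptions for illustration, not the specific pruning procedure studied in the paper.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with smallest magnitude.

    A generic magnitude-pruning rule, used here only as an illustration.
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value; ties are also dropped.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

# Example: prune 90% of a random weight matrix.
w = np.random.default_rng(0).normal(size=(256, 256))
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(w_sparse) / w_sparse.size:.3f}")
```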
Despite their promising benefits, our understanding of their effects on
neural network training remains elusive. In this work, we first measure
these effects rigorously by conducting extensive experiments while tuning
all metaparameters involved in the optimization. As a result, we find,
across various workloads of data set, network model, and optimization
algorithm, that there exists a general scaling trend between batch size and
the number of training steps to convergence, characterizing the effect of
data parallelism, and, furthermore, an increased difficulty of training
under sparsity.
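To make this measurement concrete, the sketch below records steps-to-convergence as a function of batch size on a toy linear-regression workload; the problem, learning rate, and loss target are illustrative assumptions, and, unlike the experiments above, the metaparameters are held fixed across batch sizes for brevity rather than tuned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy workload: mini-batch SGD on linear regression with Gaussian label noise.
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def steps_to_target(batch_size: int, lr: float = 0.05,
                    target_loss: float = 0.02, max_steps: int = 50_000) -> int:
    """Run mini-batch SGD; return the step count at which target_loss is reached."""
    w = np.zeros(d)
    for step in range(1, max_steps + 1):
        idx = rng.integers(0, n, size=batch_size)
        w -= lr * X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        # Check full-data loss periodically against the convergence target.
        if step % 50 == 0 and np.mean((X @ w - y) ** 2) <= target_loss:
            return step
    return max_steps

for b in (8, 32, 128, 512):
    print(f"batch size {b:4d}: {steps_to_target(b):6d} steps to target loss")
```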
Then, we develop a theoretical analysis based on the convergence properties
of stochastic gradient methods and the smoothness of the optimization
landscape, which explains the observed phenomena precisely and generally,
establishing a better account of the effects of data parallelism and
sparsity on neural network training.
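The flavor of such an analysis can be indicated by the textbook descent lemma for mini-batch SGD on an $L$-smooth objective $f$, with step size $\eta$, batch size $B$, and gradient-noise variance $\sigma^2$ (a standard bound, not the paper's exact statement):

$$\mathbb{E}\,[f(x_{t+1})] \;\le\; f(x_t) \;-\; \eta\Big(1 - \frac{L\eta}{2}\Big)\,\lVert \nabla f(x_t) \rVert^2 \;+\; \frac{L\eta^2\sigma^2}{2B}.$$

The noise term decays as $1/B$, so increasing the batch size at first reduces the number of steps needed to reach a target, until the batch-size-independent descent term dominates and returns diminish, which is qualitatively consistent with the scaling trend described above.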