Batch normalization (BN) has become a crucial component across diverse deep
neural networks. A network with BN is invariant to positive linear re-scaling of its
weights, so there exist infinitely many functionally equivalent networks whose weights
differ only in scale. However, optimizing these equivalent networks with a first-order
method such as stochastic gradient descent converges to different local optima, because
their gradients differ throughout training. To alleviate this issue, we propose a
quotient manifold, the \emph{PSI manifold}, on which all equivalent weights of a network
with BN are identified as a single element. We then construct gradient descent and
stochastic gradient descent on the PSI manifold. The two algorithms guarantee that every
group of weights equivalent under positive re-scaling converges to the same group of
equivalent optima. Furthermore, we establish the convergence rates of the proposed
algorithms on the PSI manifold and show that they accelerate training compared with
their counterparts on the Euclidean weight space. Empirical studies show that our
algorithms consistently achieve better performance across various experimental settings.
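For illustration, the positive scale-invariance property that motivates this construction can be sketched for a single neuron; the symbols $\mathrm{BN}$, $w$, $x$, $\mu$, $\sigma$, and $\alpha$ below are illustrative notation rather than definitions fixed by the abstract:
\[
\mathrm{BN}(\alpha w^{\top} x)
\;=\; \frac{\alpha w^{\top} x - \mu(\alpha w^{\top} x)}{\sigma(\alpha w^{\top} x)}
\;=\; \frac{\alpha\left(w^{\top} x - \mu(w^{\top} x)\right)}{\alpha\,\sigma(w^{\top} x)}
\;=\; \mathrm{BN}(w^{\top} x), \qquad \alpha > 0,
\]
where $\mu$ and $\sigma$ denote the mini-batch mean and standard deviation of the pre-activations. Hence $w$ and $\alpha w$ define the same function, and the PSI manifold identifies all such re-scaled weights with a single point.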