A New Accelerated Stochastic Gradient Method with Momentum
In this paper, we propose a novel accelerated stochastic gradient method with
momentum, in which the momentum term is a weighted average of previous gradients.
The weights decay in inverse proportion to the iteration number. Stochastic
gradient descent with momentum (Sgdm) uses weights that decay exponentially
with the iteration number to generate the momentum term. Building on exponentially
decaying weights, variants of Sgdm with carefully designed and more complicated
update rules have been proposed to achieve better performance. The momentum update
rule of our method is as simple as that of Sgdm. We provide a theoretical
convergence analysis for our method, which shows that both exponentially decaying
weights and our inversely proportionally decaying weights confine the variance of
the update direction of the parameters being optimized to a bounded region.
Experimental results show that our method works well on practical problems,
outperforms Sgdm, and outperforms Adam on convolutional neural networks.

Comment: 10 pages, 6 figures