Adam is a commonly used stochastic optimization algorithm in machine
learning. However, its convergence is still not fully understood, especially in
the non-convex setting. This paper focuses on exploring hyperparameter settings
for the convergence of vanilla Adam and tackling the challenges of non-ergodic
convergence related to practical application. The primary contributions are
summarized as follows: firstly, we introduce precise definitions of ergodic and
non-ergodic convergence (illustrated below), which cover nearly all forms of
convergence for stochastic optimization algorithms. In addition, we emphasize the
advantages of non-ergodic convergence over ergodic convergence. Secondly, we establish a
weaker sufficient condition for the ergodic convergence guarantee of Adam,
allowing a more relaxed choice of hyperparameters. On this basis, we achieve
the almost sure ergodic convergence rate of Adam, which is arbitrarily close to
o(1/√K). More importantly, we prove, for the first time, that the last
iterate of Adam converges to a stationary point for non-convex objectives.
Finally, we obtain the non-ergodic convergence rate of O(1/K) for function
values under the Polyak-Łojasiewicz (PL) condition. These findings lay a solid
theoretical foundation for applying Adam to non-convex stochastic optimization
problems.
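
To make the terminology above concrete, the following is a minimal sketch of the standard criteria; the notation is not fixed in the abstract (f is the smooth non-convex objective, x_k the k-th Adam iterate, K the iteration budget, f^* the infimum of f, and \mu > 0 the PL modulus), and the paper's exact definitions may differ in detail. Ergodic convergence bounds an aggregate over the trajectory, e.g.
  \min_{1 \le k \le K} \mathbb{E}\,\|\nabla f(x_k)\|^2 \;\to\; 0
  \quad\text{or}\quad
  \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}\,\|\nabla f(x_k)\|^2 \;\to\; 0,
whereas non-ergodic (last-iterate) convergence controls the iterate sequence itself,
  \lim_{k \to \infty} \|\nabla f(x_k)\| = 0 \quad \text{almost surely},
which matches practice, since the final iterate is the model that is actually returned. The PL condition referred to above is the inequality
  \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\bigl(f(x) - f^{*}\bigr) \quad \text{for all } x,
under which last-iterate rates for the function-value gap f(x_K) - f^* can be stated.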