In this paper we consider online mirror descent (OMD) algorithms, a class of
scalable online learning algorithms that exploit the geometric structure of data
through mirror maps. Necessary and sufficient conditions are presented in terms
of the step size sequence $\{\eta_t\}_t$ for the convergence of an OMD
algorithm with respect to the expected Bregman distance induced by the mirror
map. The condition is $\lim_{t\to\infty}\eta_t = 0$ and $\sum_{t=1}^{\infty}\eta_t = \infty$ in the case of positive variances. It
reduces to $\sum_{t=1}^{\infty}\eta_t = \infty$ in the case of zero variances, for
which linear convergence may be achieved by taking a constant step size
sequence. A sufficient condition for almost sure convergence is also given.
We establish tight error bounds under mild conditions on the mirror map, the
loss function, and the regularizer. Our results are obtained through a novel
analysis of the one-step progress of the OMD algorithm, using the smoothness and
strong convexity of the mirror map and the loss function.
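
For concreteness, here is a minimal sketch of the generic OMD update that such step size conditions govern, assuming a mirror map $\Phi$ with convex conjugate $\Phi^*$, a loss $f_t$ at round $t$, and a (sub)gradient $g_t$; this notation is illustrative and not taken from the paper itself:

```latex
% A generic online mirror descent step (illustrative notation, not the
% paper's exact formulation): map the iterate to the dual space via the
% mirror map \Phi, take a gradient step with step size \eta_t, and map back.
\begin{align*}
  x_{t+1} &= \nabla \Phi^{*}\bigl(\nabla \Phi(x_t) - \eta_t g_t\bigr),
  \qquad g_t \in \partial f_t(x_t).
\intertext{Equivalently, written as a Bregman-proximal step:}
  x_{t+1} &= \operatorname*{arg\,min}_{x}\;
  \bigl\{ \eta_t \langle g_t, x \rangle + D_{\Phi}(x, x_t) \bigr\},
\intertext{where $D_{\Phi}$ is the Bregman distance induced by $\Phi$:}
  D_{\Phi}(x, y) &= \Phi(x) - \Phi(y) - \langle \nabla \Phi(y),\, x - y \rangle.
\end{align*}
```

The convergence criteria above then concern how fast the step sizes $\eta_t$ in this recursion may decay: slowly enough that their sum diverges, and (when gradient noise has positive variance) fast enough that they vanish in the limit.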