In this paper, we consider online proximal mirror descent for solving
time-varying composite optimization problems. In various applications, the
algorithm naturally involves errors in the gradient and proximal operator.
We obtain sharp estimates on the dynamic regret of the algorithm when the
regular part of the cost is convex and smooth. When the Bregman distance is
the Euclidean distance, our result improves on the previous work in two
ways: (i) we establish a sharper regret bound, in the sense that our estimate
does not involve the O(T) term appearing in that work; (ii) we obtain the
result when the domain is the whole space R^n, whereas the previous work
applies only to bounded
domains. We also provide numerical tests for problems involving errors in
the gradient and proximal operator.

Comment: 16 pages, 5 figures
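The inexact scheme the abstract describes can be sketched in the Euclidean case, where mirror descent reduces to proximal gradient descent and both the gradient and the proximal step are perturbed. This is a minimal illustration, not the paper's algorithm: the function names, the additive-noise error model, and the least-squares-plus-l1 example are all assumptions made for the sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (exact prox of the nonsmooth part).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_online_prox_gd(x0, grads, prox, step, T, grad_err=0.0,
                           prox_err=0.0, seed=0):
    """Online proximal gradient descent with inexact oracles (illustrative).

    grads(t, x): gradient of the smooth part of the time-t cost.
    prox(v, eta): exact proximal operator of the nonsmooth part.
    grad_err, prox_err: magnitudes of additive Gaussian perturbations,
    modeling the gradient and proximal errors (an assumed error model).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t in range(T):
        # Inexact gradient of the smooth part of the time-t cost.
        g = grads(t, x) + grad_err * rng.standard_normal(x.shape)
        # Inexact proximal step (Euclidean Bregman distance).
        x = prox(x - step * g, step) + prox_err * rng.standard_normal(x.shape)
        iterates.append(x.copy())
    return iterates

# Demo on a static instance of f_t(x) = 0.5 * ||x - b||^2 + lam * ||x||_1,
# whose minimizer is soft_threshold(b, lam).
b, lam = np.array([2.0, -3.0]), 1.0
grads = lambda t, x: x - b
prox = lambda v, eta: soft_threshold(v, eta * lam)
exact_run = inexact_online_prox_gd(np.zeros(2), grads, prox, step=0.5, T=60)
noisy_run = inexact_online_prox_gd(np.zeros(2), grads, prox, step=0.5, T=60,
                                   grad_err=0.01, prox_err=0.01)
```

With zero errors the iterates converge to soft_threshold(b, lam) = (1, -2); with small errors they remain in a neighborhood of it, which is the regime the regret bounds quantify.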