The asymptotic behavior of the stochastic gradient algorithm with a biased
gradient estimator is analyzed. Relying on arguments from dynamical systems
theory (chain recurrence) and differential geometry (the Yomdin theorem and
the Łojasiewicz inequality), tight bounds on the asymptotic bias of the
iterates generated by such an algorithm are derived. The obtained results hold
under mild conditions and cover a broad class of high-dimensional nonlinear
algorithms. Using these results, the asymptotic properties of
policy-gradient (reinforcement) learning and adaptive population Monte Carlo
sampling are studied. The same results are also used to analyze the
asymptotic behavior of recursive maximum split-likelihood estimation in
hidden Markov models.

Comment: arXiv admin note: text overlap with arXiv:0907.102
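As a minimal illustration of the phenomenon the abstract describes (not an implementation of the paper's methods), the sketch below runs a scalar stochastic gradient recursion on f(x) = x²/2 with a gradient estimator carrying a constant bias b; the function name, step-size schedule, and bias model are assumptions chosen for simplicity. With diminishing steps the iterates settle near x = -b rather than at the true minimizer x* = 0, so the asymptotic offset of the iterates scales with the estimator's bias.

```python
import random

def biased_sgd(bias, steps=20000, seed=0):
    """Stochastic gradient descent on f(x) = x^2 / 2 (true gradient: x),
    using a noisy gradient estimator with a constant additive bias."""
    rng = random.Random(seed)
    x = 5.0  # arbitrary starting point
    for n in range(1, steps + 1):
        gamma = 1.0 / n                     # diminishing step size
        noise = rng.gauss(0.0, 1.0)         # zero-mean estimation noise
        grad_estimate = x + bias + noise    # biased estimate of grad f(x) = x
        x -= gamma * grad_estimate
    return x

x_unbiased = biased_sgd(bias=0.0)  # converges near the minimizer x* = 0
x_biased = biased_sgd(bias=0.5)    # converges near x = -0.5, not x* = 0
```

The mean dynamics of the biased recursion follow the ODE dx/dt = -(x + b), whose equilibrium sits at x = -b; the paper's results concern tight bounds on such limiting offsets in far more general, high-dimensional nonlinear settings.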