When applying aggregating strategies to Prediction with Expert Advice, the
learning rate must be adaptively tuned. The natural choice of
sqrt(complexity/current loss) renders the analysis of Weighted Majority
variants quite complicated; in particular, for arbitrary weights no results
have been proven so far. The analysis of the alternative "Follow the
Perturbed Leader" (FPL) algorithm of Kalai & Vempala (2003), which is based on
Hannan's algorithm, is easier. We derive loss bounds for an adaptive learning
rate, both for finite expert classes with uniform weights and for countable
expert classes with arbitrary weights. For the former setup, our loss bounds match the
best known results so far, while for the latter our results are new.
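As a rough illustration of the setting, the following sketch implements an FPL-style learner with an adaptively tuned rate of order sqrt(complexity/current loss), assuming n experts with uniform weights and instantaneous losses in [0, 1]. The exponential perturbation scale, the loss model, and all names are illustrative assumptions, not the paper's exact construction or bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def fpl_choose(cum_loss, eta, rng):
    """Follow the perturbed leader: pick the expert whose cumulative loss,
    reduced by an exponential perturbation of scale 1/eta, is smallest."""
    perturbation = rng.exponential(scale=1.0 / eta, size=cum_loss.shape)
    return int(np.argmin(cum_loss - perturbation))

n_experts = 10
T = 1000
complexity = np.log(n_experts)   # k = ln(n) for uniform weights (assumed)
cum_loss = np.zeros(n_experts)   # cumulative losses of the experts
learner_loss = 0.0               # cumulative loss of the FPL learner

for t in range(T):
    # Adaptive rate ~ sqrt(complexity / current loss), kept finite early on.
    eta = np.sqrt(complexity / max(learner_loss, 1.0))
    expert = fpl_choose(cum_loss, eta, rng)
    losses = rng.random(n_experts)   # hypothetical losses in [0, 1] this round
    learner_loss += losses[expert]
    cum_loss += losses
```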