We propose a Randomized Progressive Training algorithm (RPT) -- a stochastic
proxy for the well-known Progressive Training method (PT) (Karras et al.,
2017). Originally designed to train GANs (Goodfellow et al., 2014), PT was
proposed as a heuristic, with no convergence analysis even for the simplest
objective functions. In contrast, to the best of our knowledge, RPT is the
first PT-type algorithm with rigorous and sound theoretical guarantees for
general smooth objective functions. We cast our method into the established
framework of Randomized Coordinate Descent (RCD) (Nesterov, 2012; Richt\'arik &
Tak\'a\v{c}, 2014), for which (as a by-product of our investigations) we also
propose a novel, simple, and general convergence analysis covering
strongly convex, convex, and nonconvex objectives. We then use this framework to
establish a convergence theory for RPT. Finally, we validate the effectiveness
of our method through extensive computational experiments.
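To convey the flavor of the RCD connection, the following is an illustrative sketch, not the authors' exact method: a randomized block-coordinate scheme on a smooth quadratic in which each step samples a random "progress level" and updates only the corresponding prefix of coordinates, mimicking how progressive training optimizes a growing prefix of the model. All names, the quadratic objective, and the prefix-sampling rule are our own assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of an RCD/RPT-style update (our assumption, not the
# paper's algorithm): minimize the smooth strongly convex quadratic
#   f(x) = 0.5 * x^T A x - b^T x
# by repeatedly sampling a random prefix length k (the "progress level")
# and taking a gradient step on the first k coordinates only.

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)     # exact minimizer, for comparison

x = np.zeros(n)
L = np.linalg.eigvalsh(A).max()    # global smoothness constant of f
for _ in range(5000):
    k = rng.integers(1, n + 1)     # random progress level: prefix length
    g = A @ x - b                  # full gradient of f at x
    x[:k] -= (1.0 / L) * g[:k]     # update only the first k coordinates

assert np.linalg.norm(x - x_star) < 1e-6
```

With the conservative step size 1/L, every partial step is non-increasing in f, and full-gradient steps (sampled with probability 1/n) drive the iterate to the minimizer; this is the mechanism by which RCD-type analyses yield rates for strongly convex, convex, and nonconvex smooth objectives.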