Optimal prediction of Markov chains with and without spectral gap
We study the following learning problem with dependent data: Observing a
trajectory of length $n$ from a stationary Markov chain with $k$ states, the
goal is to predict the next state. For $3 \leq k \leq O(\sqrt{n})$, using
techniques from universal compression, the optimal prediction risk in
Kullback-Leibler divergence is shown to be $\Theta(\frac{k^2}{n}\log\frac{n}{k^2})$,
in contrast to the optimal rate of $\Theta(\frac{\log\log n}{n})$ for $k=2$
previously shown in Falahatgar et al., 2016. These rates, slower than the
parametric rate of $O(\frac{k^2}{n})$, can be attributed to the memory in the
data, as the spectral gap of the Markov chain can be arbitrarily small. To
quantify the memory effect, we study irreducible reversible chains with a
prescribed spectral gap. In addition to characterizing the optimal prediction
risk for two states, we show that, as long as the spectral gap is not
excessively small, the prediction risk in the Markov model is $O(\frac{k^2}{n})$,
which coincides with that of an iid model with the same number of parameters.

Comment: 52 pages
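For concreteness, the Kullback-Leibler prediction risk referred to above is the standard quantity sketched below; the notation ($M$, $\widehat{M}$, $X_1,\dots,X_n$) is introduced here for illustration and is not taken verbatim from the paper.

\[
  \mathrm{Risk}_n \;=\; \inf_{\widehat{M}} \; \mathbb{E}\!\left[
    D\Big( M(\cdot \mid X_n) \,\Big\|\, \widehat{M}(\cdot \mid X_1,\dots,X_n) \Big)
  \right],
\]

where $M$ denotes the true transition kernel of the chain, $\widehat{M}$ is any predictor of the next-state distribution computed from the observed trajectory, and the infimum is over all such predictors. The rates quoted in the abstract describe how this minimax risk scales with the trajectory length $n$ and the number of states $k$.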