We address the fundamental limits of learning unknown parameters of any
stochastic process from time-series data, and discover exact closed-form
expressions for how optimal inference scales with observation length. Given a
parametrized class of candidate models, the inverse of the Fisher information
of observed sequence probabilities lower-bounds the variance in model
estimation from finite data (the Cramér-Rao bound). As the sequence length
increases, this minimal variance scales inversely with the length, with
constant coefficient set by the Fisher information rate. We discover a simple
closed-form expression for this information rate, even in the case of infinite
Markov order. We furthermore
obtain the exact analytic lower bound on model variance from the
observation-induced metadynamic among belief states. We discover ephemeral,
exponential, and more general modes of convergence to the asymptotic
information rate. Surprisingly, the finite-length (myopic) information rate
converges to the asymptotic Fisher information rate with exactly the same
relaxation timescales that govern the convergence of the myopic entropy rate
to the process's Shannon entropy rate. We illustrate these results with a
sequence of examples that highlight qualitatively distinct features of
stochastic processes that shape optimal learning.