We consider an agent interacting with an environment in a single stream of
actions, observations, and rewards, with no reset. This process is not assumed
to be a Markov Decision Process (MDP). Rather, the agent has several
representations (mapping histories of past interactions to a discrete state
space) of the environment with unknown dynamics, only some of which result in
an MDP. The goal is to minimize the average regret criterion against an agent
who knows an MDP representation giving the highest optimal reward, and acts
optimally in it. Recent regret bounds for this setting are of order
$O(T^{2/3})$, with an additive term that is constant in $T$ yet exponential in some
characteristics of the optimal MDP. We propose an algorithm whose regret after
$T$ time steps is $O(\sqrt{T})$, with all constants reasonably small. This is
optimal in $T$, since $O(\sqrt{T})$ is the optimal regret in the setting of
learning in a (single discrete) MDP.
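
As a point of reference, one common way to formalize the regret criterion described above is the following minimal sketch; the symbols $\rho^*$ and $r_t$, and this specific form of the definition, are assumptions for illustration rather than taken from the text:
\[
  \Delta(T) \;=\; T\,\rho^{*} \;-\; \sum_{t=1}^{T} r_t ,
\]
where $\rho^{*}$ denotes the optimal average reward achievable under the best MDP representation and $r_t$ is the reward the learning agent receives at time step $t$; the bounds above are then statements about how $\Delta(T)$ grows with $T$.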