Learning Markov decision processes (MDPs) in an adversarial environment is a
challenging problem. The problem becomes even harder with function
approximation, since the underlying structures of the loss function and
transition kernel are especially difficult to estimate in a varying
environment. In fact, the state-of-the-art result for linear adversarial MDPs
achieves a regret of $\widetilde{O}(K^{6/7})$, where $K$ denotes the number of
episodes, which leaves considerable room for improvement. In this paper, we
investigate the problem from a new view, which reduces the linear MDP to
linear optimization by carefully constructing the feature maps of the bandit
arms in the linear optimization problem. Under an exploratory assumption, this
new technique yields an improved regret bound of $\widetilde{O}(K^{4/5})$ for
linear adversarial MDPs without access to a transition simulator. The new view
may be of independent interest for other MDP problems that possess a linear
structure.
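To make the reduction concrete, the display below is a minimal sketch under
the standard linear MDP loss parameterization; the horizon $H$, per-step
feature map $\varphi$, loss vectors $\theta_k$, and the arm feature
$\phi(\pi)$ are illustrative notation, not fixed by the abstract.

% Sketch of the policies-as-arms reduction (requires amsmath; illustrative
% notation, not necessarily the paper's exact construction).
% Assume each per-step loss is linear in a known feature map:
%   \ell_k(s, a) = \langle \varphi(s, a), \theta_k \rangle.
\[
  V_k(\pi)
  \;=\; \mathbb{E}^{\pi}\!\Big[\sum_{h=1}^{H} \ell_k(s_h, a_h)\Big]
  \;=\; \big\langle \phi(\pi),\, \theta_k \big\rangle,
  \qquad
  \phi(\pi) \;:=\; \mathbb{E}^{\pi}\!\Big[\sum_{h=1}^{H} \varphi(s_h, a_h)\Big].
\]
% Each policy \pi thus corresponds to a fixed arm with feature \phi(\pi), and
% the episodic loss is linear in that feature, so an adversarial linear
% optimization (linear bandit) algorithm can in principle be run over the arm
% set \{\phi(\pi)\}.

The subtlety is that $\phi(\pi)$ depends on the unknown transition kernel and
cannot be computed directly; estimating it without a transition simulator is
presumably where the exploratory assumption enters.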