Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning
Exogenous state variables and rewards can slow down reinforcement learning by
injecting uncontrolled variation into the reward signal. We formalize exogenous
state variables and rewards and identify conditions under which an MDP with
exogenous state can be decomposed into an exogenous Markov Reward Process
involving only the exogenous state and reward and an endogenous Markov Decision
Process defined with respect to only the endogenous rewards. We also derive a
variance-covariance condition under which Monte Carlo policy evaluation on the
endogenous MDP is accelerated compared to using the full MDP. Similar speedups
are likely to carry over to all RL algorithms. We develop two algorithms for
discovering the exogenous variables and test them on several MDPs. Results show
that the algorithms are practical and can significantly speed up reinforcement
learning.

Comment: To appear at ICML 2018
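As a rough sketch of the variance argument (using assumed notation not taken from the abstract: $G$ for the return of the full MDP, and $G_{\mathrm{end}}$, $G_{\mathrm{exo}}$ for the returns computed from the endogenous and exogenous rewards alone), an additive reward decomposition gives $G = G_{\mathrm{end}} + G_{\mathrm{exo}}$, so

$$\mathrm{Var}(G) = \mathrm{Var}(G_{\mathrm{end}}) + \mathrm{Var}(G_{\mathrm{exo}}) + 2\,\mathrm{Cov}(G_{\mathrm{end}}, G_{\mathrm{exo}}).$$

Monte Carlo policy evaluation on the endogenous MDP therefore sees lower return variance whenever $\mathrm{Var}(G_{\mathrm{exo}}) + 2\,\mathrm{Cov}(G_{\mathrm{end}}, G_{\mathrm{exo}}) > 0$; the variance-covariance condition mentioned in the abstract is of this flavor.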