Reinforcement learning policy evaluation problems are often modeled as finite-horizon MDPs or as discounted/average-reward infinite-horizon MDPs. In this paper, we study undiscounted off-policy policy evaluation for absorbing MDPs. Given a dataset consisting of i.i.d. episodes truncated at a given level, we propose an algorithm, termed MWLA, that directly estimates the expected return via the importance ratio of state-action occupancy measures.
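To fix ideas, here is a hedged sketch of the occupancy-ratio identity underlying such estimators (the notation is ours, not taken from the abstract): writing $d_\pi$ and $d_\mu$ for the undiscounted state-action occupancy measures of the target and behavior policies, which are finite for absorbing MDPs, the expected return of $\pi$ satisfies
\[
  \rho(\pi) \;=\; \sum_{s,a} d_\pi(s,a)\, r(s,a)
            \;=\; \sum_{s,a} d_\mu(s,a)\, w(s,a)\, r(s,a),
  \qquad w(s,a) \;:=\; \frac{d_\pi(s,a)}{d_\mu(s,a)},
\]
so an estimate of the ratio $w$ learned from behavior data yields a plug-in estimate of the expected return.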
A bound on the mean squared error (MSE) of the MWLA estimator is established, and the dependence of the statistical error on the data size and the truncation level is analyzed. In
an episodic taxi environment, computational experiments illustrate the
performance of the MWLA algorithm.