Reinforcement Learning aims to identify and evaluate efficient control
policies from data. In many real-world applications, the learner is not allowed
to experiment and cannot gather data in an online manner (this is the case when
experimenting is expensive, risky, or unethical). For such applications, the
reward of a given policy (the target policy) must be estimated using historical
data gathered under a different policy (the behavior policy). Most methods for
this learning task, referred to as Off-Policy Evaluation (OPE), do not come
with accuracy and certainty guarantees. We present a novel OPE method based on
Conformal Prediction that outputs an interval containing the true reward of the
target policy with a prescribed level of certainty. The main challenge in OPE
stems from the distribution shift induced by the discrepancy between the target
and behavior policies. We propose and empirically evaluate several ways
to deal with this shift. Some of these methods yield conformalized intervals
that are shorter than those produced by existing approaches, while maintaining
the same certainty level.
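
To make the construction concrete, the following is a minimal, hypothetical sketch of weighted split conformal prediction in a one-step contextual-bandit version of this setting, where importance weights of the form target_prob/behavior_prob account for the policy-induced distribution shift. The synthetic data, the linear reward model, the target policy, and all parameter choices are illustrative assumptions for this demo, not the specific method evaluated in the paper.

```python
# Illustrative sketch: weighted split conformal interval for the reward of a
# target policy, using data logged under a behavior policy. All quantities
# below (policies, reward model, alpha) are hypothetical choices for the demo.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged data: contexts x, binary actions a drawn from the behavior
# policy, and observed rewards r.
n = 2000
x = rng.normal(size=n)
behavior_prob = 0.5 * np.ones(n)              # behavior policy: uniform over {0, 1}
a = rng.binomial(1, behavior_prob)
r = x * a + rng.normal(scale=0.3, size=n)     # reward depends on context and action

def target_prob(x, a):
    """Hypothetical target policy: prefers action 1 when x > 0."""
    p1 = np.where(x > 0, 0.9, 0.1)
    return np.where(a == 1, p1, 1.0 - p1)

# Importance weights correct for the shift between behavior and target policies.
w = target_prob(x, a) / behavior_prob

# Split the data: fit a reward model on one half, calibrate on the other.
idx = rng.permutation(n)
train, cal = idx[: n // 2], idx[n // 2:]

def features(x, a):
    return np.column_stack([x, a, x * a])

coef, *_ = np.linalg.lstsq(features(x[train], a[train]), r[train], rcond=None)

def predict(x, a):
    return features(x, a) @ coef

# Nonconformity scores on the calibration half.
scores = np.abs(r[cal] - predict(x[cal], a[cal]))
w_cal = w[cal]

def weighted_quantile(scores, weights, w_test, alpha):
    """(1 - alpha) quantile of the weighted score distribution, with the test
    point's weight placed at +infinity (standard weighted-conformal device)."""
    order = np.argsort(scores)
    s, p = scores[order], weights[order]
    p = np.append(p, w_test)                  # mass reserved for the test point
    p = p / p.sum()
    cdf = np.cumsum(p[:-1])                   # cumulative mass of finite scores
    k = np.searchsorted(cdf, 1.0 - alpha)
    return s[k] if k < len(s) else np.inf    # interval may be infinite

# Interval for the reward at a new context, with the action drawn from the
# target policy.
x_new, a_new = np.array([1.2]), np.array([1])
w_new = target_prob(x_new, a_new)[0] / 0.5
q = weighted_quantile(scores, w_cal, w_new, alpha=0.1)
mu = predict(x_new, a_new)[0]
print(f"90% conformal interval for the reward: [{mu - q:.2f}, {mu + q:.2f}]")
```

Placing the test point's weight at infinity in the weighted quantile is the usual conservative device that preserves the prescribed coverage level under the reweighted exchangeability assumption; when the importance weights are highly skewed, the quantile can become infinite, which reflects a genuine lack of information rather than a bug in the sketch.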