As Reinforcement Learning (RL) agents are increasingly employed in diverse
decision-making problems driven by reward preferences, it becomes important to
ensure that the policies these frameworks learn, which map observations to a
probability distribution over possible actions, are explainable. However,
there is little to no work on systematically understanding these complex
policies in a contrastive manner, i.e., identifying what minimal changes to a
policy would improve or worsen its performance to a desired level. In this work, we
present COUNTERPOL, the first framework to analyze RL policies using
counterfactual explanations in the form of minimal changes to the policy that
lead to the desired outcome. We do so by incorporating counterfactuals into
supervised learning in RL, with the target outcome regulated by the desired
return. We establish a theoretical connection between COUNTERPOL and widely
used trust region-based policy optimization methods in RL. Extensive empirical
analysis shows the efficacy of COUNTERPOL in generating explanations for
(un)learning skills while staying close to the original policy. Our results on
five different RL environments with diverse state and action spaces demonstrate
the utility of counterfactual explanations, paving the way for new frontiers in
designing and developing counterfactual policies.

Comment: ICML Workshop on Counterfactuals in Minds and Machines, 2023
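For intuition, the counterfactual policies described above can be read as solutions to a constrained optimization problem: find the smallest change to the original policy that attains a target return. The display below is a minimal illustrative formulation under that reading, not the paper's exact objective; the divergence $D$, return functional $J$, and target return $R_{\mathrm{target}}$ are notational assumptions introduced here for exposition:

\[
\pi^{\mathrm{cf}} \;=\; \arg\min_{\pi'} \; D\!\left(\pi' \,\|\, \pi\right) \quad \text{s.t.} \quad J(\pi') \,\ge\, R_{\mathrm{target}},
\]

where $\pi$ is the policy being explained, $\pi^{\mathrm{cf}}$ is the counterfactual policy, $D$ is a divergence between policies (e.g., KL), and $J(\pi')$ is the expected return of $\pi'$. A Lagrangian relaxation of such a problem, which trades off return against divergence from $\pi$, resembles trust region-based policy updates, consistent with the connection to trust region methods noted above.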