Learn to Interpret Atari Agents
Deep Reinforcement Learning (DeepRL) agents surpass human-level performance
in a multitude of tasks. However, the direct mapping from states to actions
makes it hard to interpret the rationale behind the decision making of agents.
In contrast to previous a-posteriori methods of visualizing DeepRL policies, we
propose an end-to-end trainable framework based on Rainbow, a representative
Deep Q-Network (DQN) agent. Our method automatically learns important regions
in the input domain, which enables characterizations of the decision making and
interpretations for non-intuitive behaviors. Hence we name it Region Sensitive
Rainbow (RS-Rainbow). RS-Rainbow uses a simple yet effective mechanism to
incorporate visualization ability into the learning model, not only improving
model interpretability but also improving performance. Extensive
experiments on the challenging Atari 2600 platform demonstrate the
superiority of RS-Rainbow. In particular, our agent achieves state-of-the-art
performance using just 25% of the training frames. Demonstrations and code are available at
https://github.com/yz93/Learn-to-Interpret-Atari-Agents
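The abstract describes the region-sensitive mechanism only at a high level; one plausible reading is a learned soft spatial attention over convolutional features, applied before the Q-value head. The sketch below illustrates that idea only — the scoring weights `w` and the single softmax mask are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

def region_attention(features, w):
    """Soft spatial attention over a conv feature map.

    features: (H, W, C) convolutional features for one frame.
    w: (C,) hypothetical scoring weights that rate each region's importance.
    Returns features re-weighted by a softmax mask over spatial positions,
    so 'important regions' both shape the Q-values and can be visualized.
    """
    scores = features @ w                 # (H, W) per-region importance scores
    mask = np.exp(scores - scores.max())  # numerically stable softmax...
    mask /= mask.sum()                    # ...normalized over all positions
    return features * mask[..., None]     # re-weighted features, same shape
```

Because the mask is produced inside the forward pass, it is trained end-to-end with the agent, which is what distinguishes this style of interpretability from a-posteriori saliency methods.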
Generation of Policy-Level Explanations for Reinforcement Learning
Though reinforcement learning has greatly benefited from the incorporation of
neural networks, the inability to verify the correctness of such systems limits
their use. Current work in explainable deep learning focuses on explaining only
a single decision in terms of input features, making it unsuitable for
explaining a sequence of decisions. To address this need, we introduce
Abstracted Policy Graphs, which are Markov chains of abstract states. This
representation concisely summarizes a policy so that individual decisions can
be explained in the context of expected future transitions. Additionally, we
propose a method to generate these Abstracted Policy Graphs for deterministic
policies given a learned value function and a set of observed transitions,
potentially off-policy transitions used during training. Since no restrictions
are placed on how the value function is generated, our method is compatible
with many existing reinforcement learning methods. We prove that the worst-case
time complexity of our method is quadratic in the number of features and linear
in the number of provided transitions. By applying
our method to a family of domains, we show that our method scales well in
practice and produces Abstracted Policy Graphs which reliably capture
relationships within these domains.
Comment: Accepted to Proceedings of the Thirty-Third AAAI Conference on
Artificial Intelligence (2019).
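At its core, an Abstracted Policy Graph is a Markov chain whose nodes are abstract states and whose edges carry empirical transition probabilities estimated from observed transitions. The sketch below shows that construction step only; the `abstract` grouping function is a hypothetical stand-in for the paper's feature-importance-based abstraction:

```python
from collections import defaultdict

def abstracted_policy_graph(transitions, abstract):
    """Estimate a Markov chain over abstract states.

    transitions: iterable of (s, s_next) pairs observed under the policy
                 (possibly off-policy data collected during training).
    abstract:    maps a concrete state to an abstract-state label
                 (hypothetical stand-in for the paper's grouping).
    Returns {abstract_state: {successor: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for s, s_next in transitions:
        counts[abstract(s)][abstract(s_next)] += 1  # tally abstract edges
    graph = {}
    for a, succ in counts.items():
        total = sum(succ.values())
        graph[a] = {b: c / total for b, c in succ.items()}  # normalize rows
    return graph
```

Each node can then be annotated with the policy's action in that abstract state, so an individual decision is explained by the expected future transitions leading out of its node.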
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
With advances in reinforcement learning (RL), agents are now being developed
in high-stakes application domains such as healthcare and transportation.
Explaining the behavior of these agents is challenging, as the environments in
which they act have large state spaces, and their decision-making can be
affected by delayed rewards, making it difficult to analyze their behavior. To
address this problem, several approaches have been developed. Some approaches
attempt to convey the global behavior of the agent, describing the actions it
takes in different states. Other approaches provide local explanations of the
agent's decision-making in a particular state. In this paper, we combine global and local explanation
methods, and evaluate their joint and separate contributions, providing (to the
best of our knowledge) the first user study of combined local and global
explanations for RL agents. Specifically, we augment strategy summaries that
extract important trajectories of states from simulations of the agent with
saliency maps which show what information the agent attends to. Our results
show that the choice of what states to include in the summary (global
information) strongly affects people's understanding of agents: participants
shown summaries that included important states significantly outperformed
participants who were presented with agent behavior in a randomly chosen set of
world-states.
world-states. We find mixed results with respect to augmenting demonstrations
with saliency maps (local information), as the addition of saliency maps did
not significantly improve performance in most cases. However, we do find some
evidence that saliency maps can help users better understand what information
the agent relies on in its decision making, suggesting avenues for future work
that can further improve explanations of RL agents.
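Strategy summaries of this kind typically rank states by how much the choice of action matters in them. A common criterion (used in HIGHLIGHTS-style summaries) is the spread of a state's Q-values; the sketch below assumes Q-values are already available as an array and is illustrative rather than the paper's exact selection procedure:

```python
import numpy as np

def summary_states(q_values, k):
    """Select the k most 'important' states for a strategy summary.

    q_values: (num_states, num_actions) array of state-action values.
    A state is important when its best and worst actions differ sharply,
    i.e. when the action choice actually matters there.
    """
    importance = q_values.max(axis=1) - q_values.min(axis=1)
    return np.argsort(-importance)[:k]  # indices of top-k states
```

Pairing each selected state with a saliency map then gives the combined global/local explanation the study evaluates: the summary says *which* situations matter, the saliency map says *what* the agent attends to within them.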