Learning Causal State Representations of Partially Observable Environments
Intelligent agents can cope with sensory-rich environments by learning task-agnostic state abstractions. In this paper, we propose mechanisms to approximate causal states, which optimally compress the joint history of actions and observations in partially observable Markov decision processes. Our proposed algorithm extracts causal state representations from RNNs that are trained to predict subsequent observations given the history. We demonstrate that these learned task-agnostic state abstractions can be used to efficiently learn policies for reinforcement learning problems with rich observation spaces. We evaluate agents on multiple partially observable navigation tasks with both discrete (GridWorld) and continuous (VizDoom, ALE) observation processes that cannot be solved by traditional memory-limited methods. Our experiments demonstrate systematic improvements in DQN and tabular agents that use approximate causal state representations, relative to recurrent-DQN baselines trained on raw inputs.
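A minimal sketch of the core idea, assuming a PyTorch GRU: a recurrent model is trained to predict the next observation from the joint action-observation history, and its hidden state then serves as an approximate causal state for a downstream agent. All class and variable names here are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class PredictiveRNN(nn.Module):
    """RNN trained to predict the next observation from the
    (action, observation) history; its hidden state acts as an
    approximate causal state representation."""

    def __init__(self, obs_dim, act_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, obs_dim)  # next-observation predictor

    def forward(self, obs_seq, act_seq, h0=None):
        x = torch.cat([obs_seq, act_seq], dim=-1)   # joint history input
        states, _ = self.rnn(x, h0)                 # states: approximate causal states
        return self.head(states), states

# Training objective: predict o_{t+1} from the history (o_{<=t}, a_{<=t}).
model = PredictiveRNN(obs_dim=16, act_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(8, 50, 16)   # toy batch: 8 trajectories, 50 steps
acts = torch.randn(8, 50, 4)
pred, states = model(obs[:, :-1], acts[:, :-1])
loss = nn.functional.mse_loss(pred, obs[:, 1:])
opt.zero_grad(); loss.backward(); opt.step()
# `states.detach()` can then replace raw inputs for a downstream DQN,
# or be discretized (e.g., clustered) for a tabular learner.
```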
Grounding Aleatoric Uncertainty for Unsupervised Environment Design
Adaptive curricula in reinforcement learning (RL) have proven effective for producing policies robust to discrepancies between the training and test environments. Recently, the Unsupervised Environment Design (UED) framework generalized RL curricula to generating sequences of entire environments, leading to new methods with robust minimax regret properties. Problematically, in partially observable or stochastic settings, optimal policies may depend on the ground-truth distribution over aleatoric parameters of the environment in the intended deployment setting, while curriculum learning necessarily shifts the training distribution. We formalize this phenomenon as curriculum-induced covariate shift (CICS) and describe how its occurrence in aleatoric parameters can lead to suboptimal policies. Directly sampling these parameters from the ground-truth distribution avoids the issue, but thwarts curriculum learning. We propose SAMPLR, a minimax regret UED method that optimizes the ground-truth utility function even when the underlying training data is biased due to CICS. We prove, and validate on challenging domains, that our approach preserves optimality under the ground-truth distribution, while promoting robustness across the full range of environment settings.
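The covariate-shift issue can be illustrated with a toy calculation (this is not SAMPLR itself, whose mechanism is more involved): if a curriculum oversamples one value of an aleatoric parameter, the naive training estimate of utility is biased, while reweighting against the ground-truth distribution recovers the deployment-time objective. The distributions and returns below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Aleatoric parameter theta (e.g., a coin-flip bias in the environment).
# Ground-truth deployment distribution vs. the curriculum's shifted one.
p_true = lambda theta: 0.5                          # uniform over {0, 1}
p_curr = lambda theta: 0.9 if theta == 1 else 0.1   # curriculum oversamples theta=1

def rollout_return(policy, theta):
    """Toy stand-in for an episode return under aleatoric parameter theta."""
    return policy[theta]

policy = {0: 1.0, 1: 0.2}

# Naive estimate over curriculum samples is biased toward theta=1 ...
thetas = rng.choice([0, 1], size=10_000, p=[0.1, 0.9])
returns = np.array([rollout_return(policy, t) for t in thetas])
biased = returns.mean()

# ... while importance weighting recovers the ground-truth utility.
weights = np.array([p_true(t) / p_curr(t) for t in thetas])
corrected = (weights * returns).mean()

print(biased)     # ~0.28: reflects the shifted training distribution
print(corrected)  # ~0.60: matches E_{theta ~ p_true}[return]
```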
Learning by Doing: An Online Causal Reinforcement Learning Framework with Causal-Aware Policy
As a key component of intuitive cognition and reasoning in human intelligence, causal knowledge offers great potential for making reinforcement learning (RL) agents' decision-making interpretable and for reducing their search space. However, there remains a considerable gap in discovering and incorporating causality into RL, which hinders the rapid development of causal RL. In this paper, we explicitly model the generation process of states with a causal graphical model, based on which we augment the policy. We formulate causal structure updating within the RL interaction process with active intervention learning of the environment. To optimize the derived objective, we propose a framework with theoretical performance guarantees that alternates between two steps: using interventions for causal structure learning during exploration, and using the learned causal structure for policy guidance during exploitation. Due to the lack of public benchmarks that allow direct intervention in the state space, we design a root cause localization task in a simulated fault alarm environment and empirically show the effectiveness and robustness of the proposed method against state-of-the-art baselines. Theoretical analysis shows that our performance improvement is attributable to the virtuous cycle of causal-guided policy learning and causal structure learning, which aligns with our experimental results.
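A toy sketch of the explore-exploit alternation, using a hand-built linear structural equation model in place of the paper's fault alarm environment; the intervention scoring and root-cause heuristic are illustrative stand-ins, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars = 4
true_adj = np.array([[0, 1, 0, 0],   # ground-truth chain: x0 -> x1 -> x2 -> x3
                     [0, 0, 1, 0],
                     [0, 0, 0, 1],
                     [0, 0, 0, 0]], dtype=float)

def sample_state(do_var=None, do_val=0.0):
    """Toy linear SEM; an intervention clamps one variable (a do-operation),
    standing in for direct state-space interventions in the environment."""
    x = np.zeros(n_vars)
    for i in range(n_vars):
        x[i] = true_adj[:, i] @ x + 0.01 * rng.standard_normal()
        if do_var == i:
            x[i] = do_val
    return x

# Exploration: estimate causal influence from paired interventions.
scores = np.zeros((n_vars, n_vars))
for var in range(n_vars):
    base, pushed = sample_state(var, 0.0), sample_state(var, 1.0)
    scores[var] = np.abs(pushed - base) > 0.5  # downstream variables respond
scores[np.diag_indices(n_vars)] = 0

# Exploitation: use the learned structure to guide the policy, e.g. pick
# the variable influencing the most others as the likely root cause.
root_cause = int(np.argmax(scores.sum(axis=1)))
print(scores)      # recovers the ancestor relations of the chain
print(root_cause)  # 0: intervening on x0 affects the most variables
```

In the actual framework the two phases alternate: intervention data refines the graph estimate during exploration, and the refined graph narrows the policy's action search during exploitation, which is the virtuous cycle the abstract describes.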