Pre-trained Word Embeddings for Goal-conditional Transfer Learning in Reinforcement Learning
Reinforcement learning (RL) algorithms typically start tabula rasa, without
any prior knowledge of the environment and without any prior skills. This,
however, often leads to low sample efficiency, requiring a large amount of
interaction with the environment. This is especially true in a lifelong
learning setting, in which the agent needs to continually extend its
capabilities. In this paper, we examine how a pre-trained task-independent
language model can make a goal-conditional RL agent more sample efficient. We
do this by facilitating transfer learning between different related tasks. We
experimentally demonstrate our approach on a set of object navigation tasks.
Comment: Paper accepted to the ICML 2020 Language in Reinforcement Learning
(LaReL) Workshop
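As a rough illustration of the idea described in this abstract (a minimal sketch, not the authors' implementation), a goal-conditional policy can consume a frozen, pre-trained embedding of the language goal alongside the state. The class name GoalConditionedPolicy, the mean-pooling of word vectors, and the layer sizes below are assumptions made for this example.

import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, pretrained_vectors, state_dim, n_actions, hidden=128):
        super().__init__()
        # Frozen, task-independent word embeddings (e.g. GloVe-style vectors).
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        emb_dim = pretrained_vectors.shape[1]
        self.net = nn.Sequential(
            nn.Linear(state_dim + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, goal_token_ids):
        # Mean-pool the goal instruction's word embeddings into one goal vector,
        # so semantically related goals produce nearby conditioning vectors.
        goal_vec = self.embed(goal_token_ids).mean(dim=1)
        logits = self.net(torch.cat([state, goal_vec], dim=-1))
        return torch.distributions.Categorical(logits=logits)

Because the goal encoder is frozen and shared across tasks, experience gathered for one goal can transfer to semantically related goals, which is the source of the sample-efficiency gain the abstract claims.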
Deep Sets for Generalization in RL
This paper investigates the idea of encoding object-centered representations
in the design of the reward function and policy architectures of a
language-guided reinforcement learning agent. This is done using a combination
of object-wise permutation-invariant networks inspired by Deep Sets and
gated-attention mechanisms. In a 2D procedurally generated world where agents
pursuing goals expressed in natural language navigate and interact with objects, we show
that these architectures demonstrate strong generalization capacities to
out-of-distribution goals. We study the generalization to varying numbers of
objects at test time and further extend the object-centered architectures to
goals involving relational reasoning.
Comment: 15 pages, 10 figures, published as a workshop paper at ICLR: Beyond
tabula rasa in RL (BeTR-RL). arXiv admin note: substantial text overlap with
arXiv:2002.0925
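A minimal sketch of the kind of architecture this abstract describes, written in PyTorch for illustration (the class name, layer sizes, and pooling choice are assumptions, not the authors' code): each object is encoded independently, modulated by the goal embedding via gated attention, and summed, so the result is invariant to object ordering and to the number of objects.

import torch
import torch.nn as nn

class DeepSetGoalEncoder(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, obj_dim, goal_dim, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obj_dim, hidden), nn.ReLU())
        self.gate = nn.Sequential(nn.Linear(goal_dim, hidden), nn.Sigmoid())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, objects, goal_embedding):
        # objects: (batch, n_objects, obj_dim); goal_embedding: (batch, goal_dim)
        per_object = self.phi(objects)                               # encode each object
        gated = per_object * self.gate(goal_embedding).unsqueeze(1)  # gated attention on the goal
        pooled = gated.sum(dim=1)                                    # order-invariant sum over objects
        return self.rho(pooled)

The sum pooling is what lets the same encoder handle a varying number of objects at test time, which is the generalization axis the abstract studies.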
RRHF: Rank Responses to Align Language Models with Human Feedback without tears
Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment
of large language models with human preferences, significantly enhancing the
quality of interactions between humans and these models. InstructGPT implements
RLHF through several stages, including Supervised Fine-Tuning (SFT), reward
model training, and Proximal Policy Optimization (PPO). PPO, however, is
sensitive to hyperparameters and requires a minimum of four models in its
standard implementation, which makes it hard to train. In contrast, we propose
a novel learning paradigm called RRHF, which scores responses generated by
different sampling policies and learns to align them with human preferences
through a ranking loss. RRHF can efficiently align language model output
probabilities with human preferences as robustly as fine-tuning, and it needs
only one or two models during tuning. In addition, RRHF can be considered an extension
of SFT and reward models while being simpler than PPO in terms of coding, model
counts, and hyperparameters. The entire alignment process can be accomplished
within a single RRHF training session. We evaluate RRHF using LLaMA and Alpaca
on Helpful and Harmless data, demonstrating performance comparable to PPO.
Comment: Code available at https://github.com/GanjinZero/RRH
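A simplified sketch of the ranking objective the abstract describes, written in PyTorch for illustration (the exact formulation, length normalization, and weighting are in the paper and released code; the function name and the simple SFT term here are assumptions):

import torch

def rrhf_loss(logprobs, rewards):
    # logprobs: (k,) length-normalized log-probabilities the policy assigns to k
    # candidate responses for one prompt; rewards: (k,) preference scores for them.
    diff_p = logprobs.unsqueeze(1) - logprobs.unsqueeze(0)  # diff_p[i, j] = p_i - p_j
    diff_r = rewards.unsqueeze(1) - rewards.unsqueeze(0)    # diff_r[i, j] = r_i - r_j
    # Ranking term: penalize pairs where a lower-scored response is more probable.
    rank_loss = torch.clamp(diff_p[diff_r < 0], min=0).sum()
    # SFT-style term: maximize the likelihood of the best-scored response.
    sft_loss = -logprobs[rewards.argmax()]
    return rank_loss + sft_loss

The ranking term pushes the policy to assign higher probability to better-scored responses, while the SFT-style term anchors it on the best one; only the policy (and optionally a reward model) participates in training, consistent with the "one or two models" claim above.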
Specifying and Interpreting Reinforcement Learning Policies through Simulatable Machine Learning
Human-AI collaborative policy synthesis is a procedure in which (1) a human
initializes an autonomous agent's behavior, (2) reinforcement learning improves
the human-specified behavior, and (3) the agent can explain the final optimized
policy to the user. This paradigm leverages human expertise and facilitates a
greater insight into the learned behaviors of an agent. Existing approaches to
enabling collaborative policy specification rely on black-box methods that are
unintelligible and not tailored to non-expert end-users. In this paper,
we develop a novel collaborative framework to enable humans to initialize and
interpret an autonomous agent's behavior, rooted in principles of
human-centered design. Through our framework, we enable humans to specify an
initial behavior model in the form of unstructured, natural language, which we
then convert to lexical decision trees. Next, we leverage these human-specified
policies to warm-start reinforcement learning, allowing the agent to further
optimize them. Finally, to close the loop on human specification, we produce
explanations of the final learned policy in multiple modalities, giving the
user a clear depiction of the agent's learned behavior. We validate our approach by
showing that our model achieves >80% accuracy and that human-initialized
policies can successfully warm-start RL. We then conduct a novel
human-subjects study quantifying the relative subjective and objective benefits
of varying XAI modalities (e.g., Tree, Language, and Program) for explaining
learned policies to end-users in terms of usability and interpretability, and
we identify the circumstances that influence these measures. Our findings
emphasize the need for personalized explainable systems that can facilitate
user-centric policy explanations for a variety of end-users.
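As an illustration of the warm-start step described above (a sketch under assumptions, not the paper's implementation), one simple way to seed RL from a human-specified lexical decision tree is to behavior-clone the tree's actions into the neural policy before RL fine-tuning begins; the function name and hyperparameters below are hypothetical.

import torch
import torch.nn as nn

def warm_start_from_tree(policy, states, tree_actions, epochs=20, lr=1e-3):
    # policy: nn.Module mapping states to action logits.
    # states: (N, state_dim) float tensor of sampled states.
    # tree_actions: (N,) long tensor of the actions the lexical decision tree
    # selects in those states (the human-specified behavior).
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        loss = loss_fn(policy(states), tree_actions)  # match the tree's choices
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return policy  # hand off to a standard RL algorithm for further optimization

After this supervised warm start, the policy already reproduces the human-initialized behavior, so subsequent RL refines rather than rediscovers it.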