Hindsight-DICE: Stable Credit Assignment for Deep Reinforcement Learning
Oftentimes, environments for sequential decision-making problems can be quite
sparse in the provision of evaluative feedback to guide reinforcement-learning
agents. In the extreme case, long trajectories of behavior are merely
punctuated with a single terminal feedback signal, engendering a significant
temporal delay between the observation of non-trivial reward and the individual
steps of behavior culpable for eliciting such feedback. Coping with such a
credit assignment challenge is one of the hallmark characteristics of
reinforcement learning and, in this work, we capitalize on existing
importance-sampling ratio estimation techniques for off-policy evaluation to
drastically improve the handling of credit assignment with policy-gradient
methods. While so-called hindsight policies offer a principled mechanism for
reweighting on-policy data according to its salience to the observed trajectory
return, naively applying importance sampling results in unstable or excessively
lagged learning. In contrast, our hindsight distribution correction facilitates
stable, efficient learning across a broad range of environments where credit
assignment plagues baseline methods.
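The reweighting idea described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, the array shapes, and the self-normalization step (one common, simple way to damp the variance that makes naive importance sampling unstable) are all assumptions for the sake of the example.

```python
import numpy as np

def hindsight_weighted_gradient(step_grads, hindsight_probs, behavior_probs):
    """Combine per-step policy-gradient terms using hindsight importance ratios.

    step_grads:      (T, D) array of per-step gradient contributions
    hindsight_probs: (T,) action probabilities under a return-conditioned
                     hindsight policy pi_h(a_t | s_t, Z)
    behavior_probs:  (T,) action probabilities under the behavior policy
                     pi(a_t | s_t)
    """
    # Steps that the hindsight policy deems salient to the observed return Z
    # receive ratios > 1; irrelevant steps are downweighted.
    ratios = hindsight_probs / behavior_probs
    # Self-normalize so the weights sum to one; unnormalized ratios can blow
    # up and destabilize learning, as the abstract notes for naive reweighting.
    ratios = ratios / ratios.sum()
    return (ratios[:, None] * step_grads).sum(axis=0)
```

For instance, with uniform behavior probabilities, steps assigned higher hindsight probability dominate the combined gradient, while the normalization keeps its overall scale fixed.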
Differentiable Weight Masks for Domain Transfer
One of the major drawbacks of deep learning models for computer vision has
been their inability to retain multiple sources of information in a modular
fashion. For instance, given a network that has been trained on a source task,
we would like to re-train this network on a similar, yet different, target task
while maintaining its performance on the source task. In parallel,
researchers have extensively studied modularization of network weights to
localize and identify the set of weights culpable for eliciting the observed
performance on a given task. One set of works studies the modularization
induced in the weights of a neural network by learning and analysing weight
masks. In this work, we combine these fields to study three such weight masking
methods and analyse their ability to mitigate "forgetting'' on the source task
while also allowing for efficient finetuning on the target task. We find that
different masking techniques have trade-offs in retaining knowledge in the
source task without adversely affecting target task performance.
Comment: Published in the Out of Distribution Generalization in Computer Vision
(OOD-CV) workshop at ICCV 202
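The weight-masking setup studied above can be sketched in a few lines. This is an illustrative assumption, not any of the three methods from the paper: the function name and the sigmoid relaxation (a standard way to make a binary mask differentiable, so mask logits can be trained by gradient descent while the source weights stay frozen) are choices made for the example.

```python
import numpy as np

def masked_forward(x, W, mask_logits, temperature=1.0):
    """Forward pass through frozen source weights W under a learned mask.

    mask_logits are the trainable parameters; the sigmoid relaxes a hard
    binary mask into (0, 1) so it remains differentiable. Lower temperatures
    push the relaxed mask toward a hard 0/1 selection of weights.
    """
    mask = 1.0 / (1.0 + np.exp(-mask_logits / temperature))
    # Only the masked combination of the frozen weights is used, localizing
    # which weights are responsible for performance on the target task.
    return x @ (W * mask)
```

Because only the mask logits are updated during finetuning, the original source weights are untouched, which is what makes this family of methods attractive for mitigating forgetting.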