Causal Confusion in Imitation Learning
Behavioral cloning reduces policy learning to supervised learning by training
a discriminative model to predict expert actions given observations. Such
discriminative models are non-causal: the training procedure is unaware of the
causal structure of the interaction between the expert and the environment. We
point out that ignoring causality is particularly damaging because of the
distributional shift in imitation learning. In particular, it leads to a
counter-intuitive "causal misidentification" phenomenon: access to more
information can yield worse performance. We investigate how this problem
arises, and propose a solution to combat it through targeted
interventions---either environment interaction or expert queries---to determine
the correct causal model. We show that causal misidentification occurs in
several benchmark control domains as well as realistic driving settings, and
validate our solution against DAgger and other baselines and ablations.
Comment: Published at NeurIPS 2019. 9 pages, plus references and appendices.
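The reduction the abstract describes can be made concrete with a minimal sketch (not from the paper; a toy linear setup with hypothetical data): behavioral cloning fits a discriminative regressor from expert observations to expert actions, and nothing in that fit encodes which observation dimensions causally drive the expert.

```python
import numpy as np

# Hypothetical toy dataset: expert observations and actions.
rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 3))          # 3-dim observations
true_w = np.array([1.0, -2.0, 0.5])      # expert acts linearly on obs
actions = obs @ true_w

# Behavioral cloning: plain supervised regression onto expert actions.
w_hat, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# The cloned policy matches the expert on fresh observations, but the
# fit is purely correlational: if a nuisance feature happened to
# correlate with actions in training, it would be picked up just as
# readily, which is the opening for causal misidentification.
test_obs = rng.normal(size=(10, 3))
pred = test_obs @ w_hat
```

Under distributional shift those spurious correlations break, which is why more input features can make the cloned policy worse.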
Deep reinforcement learning from human preferences
For sophisticated reinforcement learning (RL) systems to interact usefully
with real-world environments, we need to communicate complex goals to these
systems. In this work, we explore goals defined in terms of (non-expert) human
preferences between pairs of trajectory segments. We show that this approach
can effectively solve complex RL tasks without access to the reward function,
including Atari games and simulated robot locomotion, while providing feedback
on less than one percent of our agent's interactions with the environment. This
reduces the cost of human oversight far enough that it can be practically
applied to state-of-the-art RL systems. To demonstrate the flexibility of our
approach, we show that we can successfully train complex novel behaviors with
about an hour of human time. These behaviors and environments are considerably
more complex than any that have been previously learned from human feedback.
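Learning a reward from pairwise preferences over trajectory segments is commonly modeled with a Bradley-Terry / logistic likelihood. The sketch below is an illustrative toy version (hypothetical linear reward and synthetic, noiseless preferences, not the paper's deep-RL setup): fit reward weights so that the preferred segment in each pair scores higher.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: each trajectory segment is summarized by a feature
# vector, and the (unknown) true reward is linear in those features.
true_r = np.array([2.0, -1.0])
seg_a = rng.normal(size=(500, 2))
seg_b = rng.normal(size=(500, 2))
# Simulated human prefers the segment with higher true reward.
prefs = (seg_a @ true_r > seg_b @ true_r).astype(float)

# Bradley-Terry preference model: P(a preferred) = sigmoid(r(a) - r(b)).
# Fit reward weights by gradient ascent on the log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    diff = (seg_a - seg_b) @ w
    p = 1.0 / (1.0 + np.exp(-diff))
    grad = (seg_a - seg_b).T @ (prefs - p) / len(prefs)
    w += 0.5 * grad

# The recovered reward should rank segment pairs like the true reward.
acc = np.mean(((seg_a - seg_b) @ w > 0) == prefs.astype(bool))
```

The learned reward can then stand in for the unknown true reward when training an RL agent, which is how feedback on under one percent of interactions suffices.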
Large-Margin Determinantal Point Processes
Determinantal point processes (DPPs) offer a powerful approach to modeling
diversity in many applications where the goal is to select a diverse subset. We
study the problem of learning the parameters (the kernel matrix) of a DPP from
labeled training data. We make two contributions. First, we show how to
reparameterize a DPP's kernel matrix with multiple kernel functions, thus
enhancing modeling flexibility. Second, we propose a novel parameter estimation
technique based on the principle of large margin separation. In contrast to the
state-of-the-art method of maximum likelihood estimation, our large-margin loss
function explicitly models errors in selecting the target subsets, and it can
be customized to trade off different types of errors (precision vs. recall).
Extensive empirical studies validate our contributions, including applications
on challenging document and video summarization, where flexibility in modeling
the kernel matrix and balancing different errors is indispensable.
Comment: 15 pages.
Modeling Interdependent and Periodic Real-World Action Sequences
Mobile health applications, including those that track activities such as
exercise, sleep, and diet, are becoming widely used. Accurately predicting
human actions is essential for targeted recommendations that could improve our
health and for personalization of these applications. However, making such
predictions is extremely difficult due to the complexities of human behavior,
which consists of a large number of potential actions that vary over time,
depend on each other, and are periodic. Previous work has not jointly modeled
these dynamics and has largely focused on item consumption patterns instead of
broader types of behaviors such as eating, commuting or exercising. In this
work, we develop a novel statistical model for Time-varying, Interdependent,
and Periodic Action Sequences. Our approach is based on personalized,
multivariate temporal point processes that model time-varying action
propensities through a mixture of Gaussian intensities. Our model captures
short-term and long-term periodic interdependencies between actions through
Hawkes process-based self-excitations. We evaluate our approach on two activity
logging datasets comprising 12 million actions taken by 20 thousand users over
17 months. We demonstrate that our approach allows us to make successful
predictions of future user actions and their timing. Specifically, our model
improves predictions of actions, and their timing, over existing methods across
multiple datasets by up to 156%, and up to 37%, respectively. Performance
improvements are particularly large for relatively rare and periodic actions
such as walking and biking, improving over baselines by up to 256%. This
demonstrates that explicit modeling of dependencies and periodicities in
real-world behavior enables successful predictions of future actions, with
implications for modeling human behavior, app personalization, and targeting of
health interventions.
Comment: Accepted at WWW 201
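The self-excitation mechanism the abstract mentions can be sketched with a textbook Hawkes conditional intensity (an illustrative exponential-kernel version with made-up parameters, not the paper's full mixture-of-Gaussians model): each past action temporarily raises the propensity of future actions, then the boost decays.

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Hawkes conditional intensity with an exponential kernel:
    lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)) over past
    events t_i < t. Past actions excite future ones; the effect decays.
    """
    past = np.asarray([e for e in events if e < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Hypothetical logged action timestamps (e.g. walks, in days).
events = [1.0, 1.5, 4.0]

# The intensity spikes right after a burst of actions and decays back
# toward the baseline rate mu when the user has been inactive.
just_after_burst = hawkes_intensity(1.6, events)
long_after_burst = hawkes_intensity(3.9, events)
```

In the paper's setting, intensities like this are made multivariate (one process per action type, with cross-excitation) and personalized, so that interdependencies such as "biking raises the near-term rate of stretching" can be learned per user.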
Proposal-free Temporal Moment Localization of a Natural-Language Query in Video using Guided Attention
This paper studies the problem of temporal moment localization in a long
untrimmed video using natural language as the query. Given an untrimmed video
and a sentence as the query, the goal is to determine the starting, and the
ending, of the relevant visual moment in the video, that corresponds to the
query sentence. While previous works have tackled this task by a
propose-and-rank approach, we introduce a more efficient, end-to-end trainable,
and {\em proposal-free approach} that relies on three key components: a dynamic
filter to transfer language information to the visual domain, a new loss
function to guide our model to attend the most relevant parts of the video, and
soft labels to model annotation uncertainty. We evaluate our method on two
benchmark datasets, Charades-STA and ActivityNet-Captions. Experimental results
show that our approach outperforms state-of-the-art methods on both datasets.
Comment: Winter Conference on Applications of Computer Vision 202
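The dynamic-filter idea (transferring language information to the visual domain) can be illustrated with a tiny sketch using handcrafted, hypothetical features rather than the paper's learned ones: derive a filter from the query embedding, correlate it with per-frame video features, and read the relevant moment off the resulting attention.

```python
import numpy as np

# Hypothetical toy features: 6 video frames (rows) and a query embedding.
video = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],   # frame 2: the moment the sentence describes
    [0.1, 0.9, 0.1],
    [0.0, 0.0, 1.0],
    [0.0, 0.1, 1.0],
])
query = np.array([0.0, 1.0, 0.0])  # language embedding matching frame 2

# Dynamic-filter idea: turn the language embedding into a filter and
# correlate it with every frame; a high response marks relevant frames.
dyn_filter = query / np.linalg.norm(query)
response = video @ dyn_filter

# Soft attention over time; the peak locates the relevant moment, and
# start/end boundaries can be regressed from such attended features.
attn = np.exp(response) / np.exp(response).sum()
peak = int(np.argmax(attn))  # → 2
```

Because the moment is read out directly from the attention rather than from scored candidate windows, no proposal generation or ranking stage is needed, which is the efficiency argument in the abstract.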