It Ain't That Bad: Understanding the Mysterious Performance Drop in OOD Generalization for Generative Transformer Models
Generative Transformer-based models have achieved remarkable proficiency on
solving diverse problems. However, their generalization ability is not fully
understood and not always satisfactory. Researchers often use basic mathematical
tasks such as n-digit addition or multiplication as testbeds for
investigating their generalization behavior. Curiously, it is observed that
when training on n-digit operations (e.g., additions) in which both input
operands are n-digit in length, models generalize successfully on unseen
n-digit inputs (in-distribution (ID) generalization), but fail miserably and
mysteriously on longer, unseen cases (out-of-distribution (OOD)
generalization). Studies try to bridge this gap with workarounds such as
modifying position embedding, fine-tuning, and priming with more extensive or
instructive data. However, without addressing the essential mechanism, there is
hardly any guarantee regarding the robustness of these solutions. We bring this
unexplained performance drop to attention and ask whether it stems purely from
random errors. Here we turn to the mechanistic line of research, which has had
notable successes in model interpretability. We discover that the strong ID
generalization stems from structured representations, while behind the
unsatisfying OOD performance, the models still exhibit clear learned algebraic
structures. Specifically, these models map unseen OOD inputs to outputs with
equivalence relations in the ID domain. These findings highlight the potential
of the models to carry useful information for improved generalization.
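The ID/OOD split the abstract describes can be sketched with a minimal data generator; this is an illustrative setup, not code from the paper (function and variable names are assumptions):

```python
import random

def sample_addition_example(n_digits, rng):
    # Sample two operands that are each exactly n_digits long.
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    return f"{a}+{b}=", str(a + b)

rng = random.Random(0)
id_prompt, id_answer = sample_addition_example(3, rng)    # in-distribution: operand length seen in training
ood_prompt, ood_answer = sample_addition_example(4, rng)  # out-of-distribution: longer, unseen length
```

A model trained only on the 3-digit distribution would be evaluated on held-out 3-digit prompts (ID) and on 4-digit prompts (OOD).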
Self-Play and Self-Describe: Policy Adaptation with Vision-Language Foundation Models
Recent progress on vision-language foundation models has brought significant
advancement to building general-purpose robots. By using the pre-trained models
to encode the scene and instructions as inputs for decision making, the
instruction-conditioned policy can generalize across different objects and
tasks. While this is encouraging, the policy still fails in most cases given an
unseen task or environment. To adapt the policy to unseen tasks and
environments, we explore a new paradigm of leveraging the pre-trained
foundation models with Self-PLAY and Self-Describe (SPLAYD). When deploying the
trained policy to a new task or a new environment, we first let the policy
self-play with randomly generated instructions to record the demonstrations.
While the execution could be wrong, we can use the pre-trained foundation
models to accurately self-describe (i.e., re-label or classify) the
demonstrations. This automatically provides new pairs of
demonstration-instruction data for policy fine-tuning. We evaluate our method
on a broad range of experiments with the focus on generalization on unseen
objects, unseen tasks, unseen environments, and sim-to-real transfer. We show
SPLAYD improves baselines by a large margin in all cases. Our project page is
available at https://geyuying.github.io/SPLAYD/
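A hindsight-relabeling loop of this kind can be sketched as follows; every callable here is a hypothetical stand-in (the paper's policy and vision-language describer are learned components, not shown):

```python
def self_play_and_describe(policy, describe, sample_instruction, num_episodes):
    """Collect self-play rollouts, then relabel each with the (hypothetical)
    foundation-model describer to form new demonstration-instruction pairs."""
    dataset = []
    for _ in range(num_episodes):
        instruction = sample_instruction()       # randomly generated instruction
        trajectory = policy(instruction)         # execution may not match the instruction
        achieved = describe(trajectory)          # re-label with what actually happened
        dataset.append((trajectory, achieved))   # new pair for policy fine-tuning
    return dataset

# Toy stubs, purely to illustrate the data flow.
demos = self_play_and_describe(
    policy=lambda instr: f"trajectory-for-{instr}",
    describe=lambda traj: traj.replace("trajectory-for-", "achieved-"),
    sample_instruction=lambda: "pick red block",
    num_episodes=2,
)
```

The key design choice is that the describer labels what the policy actually did, so even failed executions yield correct training pairs.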
A Meta-Learning Approach for Custom Model Training
Transfer-learning and meta-learning are two effective methods to apply
knowledge learned from large data sources to new tasks. In few-class, few-shot
target task settings (i.e. when there are only a few classes and training
examples available in the target task), meta-learning approaches that optimize
for future task learning have outperformed the typical transfer approach of
initializing model weights from a pre-trained starting point. But as we
experimentally show, meta-learning algorithms that work well in the few-class
setting do not generalize well in many-shot and many-class cases. In this
paper, we propose a joint training approach that combines both
transfer-learning and meta-learning. Benefiting from the advantages of each,
our method obtains improved generalization performance on unseen target tasks
in both few- and many-class and few- and many-shot scenarios.
Comment: AAAI 201
Modeling Target-Side Inflection in Neural Machine Translation
NMT systems have problems with large vocabulary sizes. Byte-pair encoding
(BPE) is a popular approach to solving this problem, but while BPE allows the
system to generate any target-side word, it does not enable effective
generalization over the rich vocabulary in morphologically rich languages with
strong inflectional phenomena. We introduce a simple approach to overcome this
problem by training a system to produce the lemma of a word and its
morphologically rich POS tag, which is then followed by a deterministic
generation step. We apply this strategy for English-Czech and English-German
translation scenarios, obtaining improvements in both settings. We furthermore
show that the improvement is not due only to adding explicit morphological
information.
Comment: Accepted as a research paper at WMT17. (Updated version with
corrected references.)
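The two-step target side can be illustrated with a toy deterministic generation step; the lookup table and tag names below are invented for illustration (the described approach would use a real morphological generator):

```python
# Toy morphological lexicon: (lemma, tag) -> inflected surface form.
INFLECT = {
    ("Haus", "NN.Neut.Dat.Pl"): "Häusern",
    ("gehen", "V.Fin.3.Sg.Pres"): "geht",
}

def generate_surface(lemma_tag_pairs):
    # Deterministic generation: look up each predicted (lemma, tag) pair,
    # falling back to the bare lemma when no inflected form is known.
    return [INFLECT.get(pair, pair[0]) for pair in lemma_tag_pairs]
```

Under this factorization, the NMT system only predicts lemmas and tags; inflected surface forms never need to appear in its output vocabulary.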