Universal Language Model Fine-tuning for Text Classification
Inductive transfer learning has greatly impacted computer vision, but
existing approaches in NLP still require task-specific modifications and
training from scratch. We propose Universal Language Model Fine-tuning
(ULMFiT), an effective transfer learning method that can be applied to any task
in NLP, and introduce techniques that are key for fine-tuning a language model.
Our method significantly outperforms the state of the art on six text
classification tasks, reducing the error by 18-24% on the majority of datasets.
Furthermore, with only 100 labeled examples, it matches the performance of
training from scratch on 100x more data. We open-source our pretrained models
and code. Comment: ACL 2018, fixed denominator in Equation 3.
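As a rough illustration of what such fine-tuning techniques look like in practice, the sketch below implements two of them, discriminative fine-tuning (a separate learning rate per layer group) and slanted triangular learning rates, in PyTorch. The three-layer LSTM layout and the hyperparameters are illustrative assumptions, not the authors' released (fastai-based) implementation.

```python
# Minimal sketch of two ULMFiT fine-tuning techniques: discriminative
# fine-tuning and slanted triangular learning rates. Layer sizes and
# constants below are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.ModuleList([
    nn.LSTM(400, 1150, batch_first=True),
    nn.LSTM(1150, 1150, batch_first=True),
    nn.LSTM(1150, 400, batch_first=True),
])
classifier = nn.Linear(400, 2)

# Discriminative fine-tuning: layers closer to the input get smaller
# learning rates (the paper divides by 2.6 per layer group).
base_lr = 0.01
param_groups = [{"params": classifier.parameters(), "lr": base_lr}]
for depth, layer in enumerate(reversed(list(encoder)), start=1):
    param_groups.append({"params": layer.parameters(),
                         "lr": base_lr / (2.6 ** depth)})
optimizer = torch.optim.SGD(param_groups, lr=base_lr)

def slanted_triangular(step, total_steps, cut_frac=0.1, ratio=32):
    """Short linear warm-up followed by a long linear decay; `ratio`
    bounds how much smaller the lowest learning rate is than the peak."""
    cut = max(1, int(total_steps * cut_frac))
    p = step / cut if step < cut else 1 - (step - cut) / max(1, total_steps - cut)
    return (1 + p * (ratio - 1)) / ratio

# LambdaLR multiplies each group's base lr by the schedule, so the
# discriminative per-layer rates are preserved.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: slanted_triangular(step, total_steps=1000))
```

Gradual unfreezing, the third technique, would simply set requires_grad=False on all but the top layer group and enable one additional group per epoch.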
Learning to select data for transfer learning with Bayesian Optimization
Domain similarity measures can be used to gauge adaptability and select
suitable data for transfer learning, but existing approaches define ad hoc
measures that are deemed suitable for respective tasks. Inspired by work on
curriculum learning, we propose to learn data selection measures using
Bayesian Optimization and evaluate them across models, domains and tasks. Our
learned measures outperform existing domain similarity measures significantly
on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We
show the importance of complementing similarity with diversity, and that
learned measures are, to some degree, transferable across models, domains,
and even tasks. Comment: EMNLP 2017. Code available at:
https://github.com/sebastianruder/learn-to-select-dat
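A minimal sketch of the recipe under stated assumptions: the search space is the weight vector of a linear data-selection measure over a similarity and a diversity feature, and the objective is a stand-in for dev performance after training on the top-scoring candidates. The toy corpora, the two features, and train_and_eval() are illustrative placeholders, and scikit-optimize's gp_minimize is used here as the Bayesian Optimization backend; this is not the released implementation.

```python
# Learn data-selection weights with Bayesian Optimization (sketch).
import numpy as np
from collections import Counter
from skopt import gp_minimize

target_corpus = "the film was a quiet triumph".split()
candidates = [s.split() for s in [
    "the movie was great", "stock prices fell sharply",
    "an uneven but moving film", "parliament passed the bill"]]
vocab = sorted(set(target_corpus) | {w for c in candidates for w in c})

def term_dist(tokens):
    counts = Counter(tokens)
    p = np.array([counts[w] for w in vocab], dtype=float) + 1e-6
    return p / p.sum()

def features(example):
    # Similarity: negative Jensen-Shannon divergence to the target domain.
    p, q = term_dist(example), term_dist(target_corpus)
    m = 0.5 * (p + q)
    js = 0.5 * (np.sum(p * np.log(p / m)) + np.sum(q * np.log(q / m)))
    # Diversity: type-token ratio of the example.
    return np.array([-js, len(set(example)) / len(example)])

def train_and_eval(selected):
    # Placeholder for "train the task model on the selection, return dev
    # accuracy"; vocabulary overlap with the target domain is used as a proxy.
    covered = {w for ex in selected for w in ex} & set(target_corpus)
    return len(covered) / len(set(target_corpus))

def objective(weights, k=2):
    scores = [float(np.dot(weights, features(ex))) for ex in candidates]
    ranked = sorted(zip(scores, range(len(candidates))), reverse=True)
    top_k = [candidates[i] for _, i in ranked[:k]]
    return -train_and_eval(top_k)  # gp_minimize minimizes

result = gp_minimize(objective, dimensions=[(-1.0, 1.0)] * 2, n_calls=15,
                     random_state=0)
print("learned feature weights:", result.x)
```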
Multi-task Learning of Pairwise Sequence Classification Tasks Over Disparate Label Spaces
We combine multi-task learning and semi-supervised learning by inducing a
joint embedding space between disparate label spaces and learning transfer
functions between label embeddings, enabling us to jointly leverage unlabelled
data and auxiliary, annotated datasets. We evaluate our approach on a variety
of sequence classification tasks with disparate label spaces. We outperform
strong single and multi-task baselines and achieve a new state of the art for
topic-based sentiment analysis. Comment: To appear at NAACL 2018 (long paper).
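A minimal sketch, under assumed tasks and dimensions, of the central idea: every label of every task gets a vector in one shared embedding space, and an example is classified by comparing its encoding to the label vectors of its task. The bag-of-words encoder is a simplification, and the learned transfer functions between label spaces are omitted; this is not the authors' exact architecture.

```python
# Joint label-embedding sketch: labels from disparate tasks share one space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLabelSpaceModel(nn.Module):
    def __init__(self, vocab_size, task_num_labels, dim=128):
        super().__init__()
        self.encoder = nn.EmbeddingBag(vocab_size, dim)  # shared sentence encoder
        # One embedding per label, for every task, in the same space.
        self.label_emb = nn.ParameterDict({
            task: nn.Parameter(0.01 * torch.randn(n, dim))
            for task, n in task_num_labels.items()})

    def forward(self, token_ids, task):
        h = self.encoder(token_ids)            # (batch, dim)
        # Logits = similarity between the example and each of the task's labels.
        return h @ self.label_emb[task].t()    # (batch, num_labels)

model = JointLabelSpaceModel(vocab_size=10_000,
                             task_num_labels={"sentiment": 2, "topic": 5})
tokens = torch.randint(0, 10_000, (4, 12))     # a batch of 4 toy examples
loss = F.cross_entropy(model(tokens, task="sentiment"),
                       torch.randint(0, 2, (4,)))
```

Because all label vectors live in one space, examples and auxiliary datasets with different label sets can, in principle, be scored against any task's labels, which is what the learned transfer functions between label embeddings exploit.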
Towards a continuous modeling of natural language domains
Humans continuously adapt their style and language to a variety of domains.
However, a reliable definition of 'domain' has eluded researchers thus far.
Additionally, the notion of discrete domains stands in contrast to the
multiplicity of heterogeneous domains that humans navigate, many of which
overlap. In order to better understand the change and variation of human
language, we draw on research in domain adaptation and extend the notion of
discrete domains to the continuous spectrum. We propose representation
learning-based models that can adapt to continuous domains and detail how these
can be used to investigate variation in language. To this end, we propose to
use dialogue modeling as a test bed due to its proximity to language modeling
and its social component. Comment: 5 pages, 3 figures, published in the Uphill Battles in Language
Processing workshop, EMNLP 2016.
Latent Multi-task Architecture Learning
Multi-task learning (MTL) allows deep neural networks to learn from related
tasks by sharing parameters with other networks. In practice, however, MTL
involves searching an enormous space of possible parameter sharing
architectures to find (a) the layers or subspaces that benefit from sharing,
(b) the appropriate amount of sharing, and (c) the appropriate relative weights
of the different task losses. Recent work has addressed each of the above
problems in isolation. In this work we present an approach that learns a latent
multi-task architecture that jointly addresses (a)--(c). We present experiments
on synthetic data and data from OntoNotes 5.0, including four different tasks
and seven different domains. Our extension consistently outperforms previous
approaches to learning latent architectures for multi-task problems and
achieves up to 15% average error reductions over common approaches to MTL. Comment: To appear in Proceedings of AAAI 2019.
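A minimal sketch of the latent-sharing idea for two tasks, with dense layers standing in for the paper's sequence models: a learnable 2x2 mixing matrix per layer decides how much each task's next layer reads from its own versus the other task's hidden state, so the sharing pattern (a) and amount (b) are learned jointly with the task parameters. All sizes are assumptions, and the learned layer mixing and loss weighting of the full model are omitted.

```python
# Latent sharing between two task networks via learnable mixing matrices.
import torch
import torch.nn as nn

class LatentSharingNet(nn.Module):
    def __init__(self, in_dim=64, hid=64, n_layers=2, n_classes=(5, 3)):
        super().__init__()
        make = lambda: nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hid, hid) for i in range(n_layers)])
        self.layers_a, self.layers_b = make(), make()
        # One mixing matrix per layer, initialised to favour each task's own state.
        self.alpha = nn.Parameter(torch.eye(2).repeat(n_layers, 1, 1))
        self.head_a = nn.Linear(hid, n_classes[0])
        self.head_b = nn.Linear(hid, n_classes[1])

    def forward(self, x):
        h_a = h_b = x
        for i, (la, lb) in enumerate(zip(self.layers_a, self.layers_b)):
            out_a, out_b = torch.relu(la(h_a)), torch.relu(lb(h_b))
            mix = torch.softmax(self.alpha[i], dim=-1)  # rows sum to 1
            h_a = mix[0, 0] * out_a + mix[0, 1] * out_b
            h_b = mix[1, 0] * out_a + mix[1, 1] * out_b
        return self.head_a(h_a), self.head_b(h_b)

model = LatentSharingNet()
logits_a, logits_b = model(torch.randn(8, 64))
```

The relative weighting of the two task losses (c) can be treated the same way, for example by optimizing learnable log-weights alongside the mixing parameters.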
Strong Baselines for Neural Semi-Supervised Learning under Domain Shift
Novel neural models have been proposed in recent years for learning under
domain shift. Most models, however, only evaluate on a single task, on
proprietary datasets, or compare to weak baselines, which makes comparison of
models difficult. In this paper, we re-evaluate classic general-purpose
bootstrapping approaches for neural networks under domain shift, compare them
to recent neural approaches, and propose a novel multi-task tri-training method
that reduces the time and space complexity of classic tri-training. Extensive
experiments on two benchmarks yield a negative result: while our novel method
establishes a new state of the art for sentiment analysis, it does not perform
consistently best. More importantly, we arrive at the somewhat surprising conclusion
that classic tri-training, with some additions, outperforms the state of the
art. We conclude that classic approaches constitute an important and strong
baseline. Comment: ACL 2018.
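For reference, classic tri-training (the strong baseline the paper re-evaluates) is simple to state: train three classifiers on bootstrap samples of the labeled data, then repeatedly re-train each one on the labeled data plus those unlabeled examples on which the other two agree. The sketch below uses scikit-learn logistic regression and a fixed number of rounds; the original stopping criterion and the paper's multi-task variant (and its reduced complexity) are not reproduced.

```python
# Classic tri-training sketch with scikit-learn classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression

def tri_train(X_lab, y_lab, X_unl, rounds=5, seed=0):
    rng = np.random.RandomState(seed)
    # 1. Train three classifiers on bootstrap samples of the labeled data.
    models = []
    for _ in range(3):
        idx = rng.choice(len(X_lab), size=len(X_lab), replace=True)
        models.append(LogisticRegression(max_iter=1000).fit(X_lab[idx], y_lab[idx]))
    # 2. Re-train each model on the labeled data plus the unlabeled points
    #    on which the other two models agree.
    for _ in range(rounds):
        new_models = []
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            pred_j, pred_k = models[j].predict(X_unl), models[k].predict(X_unl)
            agree = pred_j == pred_k
            X_aug = np.vstack([X_lab, X_unl[agree]])
            y_aug = np.concatenate([y_lab, pred_j[agree]])
            new_models.append(LogisticRegression(max_iter=1000).fit(X_aug, y_aug))
        models = new_models

    def predict(X):
        # Final prediction: majority vote of the three classifiers.
        votes = np.stack([m.predict(X) for m in models])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return predict

# Toy usage with random features; a real run would use task representations.
rng = np.random.RandomState(1)
X_lab, y_lab = rng.randn(40, 5), rng.randint(0, 2, 40)
predict = tri_train(X_lab, y_lab, rng.randn(200, 5))
print(predict(rng.randn(3, 5)))
```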