Understanding and Enhancing the Use of Context for Machine Translation
To understand and infer meaning in language, neural models have to learn
complicated nuances. Discovering distinctive linguistic phenomena from data is
not an easy task. For instance, lexical ambiguity is a fundamental feature of
language which is challenging to learn. Even more prominently, inferring the
meaning of rare and unseen lexical units is difficult with neural networks.
Meaning is often determined from context. With context, languages allow meaning
to be conveyed even when the specific words used are not known by the reader.
To model this learning process, a system has to learn from a few instances in
context and be able to generalize well to unseen cases. The learning process is
hindered when training data is scarce for a task. Even with sufficient data,
learning patterns for the long tail of the lexical distribution is challenging.
In this thesis, we focus on understanding certain potentials of context in
neural models and on designing augmentation models to benefit from them. We focus on
machine translation as an important instance of the more general language
understanding problem. To translate from a source language to a target
language, a neural model has to understand the meaning of constituents in the
provided context and generate constituents with the same meanings in the target
language. This task accentuates the value of capturing nuances of language and
the necessity of generalization from few observations. The main problem we
study in this thesis is what neural machine translation models learn from data
and how we can devise more focused contexts to enhance this learning. Looking
more deeply into the role of context and the impact of data on learning
models is essential to advancing the field of NLP. Moreover, it helps highlight the
vulnerabilities of current neural networks and provides insights into designing
more robust models.
Comment: PhD dissertation defended on November 10th, 202
Examining the Tip of the Iceberg: A Data Set for Idiom Translation
Neural Machine Translation (NMT) has been widely used in recent years with
significant improvements for many language pairs. Although state-of-the-art NMT
systems are generating progressively better translations, idiom translation
remains one of the open challenges in this field. Idioms, a category of
multiword expressions, are an interesting language phenomenon where the overall
meaning of the expression cannot be composed from the meanings of its parts. A
first important challenge is the lack of dedicated data sets for learning and
evaluating idiom translation. In this paper we address this problem by creating
the first large-scale data set for idiom translation. Our data set is
automatically extracted from a widely used German-English translation corpus
and includes, for each language direction, a targeted evaluation set where all
sentences contain idioms and a regular training corpus where sentences
including idioms are marked. We release this data set and use it to perform
preliminary NMT experiments as the first step towards better idiom translation.
Comment: Accepted at LREC 201
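As a rough illustration of how such a data set can be assembled (not the paper's exact pipeline), the sketch below flags sentence pairs whose source side contains an idiom from a given list; the idiom list, file names, and matching rule are placeholder assumptions.

```python
# Minimal sketch of flagging idiom-containing sentence pairs in a parallel
# corpus. The idiom list, file names, and matching rule are illustrative
# assumptions, not the paper's extraction procedure.
import re

idioms = ["spill the beans", "under the weather", "hit the sack"]
patterns = [re.compile(r"\b" + re.escape(p) + r"\b", re.IGNORECASE) for p in idioms]

def mark_idioms(src_path, tgt_path, out_path):
    """Write 'has_idiom <TAB> src <TAB> tgt' lines so idiom sentences can be split off."""
    with open(src_path) as src, open(tgt_path) as tgt, open(out_path, "w") as out:
        for src_line, tgt_line in zip(src, tgt):
            flag = int(any(p.search(src_line) for p in patterns))
            out.write(f"{flag}\t{src_line.strip()}\t{tgt_line.strip()}\n")

mark_idioms("train.en", "train.de", "train.marked.tsv")
```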
Learning Topic-Sensitive Word Representations
Distributed word representations are widely used for modeling words in NLP
tasks. Most of the existing models generate one representation per word and do
not consider different meanings of a word. We present two approaches to learn
multiple topic-sensitive representations per word using the Hierarchical
Dirichlet Process. We observe that by modeling topics and integrating topic
distributions for each document we obtain representations that are able to
distinguish between different meanings of a given word. Our models yield
statistically significant improvements for the lexical substitution task
indicating that commonly used single word representations, even when combined
with contextual information, are insufficient for this task.
Comment: 5 pages, 1 figure, Accepted at ACL 201
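The following sketch illustrates one simplified way to obtain topic-sensitive vectors in this spirit: infer a dominant topic per document with gensim's HDP implementation, tag every token with that topic, and train skip-gram over the tagged tokens. It is an illustrative approximation, not the paper's exact model.

```python
# Simplified sketch of topic-sensitive word vectors: per-document HDP topic,
# topic-tagged tokens, skip-gram over the tagged corpus.
from gensim.corpora import Dictionary
from gensim.models import HdpModel, Word2Vec

docs = [["the", "bank", "approved", "the", "loan"],
        ["we", "sat", "on", "the", "river", "bank"]]

dictionary = Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]
hdp = HdpModel(corpus=bows, id2word=dictionary)

tagged = []
for doc, bow in zip(docs, bows):
    topics = hdp[bow]                                  # [(topic_id, prob), ...]
    top = max(topics, key=lambda t: t[1])[0] if topics else 0
    tagged.append([f"{w}#t{top}" for w in doc])        # e.g. "bank#t3"

w2v = Word2Vec(sentences=tagged, vector_size=100, window=3, min_count=1, sg=1)
# "bank#t0" and "bank#t1" now have separate, topic-sensitive vectors.
```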
Data Augmentation for Low-Resource Neural Machine Translation
The quality of a Neural Machine Translation system depends substantially on
the availability of sizable parallel corpora. For low-resource language pairs
this is not the case, resulting in poor translation quality. Inspired by work
in computer vision, we propose a novel data augmentation approach that targets
low-frequency words by generating new sentence pairs containing rare words in
new, synthetically created contexts. Experimental results on simulated
low-resource settings show that our method improves translation quality by up
to 2.9 BLEU points over the baseline and up to 3.2 BLEU over back-translation.
Comment: 5 pages, 2 figures, Accepted at ACL 201
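A minimal sketch of the general idea follows, assuming a plain frequency threshold and a stubbed-out plausibility check; the paper selects substitution positions with a language model, and the aligned target side would also need to be updated, which is omitted here.

```python
# Sketch of rare-word-targeted augmentation: pick low-frequency words and
# splice them into copies of existing source sentences to create new contexts.
from collections import Counter
import random

def plausible(sent, pos, word):
    return True  # placeholder; the paper scores substitutions with an LM

def augment(sentences, rare_threshold=5, n_new=1000):
    counts = Counter(w for s in sentences for w in s)
    rare = [w for w, c in counts.items() if c <= rare_threshold]
    augmented = []
    for _ in range(n_new):
        sent = list(random.choice(sentences))
        pos = random.randrange(len(sent))
        candidate = random.choice(rare)
        if plausible(sent, pos, candidate):
            sent[pos] = candidate
            augmented.append(sent)          # target side not handled in this sketch
    return augmented
```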
Back-Translation Sampling by Targeting Difficult Words in Neural Machine Translation
Neural Machine Translation has achieved state-of-the-art performance for
several language pairs using a combination of parallel and synthetic data.
Synthetic data is often generated by back-translating sentences randomly
sampled from monolingual data using a reverse translation model. While
back-translation has been shown to be very effective in many cases, it is not
entirely clear why. In this work, we explore different aspects of
back-translation, and show that words with high prediction loss during training
benefit most from the addition of synthetic data. We introduce several
variations of sampling strategies targeting difficult-to-predict words using
prediction losses and frequencies of words. In addition, we target the
contexts of difficult words and sample sentences that are similar in context.
Experimental results for the WMT news translation task show that our method
improves translation quality by up to 1.7 and 1.2 BLEU points over
back-translation using random sampling for German-English and English-German,
respectively.
Comment: 11 pages, 2 figures. Accepted at EMNLP 201
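A hedged sketch of difficulty-based selection, assuming per-word training losses have already been collected from the forward model (they are not computed here):

```python
# Score monolingual sentences by the mean training loss of their words and
# keep the highest-scoring ones for back-translation.
def select_for_backtranslation(monolingual, word_loss, k=100_000):
    def difficulty(sentence):
        tokens = sentence.split()
        losses = [word_loss.get(t, 0.0) for t in tokens]
        return sum(losses) / max(len(losses), 1)
    ranked = sorted(monolingual, key=difficulty, reverse=True)
    return ranked[:k]
```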
Which Prompts Make The Difference? Data Prioritization For Efficient Human LLM Evaluation
Human evaluation is increasingly critical for assessing large language
models, capturing linguistic nuances, and reflecting user preferences more
accurately than traditional automated metrics. However, the resource-intensive
nature of this type of annotation process poses significant challenges. The key
question driving our work is: "Is it feasible to minimize human-in-the-loop
feedback by prioritizing data instances which most effectively distinguish
between models?" We evaluate several metric-based methods and find that these
metrics enhance the efficiency of human evaluations by minimizing the number of
required annotations, thus saving time and cost, while ensuring a robust
performance evaluation. We show that our method is effective across widely used
model families, reducing instances of indecisive (or "tie") outcomes by up to
54% compared to a random sample when focusing on the top-20 percentile of
prioritized instances. This potential reduction in required human effort
positions our approach as a valuable strategy in future large language model
evaluations.
Comment: 37 pages, 8 figures
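As an illustration of the general idea (the metric and cutoff below are assumptions for illustration, not the paper's recipe), one can rank prompts by how strongly an automatic metric separates two models' responses and send only the top slice to human annotators:

```python
# Metric-based prompt prioritization: keep the prompts where two models'
# automatic scores differ most, i.e. the prompts most likely to be decisive.
import numpy as np

def prioritize(prompts, scores_a, scores_b, top_fraction=0.2):
    """scores_a / scores_b: automatic metric scores per prompt for models A and B."""
    gaps = np.abs(np.asarray(scores_a) - np.asarray(scores_b))
    k = max(1, int(len(prompts) * top_fraction))
    top_idx = np.argsort(-gaps)[:k]          # largest score gaps first
    return [prompts[i] for i in top_idx]
```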
Elo Uncovered: Robustness and Best Practices in Language Model Evaluation
In Natural Language Processing (NLP), the Elo rating system, originally
designed for ranking players in dynamic games such as chess, is increasingly
being used to evaluate Large Language Models (LLMs) through "A vs B" paired
comparisons. However, while popular, the system's suitability for assessing
entities with constant skill levels, such as LLMs, remains relatively
unexplored. We study two fundamental axioms that evaluation methods should
adhere to: reliability and transitivity. We conduct an extensive evaluation of Elo
behaviour, illustrating that individual Elo computations exhibit volatility and
delving into the impact of varying the Elo rating system's hyperparameters. We
show that these axioms are not always satisfied, raising questions about the
reliability of current comparative evaluations of LLMs. If the current use of
Elo scores is intended to substitute the costly head-to-head comparison of
LLMs, it is crucial to ensure the ranking is as robust as possible. Guided by
the axioms, our findings offer concrete guidelines for enhancing the
reliability of LLM evaluation methods, suggesting a need for reassessment of
existing comparative approaches.
Comment: 22 pages, 7 figures, 2 tables. Revised version of the paper accepted
at GEM Workshop, EMNLP 202
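For reference, the standard Elo update applied to "A vs B" comparisons is shown below; re-running it on a shuffled copy of the same outcomes illustrates the order sensitivity the paper examines. K and the initial rating are the usual defaults, not values from the paper.

```python
# Standard Elo updates over pairwise comparison outcomes.
import random

def elo_ratings(outcomes, k=32, init=1000.0):
    """outcomes: list of (model_a, model_b, score_a) with score_a in {1, 0.5, 0}."""
    ratings = {}
    for a, b, score_a in outcomes:
        ra, rb = ratings.setdefault(a, init), ratings.setdefault(b, init)
        expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
        ratings[a] = ra + k * (score_a - expected_a)
        ratings[b] = rb + k * ((1.0 - score_a) - (1.0 - expected_a))
    return ratings

games = [("model_a", "model_b", 1), ("model_b", "model_a", 1), ("model_a", "model_b", 0.5)] * 50
print(elo_ratings(games))
random.shuffle(games)
print(elo_ratings(games))   # a different ordering of the same games can yield different ratings
```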
When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale
Large volumes of text data have contributed significantly to the development
of large language models (LLMs) in recent years. This data is typically
acquired by scraping the internet, leading to pretraining datasets comprised of
noisy web text. To date, efforts to prune these datasets down to a higher
quality subset have relied on hand-crafted heuristics encoded as rule-based
filters. In this work, we take a wider view and explore scalable estimates of
data quality that can be used to systematically measure the quality of
pretraining data. We perform a rigorous comparison at scale of the simple data
quality estimator of perplexity, as well as more sophisticated and
computationally intensive estimates of the Error L2-Norm and memorization.
These metrics are used to rank and prune pretraining corpora, and we
subsequently compare LLMs trained on these pruned datasets. Surprisingly, we
find that the simple technique of perplexity outperforms our more
computationally expensive scoring methods. We improve over our no-pruning
baseline while training on as little as 30% of the original training dataset.
Our work sets the foundation for unexplored strategies in automatically
curating high quality corpora and suggests the majority of pretraining data can
be removed while retaining performance.
Comment: 14 pages, 8 figures
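A minimal sketch of perplexity-based pruning follows, assuming GPT-2 as the scoring model and keeping the lowest-perplexity 30% of documents; which perplexity band to retain is a design choice studied in the paper, not fixed by this sketch.

```python
# Score each document with a small reference LM and keep a fraction of the corpus.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text):
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    loss = model(enc["input_ids"], labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def prune(documents, keep_fraction=0.3):
    scored = sorted(documents, key=perplexity)           # low perplexity first
    return scored[: int(len(scored) * keep_fraction)]
```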
InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval
Recently, InPars introduced a method to efficiently use large language models
(LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced
to generate relevant queries for documents. These synthetic query-document
pairs can then be used to train a retriever. However, InPars and, more
recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to
generate such datasets. In this work we introduce InPars-v2, a dataset
generator that uses open-source LLMs and existing powerful rerankers to select
synthetic query-document pairs for training. A simple BM25 retrieval pipeline
followed by a monoT5 reranker finetuned on InPars-v2 data achieves new
state-of-the-art results on the BEIR benchmark. To allow researchers to further
improve our method, we open source the code, synthetic data, and finetuned
models: https://github.com/zetaalphavector/inPars/tree/master/tp
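A hedged sketch of the synthetic-data step is shown below, with an assumed open-source model and prompt wording; in the full pipeline the generated query-document pairs would additionally be filtered by a reranker before training the retriever.

```python
# Few-shot query generation for a document with an open-source LLM.
# Model choice and prompt are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

FEW_SHOT = (
    "Document: The Eiffel Tower was completed in 1889 in Paris.\n"
    "Relevant query: when was the eiffel tower built\n\n"
)

def generate_query(document):
    prompt = FEW_SHOT + f"Document: {document}\nRelevant query:"
    out = generator(prompt, max_new_tokens=16, do_sample=False)[0]["generated_text"]
    return out[len(prompt):].strip().split("\n")[0]

print(generate_query("BM25 is a ranking function used by search engines."))
```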