Competence-based Curriculum Learning for Neural Machine Translation
Current state-of-the-art NMT systems use large neural networks that are not
only slow to train, but also often require many heuristics and optimization
tricks, such as specialized learning rate schedules and large batch sizes. This
is undesirable as it requires extensive hyperparameter tuning. In this paper,
we propose a curriculum learning framework for NMT that reduces training time,
reduces the need for specialized heuristics or large batch sizes, and results
in overall better performance. Our framework consists of a principled way of
deciding which training samples are shown to the model at different times
during training, based on the estimated difficulty of a sample and the current
competence of the model. Filtering training samples in this manner prevents the
model from getting stuck in bad local optima, making it converge faster and
reach a better solution than the common approach of uniformly sampling training
examples. Furthermore, the proposed method can be easily applied to existing
NMT models by simply modifying their input data pipelines. We show that our
framework can help improve the training time and the performance of both
recurrent neural network models and Transformers, achieving up to a 70%
decrease in training time while also improving accuracy by up to 2.2 BLEU.
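The sampling strategy the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the square-root competence schedule and the difficulty-CDF filter follow the paper's general idea, but the function names, the `c0` default, and the use of sentence length as a difficulty proxy are assumptions made here for the sketch.

```python
import random

def competence(t, T, c0=0.01):
    # Square-root competence schedule: the fraction of the training data
    # (by difficulty rank) the model is allowed to see at step t.
    # c0 is the initial competence; competence reaches 1.0 at step T.
    return min(1.0, ((1 - c0**2) * t / T + c0**2) ** 0.5)

def curriculum_batches(samples, difficulty, steps, batch_size, seed=0):
    # Rank samples by estimated difficulty and map each to its CDF
    # value in (0, 1]; a sample is eligible once its CDF value falls
    # below the model's current competence.
    rng = random.Random(seed)
    ranked = sorted(samples, key=difficulty)
    cdf = {s: (i + 1) / len(ranked) for i, s in enumerate(ranked)}
    for t in range(1, steps + 1):
        c = competence(t, steps)
        # Sample uniformly among the currently eligible (easy enough) examples.
        eligible = [s for s in ranked if cdf[s] <= c]
        yield [rng.choice(eligible) for _ in range(batch_size)]
```

Early batches draw only from the easiest examples; as competence grows toward 1.0, the eligible pool expands until training reverts to uniform sampling over the full corpus, which is why the method drops into an existing pipeline as an input-data filter.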
Towards a Universal Wordnet by Learning from Combined Evidence
Lexical databases are invaluable sources of knowledge about words and their meanings, with numerous applications in areas like NLP, IR, and AI. We propose a methodology for the automatic construction of a large-scale multilingual lexical database where words of many languages are hierarchically organized in terms of their meanings and their semantic relations to other words. This resource is bootstrapped from WordNet, a well-known English-language resource. Our approach extends WordNet with around 1.5 million meaning links for 800,000 words in over 200 languages, drawing on evidence extracted from a variety of resources including existing (monolingual) wordnets, (mostly bilingual) translation dictionaries, and parallel corpora. Graph-based scoring functions and statistical learning techniques are used to iteratively integrate this information and build an output graph. Experiments show that this wordnet has a high level of precision and coverage, and that it can be useful in applied tasks such as cross-lingual text classification.
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization
We study unsupervised multi-document summarization evaluation metrics, which
require neither human-written reference summaries nor human annotations (e.g.
preferences, ratings, etc.). We propose SUPERT, which rates the quality of a
summary by measuring its semantic similarity with a pseudo reference summary,
i.e. selected salient sentences from the source documents, using contextualized
embeddings and soft token alignment techniques. Compared to the
state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with
human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a
neural-based reinforcement learning summarizer, yielding favorable performance
compared to the state-of-the-art unsupervised summarizers. All source code is
available at https://github.com/yg211/acl20-ref-free-eval (ACL 2020).
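The soft token alignment behind the scoring idea can be sketched as below. This is a toy illustration only: SUPERT itself uses contextualized (SBERT-style) embeddings and selects salient source sentences as the pseudo reference, whereas here the one-hot `embed` and the function names are placeholder assumptions so the alignment logic stays self-contained.

```python
import math

def embed(token, vocab):
    # Placeholder one-hot embedding; the real system would use
    # contextualized vectors from a pretrained encoder.
    v = [0.0] * len(vocab)
    if token in vocab:
        v[vocab[token]] = 1.0
    return v

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def soft_alignment_score(summary_tokens, pseudo_ref_tokens):
    # Each summary token is matched to its most similar pseudo-reference
    # token; the score is the mean best similarity over the summary.
    all_tokens = dict.fromkeys(summary_tokens + pseudo_ref_tokens)
    vocab = {t: i for i, t in enumerate(all_tokens)}
    best_sims = [
        max(cosine(embed(s, vocab), embed(r, vocab)) for r in pseudo_ref_tokens)
        for s in summary_tokens
    ]
    return sum(best_sims) / len(best_sims)
```

With contextualized embeddings in place of the one-hot vectors, near-synonyms also align with high similarity, which is what lets the metric score a summary against extracted source sentences rather than a human-written reference.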
Multilingual Unsupervised Sentence Simplification
Progress in Sentence Simplification has been hindered by the lack of
supervised data, particularly in languages other than English. Previous work
has aligned sentences from original and simplified corpora such as English
Wikipedia and Simple English Wikipedia, but this limits corpus size, domain,
and language. In this work, we propose using unsupervised mining techniques to
automatically create training corpora for simplification in multiple languages
from raw Common Crawl web data. When coupled with a controllable generation
mechanism that can flexibly adjust attributes such as length and lexical
complexity, these mined paraphrase corpora can be used to train simplification
systems in any language. We further incorporate multilingual unsupervised
pretraining methods to create even stronger models and show that by training on
mined data rather than supervised corpora, we outperform the previous best
results. We evaluate our approach on English, French, and Spanish
simplification benchmarks and reach state-of-the-art performance with a totally
unsupervised approach. We will release our models and code to mine the data in
any language included in Common Crawl.
Blended Cognition
The central concept of this edited volume is "blended cognition", the natural skill of human beings for constantly combining different heuristics across their many task-solving activities. What was sometimes dismissed as a problem, "bad reasoning", is now the central key to understanding the richness, adaptability, and creativity of human cognition. The topic of this book connects in a significant way with the disciplines of psychology, neurology, anthropology, philosophy, logic, engineering, and AI. In a nutshell: understanding humans better in order to design better machines. It contains a Preface by the editors and 12 chapters.