Towards Learning Terminological Concept Systems from Multilingual Natural Language Text
Terminological Concept Systems (TCS) provide a means of organizing, structuring and representing domain-specific multilingual information, and are important for ensuring terminological consistency in many tasks, such as translation and cross-border communication. While several approaches to (semi-)automatic term extraction exist, learning the interrelations between the extracted terms remains vastly underexplored. We propose an automated method to extract terms and relations across natural languages and specialized domains. To this end, we adapt pretrained multilingual neural language models, which we evaluate on standard term extraction datasets, where they achieve the best-performing results, and on a combination of standard relation extraction datasets, where they achieve competitive results. Code and dataset are publicly available.
Inductive Bias and Modular Design for Sample-Efficient Neural Language Learning
Most of the world's languages suffer from the paucity of annotated data. This curbs the effectiveness of supervised learning, the most widespread approach to modelling language. Instead, an alternative paradigm could take inspiration from the propensity of children to acquire language from limited stimuli, in order to enable machines to learn any new language from a few examples. The abstract mechanisms underpinning this ability include 1) a set of in-born inductive biases and 2) the deep entrenchment of language in other perceptual and cognitive faculties, combined with the ability to transfer and recombine knowledge across these domains. The main contribution of my thesis is giving concrete form to both these intuitions.
Firstly, I argue that endowing a neural network with the correct inductive biases is equivalent to constructing a prior distribution over its weights and its architecture (including connectivity patterns and non-linear activations). This prior is inferred by "reverse-engineering" a representative set of observed languages and harnessing typological features documented by linguists. Thus, I provide a unified framework for cross-lingual transfer and architecture search by recasting them as hierarchical Bayesian neural models.
Secondly, the skills relevant to different language varieties and different tasks in natural language processing are deeply intertwined. Hence, the neural weights modelling the data for each of their combinations can be imagined as lying in a structured space. I introduce a Bayesian generative model of this space, which is factorised into latent variables representing each language and each task. By virtue of this modular design, predictions can generalise to unseen combinations by extrapolating from the data of observed combinations.
The proposed models are empirically validated on a spectrum of language-related tasks (character-level language modelling, part-of-speech tagging, named entity recognition, and common-sense reasoning) and a typologically diverse sample of about a hundred languages. Compared to a series of competitive baselines, they achieve better performance on new languages in zero-shot and few-shot learning settings. In general, they hold promise to extend state-of-the-art language technology to under-resourced languages by means of sample efficiency and robustness to cross-lingual variation.
ERC (Consolidator Grant 648909) Lexical
Google Research Faculty Award 201
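The factorised, modular design described in this thesis abstract can be illustrated with a minimal NumPy sketch. Everything here is made up for illustration (names like `weights_for`, the dimensions, and the random latents are not from the thesis, which infers these latent variables with Bayesian methods); the point is only the compositional structure: weights for a (language, task) pair are generated from that pair's two latents, so unseen combinations can be composed from latents learned on observed ones.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # latent dimensionality (illustrative)

# Per-language and per-task latent variables
# (in the real model these would be inferred from data).
z_lang = {"en": rng.normal(size=d), "sw": rng.normal(size=d)}
z_task = {"pos": rng.normal(size=d), "ner": rng.normal(size=d)}

# Shared map from the concatenated latents to model weights.
M = rng.normal(size=(8, 2 * d))

def weights_for(lang: str, task: str) -> np.ndarray:
    """Compose weights for one (language, task) combination from its two latents."""
    z = np.concatenate([z_lang[lang], z_task[task]])
    return M @ z

# Even if ("sw", "ner") was never observed during training, weights for it
# can be generated by recombining latents learned from other combinations.
w = weights_for("sw", "ner")
print(w.shape)
```

Because each latent is shared across all combinations it participates in, data from ("en", "ner") and ("sw", "pos") jointly constrain the weights extrapolated for the unseen pair ("sw", "ner").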
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond
We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different language families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared BPE vocabulary for all languages, coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting sentence embeddings using English annotated data only, and to transfer it to any of the 93 languages without any modification. Our approach sets a new state of the art on zero-shot cross-lingual natural language inference for 13 of the 14 languages in the XNLI dataset. We also achieve very competitive results in cross-lingual document classification (MLDoc dataset). Our sentence embeddings are also strong at parallel corpus mining, establishing a new state of the art in the BUCC shared task for 3 of its 4 language pairs. Finally, we introduce a new test set of aligned sentences in 122 languages based on the Tatoeba corpus, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our PyTorch implementation, pre-trained encoder and the multilingual test set will be freely available.
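The zero-shot transfer recipe in this abstract (fit a classifier on English sentence embeddings only, then apply it unchanged to other languages) can be sketched as follows. This is a toy nearest-centroid classifier on synthetic stand-in vectors, not the actual system: the real pipeline would use the LASER encoder's 1024-dimensional embeddings, and the shared class direction plus noise below merely mimics the property that translations land near each other in one joint space.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 16  # embedding size (small stand-in; LASER uses 1024)

# Stand-in for a shared multilingual encoder: each class has one direction
# in the joint space, and every "sentence" is that direction plus noise,
# regardless of which language it is written in.
class_dirs = rng.normal(size=(2, d))

def embed(label: int) -> np.ndarray:
    return class_dirs[label] + 0.1 * rng.normal(size=d)

# "Train" a nearest-centroid classifier on English data only.
en_X = np.stack([embed(y) for y in [0, 1] * 50])
en_y = np.array([0, 1] * 50)
centroids = np.stack([en_X[en_y == c].mean(axis=0) for c in (0, 1)])

def predict(x: np.ndarray) -> int:
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Apply it unchanged to "another language": because its embeddings live in
# the same joint space, no per-language adaptation is needed.
fr_correct = sum(predict(embed(y)) == y for y in [0, 1] * 25)
print(fr_correct / 50)
```

The design choice this illustrates is that all language-specific work happens in the encoder; once sentences share one embedding space, any downstream classifier transfers for free.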
State-of-the-art generalisation research in NLP: a taxonomy and review
The ability to generalise well is one of the primary desiderata of natural language processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is not well understood, nor are there any common standards to evaluate it. In this paper, we aim to lay the groundwork for improving both of these issues. We present a taxonomy for characterising and understanding generalisation research in NLP, use that taxonomy to present a comprehensive map of published generalisation studies, and make recommendations for which areas might deserve attention in the future. Our taxonomy is based on an extensive literature review of generalisation research, and contains five axes along which studies can differ: their main motivation, the type of generalisation they aim to solve, the type of data shift they consider, the source by which this data shift is obtained, and the locus of the shift within the modelling pipeline. We use our taxonomy to classify over 400 previous papers that test generalisation, for a total of more than 600 individual experiments. Considering the results of this review, we present an in-depth analysis of the current state of generalisation research in NLP, and make recommendations for the future. Along with this paper, we release a webpage where the results of our review can be dynamically explored, and which we intend to update as new NLP generalisation studies are published. With this work, we aim to take steps towards making state-of-the-art generalisation testing the new status quo in NLP.
Comment: 35 pages of content + 53 pages of references
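The five taxonomy axes named in this abstract can be pictured as a small record type, one field per axis. The class name and the example axis values below are illustrative placeholders, not the paper's actual inventories; the sketch only shows that classifying a study means fixing one value per axis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeneralisationStudy:
    """One entry in a hypothetical study catalogue: one value per taxonomy axis."""
    motivation: str           # main motivation of the study
    generalisation_type: str  # type of generalisation it aims to solve
    shift_type: str           # type of data shift considered
    shift_source: str         # how that data shift is obtained
    shift_locus: str          # where in the modelling pipeline the shift sits

# Classifying a paper = choosing a point in this five-axis space
# (the values here are invented examples).
study = GeneralisationStudy(
    motivation="practical",
    generalisation_type="cross-lingual",
    shift_type="covariate",
    shift_source="naturally occurring",
    shift_locus="train-test",
)
print(study.shift_locus)
```

Aggregating many such records is what yields the "comprehensive map" the abstract describes: each axis becomes a dimension along which the 400+ classified papers can be grouped and compared.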
SERENGETI: Massively Multilingual Language Models for Africa
Multilingual pretrained language models (mPLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. To date, only ~31 out of ~2,000 African languages are covered in existing language models. We ameliorate this limitation by developing SERENGETI, a massively multilingual language model that covers 517 African languages and language varieties. We evaluate our novel models on eight natural language understanding tasks across 20 datasets, comparing to 4 mPLMs that cover 4-23 African languages. SERENGETI outperforms other models on 11 datasets across the eight tasks, achieving 82.27 average F_1. We also perform analyses of errors from our models, which allows us to investigate the influence of language genealogy and linguistic similarity when the models are applied under zero-shot settings. We will publicly release our models for research: https://github.com/UBC-NLP/serengeti
Comment: To appear in Findings of ACL 202
A study of conceptual language similarity: comparison and evaluation
An interesting line of research in natural language processing (NLP) aims to incorporate linguistic typology to bridge linguistic diversity and assist the research of low-resource languages. While most works construct linguistic similarity measures based on lexical or typological features, such as word order and verbal inflection, recent work has introduced a novel approach to defining language similarity based on how languages represent basic concepts, which is complementary to existing similarity measures. In this work, we study this conceptual similarity in detail and evaluate it extensively on a binary classification task …