Deep Clustering of Text Representations for Supervision-free Probing of Syntax
We explore deep clustering of text representations for unsupervised model
interpretation and induction of syntax. As these representations are
high-dimensional, out-of-the-box methods like KMeans do not work well. Thus,
our approach jointly transforms the representations into a lower-dimensional
cluster-friendly space and clusters them. We consider two notions of syntax:
part-of-speech induction (POSI) and constituency labelling (CoLab) in this
work. Interestingly, we find that Multilingual BERT (mBERT) contains a
surprising amount of syntactic knowledge of English, possibly even as much as
English BERT
(EBERT). Our model can be used as a supervision-free probe which is arguably a
less-biased way of probing. We find that unsupervised probes benefit from
higher layers more than supervised probes do. We further note that our
unsupervised probe utilizes EBERT and mBERT representations differently,
especially for POSI. We validate the efficacy of our probe by demonstrating its
capabilities as an unsupervised syntax induction technique. Our probe works
well for both syntactic formalisms by simply adapting the input
representations. We report competitive performance of our probe on 45-tag
English POSI, state-of-the-art performance on 12-tag POSI across 10 languages,
and competitive results on CoLab. We also perform zero-shot syntax induction on
resource-impoverished languages and report strong results.
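As a rough illustration of the joint "transform and cluster" recipe this abstract describes, here is a minimal DEC-style sketch in PyTorch. The architecture, hyperparameters, and the 45-cluster setting (echoing the 45-tag POSI setup) are assumptions for illustration, not the authors' exact model.

# Minimal DEC-style sketch: jointly map high-dimensional text
# representations into a low-dimensional, cluster-friendly space and
# cluster them there. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepCluster(nn.Module):
    def __init__(self, in_dim=768, latent_dim=32, n_clusters=45):
        super().__init__()
        # Non-linear map into the lower-dimensional space.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Learnable cluster centroids in that space.
        self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, x):
        z = self.encoder(x)                       # (batch, latent_dim)
        # Soft assignments via a Student's t kernel, as in DEC.
        d2 = torch.cdist(z, self.centroids) ** 2  # squared distances
        q = (1.0 + d2).reciprocal()
        return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # Sharpened targets that emphasize confident assignments.
    p = (q ** 2) / q.sum(dim=0)
    return p / p.sum(dim=1, keepdim=True)

# One training step on a batch of (placeholder) token representations.
model = DeepCluster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
reps = torch.randn(128, 768)  # stand-in for real mBERT/EBERT vectors
opt.zero_grad()
q = model(reps)
loss = F.kl_div(q.log(), target_distribution(q).detach(), reduction="batchmean")
loss.backward()
opt.step()

The point of the joint objective is that the encoder and centroids adapt to each other, which is what lets this succeed where out-of-the-box KMeans on the raw 768-dimensional vectors does not.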
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT
Pretrained contextual representation models (Peters et al., 2018; Devlin et
al., 2018) have pushed forward the state-of-the-art on many NLP tasks. A new
release of BERT (Devlin, 2018) includes a model simultaneously pretrained on
104 languages with impressive performance for zero-shot cross-lingual transfer
on a natural language inference task. This paper explores the broader
cross-lingual potential of mBERT (multilingual BERT) as a zero-shot language
transfer model on 5 NLP tasks covering a total of 39 languages from various
language families: NLI, document classification, NER, POS tagging, and
dependency parsing. We compare mBERT with the best-published methods for
zero-shot cross-lingual transfer and find mBERT competitive on each task.
Additionally, we investigate the most effective strategy for utilizing mBERT in
this manner, determine to what extent mBERT generalizes away from
language-specific features, and measure factors that influence cross-lingual
transfer.
Comment: EMNLP 2019 Camera Ready
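For concreteness, a minimal sketch of the zero-shot transfer setup evaluated here, using the Hugging Face transformers API: fine-tune mBERT on English supervision only, then apply it unchanged to another language. The label count and example sentence are placeholder assumptions.

# Zero-shot cross-lingual transfer sketch with mBERT (illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=17)  # e.g. 17 UPOS tags

# ... fine-tune `model` on English POS-tagged data here ...

# Zero-shot: tag a Spanish sentence with the English-trained model.
batch = tokenizer("El gato duerme.", return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits       # (1, seq_len, num_labels)
pred = logits.argmax(dim=-1)             # predicted tag ids per subword

No target-language labels are used at any point; the transfer rests entirely on mBERT's shared multilingual representation space.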
Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing
We introduce a novel method for multilingual transfer that utilizes deep
contextual embeddings, pretrained in an unsupervised fashion. While contextual
embeddings have been shown to yield richer representations of meaning compared
to their static counterparts, aligning them poses a challenge due to their
dynamic nature. To this end, we construct context-independent variants of the
original monolingual spaces and utilize their mapping to derive an alignment
for the context-dependent spaces. This mapping readily supports processing of a
target language, improving transfer by context-aware embeddings. Our
experimental results demonstrate the effectiveness of this approach for
zero-shot and few-shot learning of dependency parsing. Specifically, our method
consistently outperforms the previous state-of-the-art on 6 tested languages,
yielding an improvement of 6.8 LAS points on average.
Comment: NAACL 2019
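The alignment idea lends itself to a short sketch: average each word type's contextual vectors into a context-independent anchor, learn an orthogonal map between anchor spaces from a bilingual dictionary, then apply that map to the full contextual space. The data structures and dictionary source below are assumptions for illustration, not the authors' code.

# Anchor-based alignment of contextual spaces (illustrative sketch).
import numpy as np
from scipy.linalg import orthogonal_procrustes

def anchors(contextual_vecs_by_word):
    # {word: (n_occurrences, dim)} -> {word: (dim,)} context-independent anchors
    return {w: v.mean(axis=0) for w, v in contextual_vecs_by_word.items()}

def learn_alignment(src_anchors, tgt_anchors, bilingual_dict):
    # Stack anchor pairs for dictionary entries present in both spaces.
    pairs = [(s, t) for s, t in bilingual_dict
             if s in src_anchors and t in tgt_anchors]
    X = np.stack([src_anchors[s] for s, _ in pairs])
    Y = np.stack([tgt_anchors[t] for _, t in pairs])
    # Orthogonal Procrustes: rotation W minimizing ||XW - Y||_F.
    W, _ = orthogonal_procrustes(X, Y)
    return W

A word's contextual vector in the new language can then be mapped with vec @ W into the space the English-trained parser expects; because W is learned on static anchors, it sidesteps the dynamic, context-dependent nature of the embeddings.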
Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus
Much research effort has been devoted to semantic role labeling (SRL), which
is crucial for natural language understanding. Supervised approaches achieve
impressive performance when large-scale annotated corpora are available, as
they are for resource-rich languages such as English. For low-resource
languages with no annotated SRL dataset, however, it remains challenging to
obtain competitive performance. Cross-lingual SRL is one promising way to
address the problem, and it has advanced considerably through model transfer
and annotation projection. In this paper, we propose a novel alternative based
on
corpus translation, constructing high-quality training datasets for the target
languages from the source gold-standard SRL annotations. Experimental results
on Universal Proposition Bank show that the translation-based method is highly
effective, and the automatically produced pseudo datasets can improve
target-language SRL performance significantly.
Comment: Accepted at ACL 2020
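A hedged sketch of the translate-then-project recipe this abstract outlines: translate each annotated source sentence, word-align it with its translation, and copy the gold role labels through the alignment. The translate and word_align helpers are hypothetical stand-ins for real MT and word-alignment systems, not components from the paper.

# Translation-based SRL corpus construction (illustrative sketch).
def project_srl(source_tokens, gold_labels, translate, word_align):
    """Build a pseudo-annotated target sentence from gold source labels."""
    target_tokens = translate(source_tokens)            # list of target tokens
    alignment = word_align(source_tokens, target_tokens)  # {src_idx: tgt_idx}
    projected = ["O"] * len(target_tokens)
    for src_idx, label in enumerate(gold_labels):
        tgt_idx = alignment.get(src_idx)
        if tgt_idx is not None and label != "O":
            projected[tgt_idx] = label                  # copy role label across
    return target_tokens, projected

Running this over the whole gold-standard source corpus yields the kind of translated training dataset on which a target-language SRL model can then be trained directly.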