One model, two languages: training bilingual parsers with harmonized treebanks
We introduce an approach to train lexicalized parsers using bilingual corpora
obtained by merging harmonized treebanks of different languages, producing
parsers that can analyze sentences in either of the learned languages, or even
sentences that mix both. We test the approach on the Universal Dependency
Treebanks, training with MaltParser and MaltOptimizer. The results show that
these bilingual parsers are more than competitive: most combinations
preserve accuracy, and some even achieve significant improvements over the
corresponding monolingual parsers. Preliminary experiments also show the
approach to be promising on texts with code-switching and when more languages
are added.
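The central data step the abstract describes, merging harmonized treebanks into a single bilingual training file, is simple to picture. Below is a minimal sketch in Python, assuming two Universal Dependencies treebanks in CoNLL-U format; the file names are illustrative, not from the paper:

```python
import random

def read_sentences(path):
    """Yield CoNLL-U sentence blocks (blocks are separated by blank lines)."""
    with open(path, encoding="utf-8") as f:
        block = []
        for line in f:
            if line.strip():
                block.append(line)
            elif block:
                yield "".join(block)
                block = []
        if block:
            yield "".join(block)

# Merge the two harmonized treebanks into one bilingual training set.
sentences = list(read_sentences("en-ud-train.conllu"))   # assumed file name
sentences += list(read_sentences("es-ud-train.conllu"))  # assumed file name
random.shuffle(sentences)  # interleave sentences of both languages

with open("en_es-ud-train.conllu", "w", encoding="utf-8") as out:
    out.write("\n".join(sentences) + "\n")
```

Because the annotations are harmonized, the merged file can then be fed to MaltParser/MaltOptimizer exactly as if it were a single monolingual treebank.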
Cross-lingual Word Clusters for Direct Transfer of Linguistic Structure
It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.
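One way to picture the cluster-feature idea: delexicalize the treebank by replacing surface forms with cross-lingual cluster IDs, so a parser trained on English operates over features that also exist in the target language. A rough Python sketch, assuming Brown-style cluster files in the common `bitstring<TAB>word` format; the file names, column layout, and prefix length are illustrative assumptions, not the paper's exact setup:

```python
def load_clusters(path):
    """Read a Brown-style cluster file: one 'bitstring<TAB>word' entry per line."""
    clusters = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                clusters[parts[1]] = parts[0]
    return clusters

def delexicalize(conll_in, conll_out, clusters):
    """Replace the FORM/LEMMA columns with (prefixes of) cluster bitstrings."""
    with open(conll_in, encoding="utf-8") as f, \
         open(conll_out, "w", encoding="utf-8") as out:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) >= 8 and cols[0].isdigit():  # a token line
                bits = clusters.get(cols[1].lower(), "UNK")
                cols[1] = bits      # full cluster ID replaces the word form
                cols[2] = bits[:6]  # coarse prefix replaces the lemma
                out.write("\t".join(cols) + "\n")
            else:
                out.write(line)

clusters = load_clusters("xlingual.clusters")  # assumed file name
delexicalize("en-train.conll", "en-train.delex.conll", clusters)
```

Because the cluster IDs are induced cross-lingually, the same feature space applies to the foreign-language test data, which is what makes direct transfer possible.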
Cross-Lingual Dependency Parsing for Closely Related Languages - Helsinki's Submission to VarDial 2017
This paper describes the submission from the University of Helsinki to the
shared task on cross-lingual dependency parsing at VarDial 2017. We present
work on annotation projection and treebank translation that gave good results
for all three target languages in the test set. In particular, Slovak seems to
work well with information coming from the Czech treebank, which is in line
with related work. The attachment scores of the cross-lingual models even
surpass those of fully supervised models trained on the target-language treebank. Croatian
is the most difficult language in the test set and the improvements over the
baseline are rather modest. Norwegian works best with information coming from
Swedish, whereas Danish contributes surprisingly little.
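Annotation projection, the first of the two techniques the abstract mentions, transfers a source-language parse onto the target sentence through word alignments. A deliberately simplified Python sketch under the assumption of one-to-one alignments (real projection systems must also handle unaligned and many-to-one tokens):

```python
def project(source_heads, source_deprels, alignment):
    """Project a dependency tree across a one-to-one word alignment.

    source_heads:   source_heads[i] is the head index of source token i
                    (-1 marks the root)
    source_deprels: dependency labels of the source tokens
    alignment:      dict mapping source token index -> target token index
    Returns (target_heads, target_deprels), with None where nothing projects.
    """
    n_target = max(alignment.values()) + 1
    heads = [None] * n_target
    deprels = [None] * n_target
    for s, t in alignment.items():
        h = source_heads[s]
        if h == -1:
            heads[t] = -1            # the root projects to the root
        elif h in alignment:
            heads[t] = alignment[h]  # follow the alignment of the head
        deprels[t] = source_deprels[s]
    return heads, deprels

# Toy example: a two-token Czech tree projected onto an aligned Slovak sentence.
heads, deprels = project([1, -1], ["nsubj", "root"], {0: 0, 1: 1})
```

The projected trees then serve as (noisy) training data for a target-language parser, which works best when the languages are as closely related as Czech and Slovak.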