Treebank Embedding Vectors for Out-of-domain Dependency Parsing
A recent advance in monolingual dependency parsing is the idea of a treebank
embedding vector, which allows all treebanks for a particular language to be
used as training data while at the same time allowing the model to prefer
training data from one treebank over others and to select the preferred
treebank at test time. We build on this idea by 1) introducing a method to
predict a treebank vector for sentences that do not come from a treebank used
in training, and 2) exploring what happens when we move away from predefined
treebank embedding vectors during test time and instead devise tailored
interpolations. We show that 1) there are interpolated vectors that are
superior to the predefined ones, and 2) treebank vectors can be predicted with
sufficient accuracy, for nine out of ten test languages, to match the
performance of an oracle approach that knows the most suitable predefined
treebank embedding for the test set.
Comment: Camera ready for ACL 202
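The "tailored interpolations" of treebank embedding vectors mentioned above amount to forming weighted combinations of the predefined per-treebank vectors at test time. A minimal sketch of that idea, assuming toy vector values, treebank names, and a simple normalised-weight scheme (none of which come from the paper):

```python
# Hypothetical sketch: interpolating predefined treebank embedding
# vectors at test time. The treebank names, vector values, and weights
# below are illustrative, not taken from the paper.

def interpolate(vectors, weights):
    """Return the weighted average of equal-length embedding vectors."""
    assert len(vectors) == len(weights) and vectors
    total = sum(weights)
    dim = len(vectors[0])
    return [sum(w * v[i] for v, w in zip(vectors, weights)) / total
            for i in range(dim)]

# Predefined embeddings for two treebanks of the same language (toy values).
treebank_a = [0.2, -0.5, 0.1]
treebank_b = [0.4, 0.3, -0.2]

# A tailored interpolation leaning towards treebank A, which can then be
# fed to the parser in place of either predefined vector.
mixed = interpolate([treebank_a, treebank_b], [0.7, 0.3])
```

Sweeping the weights over a grid is one cheap way to search for interpolated vectors that outperform the predefined ones on a held-out set.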
Crossings as a side effect of dependency lengths
The syntactic structure of sentences exhibits a striking regularity:
dependencies tend to not cross when drawn above the sentence. We investigate
two competing explanations. The traditional hypothesis is that this trend
arises from an independent principle of syntax that reduces crossings
practically to zero. An alternative to this view is the hypothesis that
crossings are a side effect of dependency lengths, i.e. sentences with shorter
dependency lengths should tend to have fewer crossings. We are able to reject
the traditional view in the majority of languages considered. The alternative
hypothesis can lead to a more parsimonious theory of language.
Comment: the discussion section has been expanded significantly; in press in Complexity (Wiley)
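The notion of crossing used above has a standard combinatorial definition: two dependency arcs drawn above the sentence cross if and only if exactly one endpoint of one arc lies strictly inside the span of the other. A minimal sketch of counting crossings under that definition (the function name and the brute-force pairwise loop are this sketch's, not the paper's):

```python
# Minimal sketch: counting crossing pairs among dependency arcs drawn
# above the sentence. Arcs are undirected pairs of word positions; two
# arcs (i, j) and (k, l), each with first endpoint smaller and ordered
# so that i <= k, cross iff i < k < j < l.

def count_crossings(arcs):
    """Count crossing pairs among undirected dependency arcs."""
    norm = [tuple(sorted(a)) for a in arcs]   # each arc as (low, high)
    crossings = 0
    for x in range(len(norm)):
        for y in range(x + 1, len(norm)):
            (i, j), (k, l) = sorted([norm[x], norm[y]])
            # One endpoint of the later arc falls strictly inside the
            # earlier arc, the other strictly outside: the arcs cross.
            if i < k < j < l:
                crossings += 1
    return crossings

print(count_crossings([(1, 3), (2, 4)]))  # crossing pair
print(count_crossings([(1, 4), (2, 3)]))  # nested pair, no crossing
```

Under the alternative hypothesis, sentences whose arcs have short total length leave little room for one arc to straddle an endpoint of another, so statistics computed with a counter like this should correlate with summed dependency length.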
Preparing, restructuring, and augmenting a French treebank: lexicalised parsers or coherent treebanks?
We present the Modified French Treebank (MFT), a completely revamped French Treebank, derived from the Paris 7 Treebank
(P7T), which is cleaner, more coherent, has several transformed structures, and introduces new linguistic analyses. To determine the effect of these changes, we
investigate how the MFT fares in statistical parsing. Probabilistic parsers trained on the MFT training set (currently 3,800 trees) already outperform their counterparts trained on five times as much P7T data (18,548 trees), an extreme example of the importance of data quality over quantity in statistical parsing. Moreover,
regression analysis on the learning curves of parsers trained on the MFT leads to the prediction that parsers trained on the full projected 18,548-tree MFT training set
will far outscore their counterparts trained on the full P7T. These analyses also show how problematic data can lead to problematic conclusions: in particular, we find that
lexicalisation in the probabilistic parsing of French is probably not as crucial as was once thought (Arun and Keller, 2005).
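The learning-curve extrapolation described above can be sketched as fitting a simple parametric curve to (training size, error) points and evaluating it at the full treebank size. A common choice, assumed here rather than taken from the paper, is a power law fitted by least squares in log-log space; the data points below are toy values:

```python
# Hypothetical sketch of learning-curve extrapolation: fit a power law
# error = a * size**b via linear regression on (log size, log error),
# then extrapolate to the full projected training-set size. The error
# values are invented for illustration; only the 3,800 / 18,548 tree
# counts come from the abstract.
import math

def fit_power_law(sizes, errors):
    """Least-squares fit of log(error) = log(a) + b * log(size)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Toy learning-curve points: (training trees, parsing error rate).
sizes = [1000, 2000, 3800]
errors = [0.30, 0.26, 0.23]

a, b = fit_power_law(sizes, errors)
predicted_error = a * 18548 ** b   # extrapolate to the full treebank size
```

With error decreasing in training size the fitted exponent b is negative, so the extrapolated error at 18,548 trees falls below the largest observed point, which is the shape of argument the abstract's prediction rests on.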
Introduction to the special issue on cross-language algorithms and applications
With the increasingly global nature of our everyday interactions, the need for multilingual technologies to support efficient and effective information access and communication cannot be overemphasized. Computational modeling of language has been the focus of
Natural Language Processing, a subdiscipline of Artificial Intelligence. One of the current challenges for this discipline is to design methodologies and algorithms that are cross-language in order to create multilingual technologies rapidly. The goal of this JAIR special
issue on Cross-Language Algorithms and Applications (CLAA) is to present leading research in this area, with emphasis on developing unifying themes that could lead to the development of the science of multi- and cross-lingualism. In this introduction, we provide the reader with the motivation for this special issue and summarize the contributions of the papers that have been included. The selected papers cover a broad range of cross-lingual technologies including machine translation, domain and language adaptation for sentiment
analysis, cross-language lexical resources, dependency parsing, information retrieval and knowledge representation. We anticipate that this special issue will serve as an invaluable resource for researchers interested in topics of cross-lingual natural language processing.