268 research outputs found
Introduction to the CoNLL-2001 Shared Task: Clause Identification
We describe the CoNLL-2001 shared task: dividing text into clauses. We give
background information on the data sets, present a general overview of the
systems that have taken part in the shared task and briefly discuss their
performance.
Alignment-guided chunking
We introduce an adaptable monolingual chunking approach, Alignment-Guided Chunking (AGC), which makes use of knowledge of word alignments acquired from bilingual corpora. Our approach is motivated by the observation that a sentence should be chunked differently depending on the foreseen end-task. For example, given the different requirements of translation into (say) French and German, it is inappropriate to chunk up an English string in exactly the same way as preparation for translation into one or other of these languages. We test our chunking approach on two language pairs, French-English and German-English, where the two bilingual corpora share the same English sentences. Two chunkers, trained on French-English (FE-Chunker) and German-English (DE-Chunker) respectively, are used to chunk the same English sentences. We construct two test sets, one suitable for French-English and one for German-English. The performance of each chunker is evaluated on the appropriate test set; with one reference translation only, we report F-scores of 32.63% for the FE-Chunker and 40.41% for the DE-Chunker.
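The abstract does not spell out how chunk boundaries are derived from the alignments, so the following is only a hypothetical sketch of the general idea: start a new chunk in the English sentence whenever a word's aligned target-language positions stop being contiguous with the previous word's. The function name and the contiguity heuristic are illustrative assumptions, not the paper's actual procedure.

```python
def chunk_from_alignments(n_words, alignment):
    """Hypothetical sketch: split an n_words-long source sentence into
    chunks using word alignments.

    alignment: set of (src, tgt) 0-based index pairs.
    Returns a list of (start, end) inclusive source-index chunks.
    """
    # Target positions aligned to each source word, in order.
    tgt = {s: sorted(t for s2, t in alignment if s2 == s)
           for s in range(n_words)}
    chunks, start = [], 0
    for i in range(1, n_words):
        prev, cur = tgt[i - 1], tgt[i]
        # Keep extending the chunk only while consecutive source words
        # map to adjacent target positions (an assumed heuristic).
        contiguous = bool(prev) and bool(cur) and cur[0] - prev[-1] <= 1
        if not contiguous:
            chunks.append((start, i - 1))
            start = i
    chunks.append((start, n_words - 1))
    return chunks

# Monotone alignment with a gap between source words 1 and 2:
# words 0-1 align to targets 0-1, words 2-3 align to targets 3-4,
# so the sentence splits into two chunks.
print(chunk_from_alignments(4, {(0, 0), (1, 1), (2, 3), (3, 4)}))
# [(0, 1), (2, 3)]
```

Because the chunker is trained per language pair, the same English sentence can receive different chunkings from the FE- and DE-derived data, which is exactly the adaptability the abstract argues for.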
Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition
We describe the CoNLL-2003 shared task: language-independent named entity
recognition. We give background information on the data sets (English and
German) and the evaluation method, present a general overview of the systems
that have taken part in the task and discuss their performance.
Viable Dependency Parsing as Sequence Labeling
We recast dependency parsing as a sequence labeling problem, exploring
several encodings of dependency trees as labels. While dependency parsing by
means of sequence labeling had been attempted in existing work, results
suggested that the technique was impractical. We show instead that with a
conventional BiLSTM-based model it is possible to obtain fast and accurate
parsers. These parsers are conceptually simple, not needing traditional parsing
algorithms or auxiliary structures. However, experiments on the PTB and a
sample of UD treebanks show that they provide a good speed-accuracy tradeoff,
with results competitive with more complex approaches. Comment: Camera-ready version to appear at NAACL 2019 (final peer-reviewed manuscript). 8 pages (incl. appendix).
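To make "encodings of dependency trees as labels" concrete, here is a minimal sketch of one family of such encodings: labeling each token with the signed offset to its head, so that parsing reduces to predicting one label per word with a standard sequence labeler. This is an illustrative reconstruction of the idea, not the paper's exact label set.

```python
def encode_relative(heads):
    """Encode a dependency tree as per-token labels.

    heads[i] is the 1-based index of token i+1's head (0 = root).
    The label for each token is the signed distance to its head.
    """
    return [h - i for i, h in enumerate(heads, start=1)]

def decode_relative(labels):
    """Invert the encoding: recover head indices from the labels."""
    return [i + off for i, off in enumerate(labels, start=1)]

# "She reads books": the root is "reads" (token 2), and both
# "She" and "books" attach to it.
heads = [2, 0, 2]
labels = encode_relative(heads)
print(labels)                      # [1, -2, -1]
assert decode_relative(labels) == heads
```

With such an encoding, any off-the-shelf BiLSTM tagger can emit a tree, which is why no transition system or chart algorithm is needed; ill-formed label sequences do need a repair step in practice, a detail omitted from this sketch.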
- …