Anchoring and agreement in syntactic annotations
We present a study on two key characteristics of human syntactic annotations:
anchoring and agreement. Anchoring is a well-known cognitive bias in human
decision making, where judgments are drawn towards pre-existing values. We
study the influence of anchoring on a standard approach to creation of
syntactic resources where syntactic annotations are obtained via human editing
of tagger and parser output. Our experiments demonstrate a clear anchoring
effect and reveal unwanted consequences, including overestimation of parsing
performance and lower quality of annotations in comparison with human-based
annotations. Using sentences from the Penn Treebank WSJ, we also report
systematically obtained inter-annotator agreement estimates for English
dependency parsing. Our agreement results control for parser bias, and are
consequential in that they are on par with state-of-the-art parsing performance
for English newswire. We discuss the impact of our findings on strategies for
future annotation efforts and parser evaluations.
Comment: EMNLP 2016
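To make the agreement measure concrete: inter-annotator agreement for dependency annotations can be computed as an attachment score between two annotators' trees, mirroring the UAS/LAS metrics used for parser evaluation. The following is a minimal sketch, not the authors' code, assuming each annotation is a list of (head, label) pairs, one per token.

def attachment_agreement(ann_a, ann_b):
    """Return (unlabeled, labeled) agreement between two annotations
    of the same sentence, as fractions of matching tokens."""
    assert len(ann_a) == len(ann_b), "annotations must cover the same tokens"
    n = len(ann_a)
    uas = sum(1 for (ha, _), (hb, _) in zip(ann_a, ann_b) if ha == hb) / n
    las = sum(1 for (ha, la), (hb, lb) in zip(ann_a, ann_b)
              if ha == hb and la == lb) / n
    return uas, las

# Two annotators on a four-token sentence (head 0 = root); they agree on
# every head but disagree on one label, so UAS = 1.0 and LAS = 0.75.
annotator_1 = [(2, "nsubj"), (0, "root"), (2, "obj"), (2, "punct")]
annotator_2 = [(2, "nsubj"), (0, "root"), (2, "iobj"), (2, "punct")]
print(attachment_agreement(annotator_1, annotator_2))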
ConStance: Modeling Annotation Contexts to Improve Stance Classification
Manual annotations are a prerequisite for many applications of machine
learning. However, weaknesses in the annotation process itself are easy to
overlook. In particular, scholars often choose what information to give to
annotators without examining these decisions empirically. For subjective tasks
such as sentiment analysis, sarcasm, and stance detection, such choices can
impact results. Here, for the task of political stance detection on Twitter, we
show that providing too little context can result in noisy and uncertain
annotations, whereas providing too strong a context may cause the context to outweigh
other signals. To characterize and reduce these biases, we develop ConStance, a
general model for reasoning about annotations across information conditions.
Given conflicting labels produced by multiple annotators seeing the same
instances with different contexts, ConStance simultaneously estimates gold
standard labels and also learns a classifier for new instances. We show that
the classifier learned by ConStance outperforms a variety of baselines at
predicting political stance, while the model's interpretable parameters shed
light on the effects of each context.
Comment: To appear at EMNLP 2017
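To illustrate the setting ConStance models (this toy sketch is not the ConStance model itself), suppose each tweet is labeled by several annotators who each saw it under a different information condition. A weighted vote with assumed per-condition reliabilities resolves the conflicts; ConStance instead estimates such effects jointly with a classifier.

from collections import defaultdict

# Assumed reliability weight per information condition (illustrative values,
# not estimates from the paper).
CONTEXT_WEIGHT = {"tweet_only": 0.6, "with_thread": 1.0, "with_profile": 0.8}

def resolve_label(votes):
    """votes: list of (context_condition, label); return the weighted winner."""
    score = defaultdict(float)
    for context, label in votes:
        score[label] += CONTEXT_WEIGHT[context]
    return max(score, key=score.get)

# Three annotators, three conditions, conflicting labels: "pro" wins 1.8 to 0.6.
votes = [("tweet_only", "anti"), ("with_thread", "pro"), ("with_profile", "pro")]
print(resolve_label(votes))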
Dependency parsing of learner English
Current syntactic annotation of large-scale learner corpora mainly relies on “standard parsers” trained on native language data. Understanding how these parsers perform on learner data is important for downstream research and applications related to learner language. This study evaluates the performance of multiple standard probabilistic parsers on learner English. Our contributions are three-fold. Firstly, we demonstrate that the common practice of constructing a gold standard – by manually correcting the pre-annotation of a single parser – can introduce bias into parser evaluation. We propose an alternative annotation method which controls for this bias. Secondly, we quantify the influence of learner errors on parsing errors, and identify the learner errors that impact parsing the most. Finally, we compare the performance of the parsers on learner English and native English. Our results have useful implications for how to select a standard parser for learner English.
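As a rough illustration of the second contribution, one way to quantify how learner errors influence parsing is to compare per-sentence attachment scores against per-sentence learner-error counts. The helper and toy data below are hypothetical and stand in for gold and predicted head indices.

def sentence_uas(gold_heads, pred_heads):
    """Fraction of tokens whose predicted head matches the gold head."""
    return sum(g == p for g, p in zip(gold_heads, pred_heads)) / len(gold_heads)

# Toy records: (learner_error_count, gold_heads, predicted_heads).
sentences = [
    (0, [2, 0, 2], [2, 0, 2]),
    (2, [2, 0, 2, 2], [2, 0, 3, 2]),
    (4, [3, 3, 0, 3], [3, 1, 0, 1]),
]

for n_errors, gold, pred in sentences:
    print(n_errors, round(sentence_uas(gold, pred), 2))
# Prints 1.0, 0.75, 0.5: in this toy data, more learner errors
# co-occur with lower per-sentence UAS.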
New Treebank or Repurposed? On the Feasibility of Cross-Lingual Parsing of Romance Languages with Universal Dependencies
This paper addresses the feasibility of cross-lingual parsing with Universal Dependencies (UD) between Romance languages, analyzing its performance when compared to the use of manually annotated resources in the target languages. Several experiments take into account factors such as the lexical distance between the source and target varieties, the impact of delexicalization, the combination of different source treebanks, and the adaptation of resources to the target language, among others. The results of these evaluations show that the direct application of a parser from one Romance language to another reaches labeled attachment score (LAS) values similar to those obtained with a manual annotation of about 3,000 tokens in the target language, and unlabeled attachment score (UAS) results equivalent to the use of around 7,000 tokens, depending on the case. These numbers can increase noticeably with a focused selection of the source treebanks. Furthermore, the removal of the words in the training corpus (delexicalization) is not useful in most cases of cross-lingual parsing of Romance languages. The lessons learned from these experiments were used to build a new UD treebank for Galician, with 1,000 sentences manually corrected after an automatic cross-lingual annotation. Several evaluations on this new resource show that a cross-lingual parser built with the best combination and adaptation of the source treebanks performs better (77 percent LAS and 82 percent UAS) than parsers trained on more than 16,000 (for LAS) and more than 20,000 (for UAS) manually labeled tokens of Galician.
Ministerio de Economía y Competitividad; FJCI-2014-22853
Ministerio de Economía y Competitividad; FFI2014-51978-C2-1-R
Ministerio de Economía y Competitividad; FFI2014-51978-C2-2-R
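For readers unfamiliar with the delexicalization step evaluated above: word forms and lemmas are removed from the training treebank so that the parser learns from POS tags and dependency structure alone. A minimal sketch over CoNLL-U lines follows (the helper name and the toy Galician sentence are illustrative; the column layout is the standard UD format).

def delexicalize_conllu(lines):
    """Blank the FORM and LEMMA columns of CoNLL-U token lines."""
    out = []
    for line in lines:
        if line.startswith("#") or not line.strip():
            out.append(line)  # keep comments and sentence separators
            continue
        cols = line.rstrip("\n").split("\t")
        cols[1] = "_"  # FORM
        cols[2] = "_"  # LEMMA
        out.append("\t".join(cols))
    return out

# Toy Galician sentence ("Ela canta ben" - "She sings well").
sample = [
    "# text = Ela canta ben",
    "1\tEla\tela\tPRON\t_\t_\t2\tnsubj\t_\t_",
    "2\tcanta\tcantar\tVERB\t_\t_\t0\troot\t_\t_",
    "3\tben\tben\tADV\t_\t_\t2\tadvmod\t_\t_",
]
print("\n".join(delexicalize_conllu(sample)))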