Grammatical error correction using hybrid systems and type filtering
This paper describes our submission to the CoNLL 2014 shared task on grammatical error correction using a hybrid approach, which combines a rule-based system and an SMT system augmented by a large web-based
language model. Furthermore, we demonstrate that correction type estimation can be used to remove unnecessary corrections, improving precision without harming recall. Our best hybrid system achieves state-of-the-art results, ranking first on the original test set and second on the test set with alternative annotations. We would like to thank Cambridge English Language Assessment, a division of Cambridge Assessment, for supporting this research.
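The type-filtering idea in this abstract can be sketched in a few lines: each candidate correction carries an estimated error type, and types whose estimated precision is too low are discarded. The type names, precision figures, and data structures below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of correction type filtering: candidate edits are
# tagged with an estimated error type, and corrections whose type has a
# low estimated precision are dropped to raise overall precision.
# Type names and precision values here are invented for illustration.

TYPE_PRECISION = {      # assumed per-type precision estimates
    "SVA": 0.82,        # subject-verb agreement
    "ART": 0.74,        # article/determiner
    "WO": 0.31,         # word order (unreliable type in this sketch)
}

def filter_corrections(corrections, threshold=0.5):
    """Keep only corrections whose estimated type precision >= threshold."""
    return [c for c in corrections
            if TYPE_PRECISION.get(c["type"], 0.0) >= threshold]

candidates = [
    {"span": "have", "fix": "has", "type": "SVA"},
    {"span": "a apple", "fix": "an apple", "type": "ART"},
    {"span": "quickly ran", "fix": "ran quickly", "type": "WO"},
]

kept = filter_corrections(candidates)
print([c["type"] for c in kept])  # → ['SVA', 'ART']
```

Dropping the low-precision "WO" correction sacrifices at most one true positive while removing a type that mostly produces false positives, which is how filtering can improve precision without a large recall cost.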
Sparse Coding of Neural Word Embeddings for Multilingual Sequence Labeling
In this paper we propose and carefully evaluate a sequence labeling framework
which solely utilizes sparse indicator features derived from dense distributed
word representations. The proposed model obtains (near) state-of-the-art
performance for both part-of-speech tagging and named entity recognition for a
variety of languages. Our model relies only on a few thousand sparse
coding-derived features, without applying any modification of the word
representations employed for the different tasks. The proposed model has
favorable generalization properties, as it retains over 89.8% of its average POS
tagging accuracy when trained on 1.2% of the total available training data,
i.e. 150 sentences per language.
JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction
We present a new parallel corpus, the JHU FLuency-Extended GUG corpus (JFLEG), for
developing and evaluating grammatical error correction (GEC). Unlike other
corpora, it represents a broad range of language proficiency levels and uses
holistic fluency edits to not only correct grammatical errors but also make the
original text more native sounding. We describe the types of corrections made
and benchmark four leading GEC systems on this corpus, identifying specific
areas in which they do well and how they can improve. JFLEG fulfills the need
for a new gold standard to properly assess the current state of GEC.
Comment: To appear in EACL 2017 (short papers).
Establishing a New State-of-the-Art for French Named Entity Recognition
The French TreeBank developed at the University Paris 7 is the main source of
morphosyntactic and syntactic annotations for French. However, it does not
include explicit information related to named entities, which are among the
most useful information for several natural language processing tasks and
applications. Moreover, no large-scale French corpus with named entity
annotations contains referential information, which complements the type and
span of each mention with an indication of the entity it refers to. We have
manually annotated the French TreeBank with such information, after an
automatic pre-annotation step. We sketch the underlying annotation guidelines
and we provide a few figures about the resulting annotations.
CamemBERT: a Tasty French Language Model
Pretrained language models are now ubiquitous in Natural Language Processing.
Despite their success, most available models have either been trained on
English data or on the concatenation of data in multiple languages. This makes
practical use of such models --in all languages except English-- very limited.
In this paper, we investigate the feasibility of training monolingual
Transformer-based language models for other languages, taking French as an
example and evaluating our language models on part-of-speech tagging,
dependency parsing, named entity recognition and natural language inference
tasks. We show that the use of web crawled data is preferable to the use of
Wikipedia data. More surprisingly, we show that a relatively small web crawled
dataset (4GB) leads to results that are as good as those obtained using larger
datasets (130+GB). Our best-performing model, CamemBERT, reaches or improves the
state of the art in all four downstream tasks.
Comment: ACL 2020 long paper. Web site: https://camembert-model.f
A Full Non-Monotonic Transition System for Unrestricted Non-Projective Parsing
Restricted non-monotonicity has been shown beneficial for the projective
arc-eager dependency parser in previous research, as posterior decisions can
repair mistakes made in previous states due to the lack of information. In this
paper, we propose a novel, fully non-monotonic transition system based on the
non-projective Covington algorithm. As a non-monotonic system requires
exploration of erroneous actions during the training process, we develop
several non-monotonic variants of the recently defined dynamic oracle for the
Covington parser, based on tight approximations of the loss. Experiments on
datasets from the CoNLL-X and CoNLL-XI shared tasks show that a non-monotonic
dynamic oracle outperforms the monotonic version in the majority of languages.
Comment: 11 pages. Accepted for publication at ACL 201
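To make the transition-system setting concrete, here is a sketch of the underlying *monotonic* Covington system that the paper builds on (not the proposed non-monotonic variant). A configuration is (l1, l2, buffer, arcs); the four transitions move words between the lists and optionally add an arc between the top of l1 and the front of the buffer. Word ids, names, and the example derivation are ours.

```python
# Sketch of the monotonic Covington transition system for non-projective
# dependency parsing. Arcs are (head, dependent) pairs over 1-based word ids.

def shift(l1, l2, b, arcs):            # read the next word
    return l1 + l2 + [b[0]], [], b[1:], arcs

def no_arc(l1, l2, b, arcs):           # skip the top of l1
    return l1[:-1], [l1[-1]] + l2, b, arcs

def left_arc(l1, l2, b, arcs):         # buffer front heads top of l1
    return l1[:-1], [l1[-1]] + l2, b, arcs | {(b[0], l1[-1])}

def right_arc(l1, l2, b, arcs):        # top of l1 heads buffer front
    return l1[:-1], [l1[-1]] + l2, b, arcs | {(l1[-1], b[0])}

# Because NO-ARC lets the parser reach non-adjacent words, it can build
# crossing (non-projective) arcs such as (1,3) and (2,4):
c = ([], [], [1, 2, 3, 4], set())
for step in [shift, shift, no_arc, right_arc, shift, no_arc, right_arc, shift]:
    c = step(*c)
print(sorted(c[3]))  # → [(1, 3), (2, 4)]
```

The paper's contribution is to relax the monotonicity constraint on these transitions, so that a later decision may override an earlier erroneous arc, with dynamic oracles supplying the loss-aware supervision that such exploration requires.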