JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction
We present a new parallel corpus, the JHU FLuency-Extended GUG corpus (JFLEG), for
developing and evaluating grammatical error correction (GEC). Unlike other
corpora, it represents a broad range of language proficiency levels and uses
holistic fluency edits to not only correct grammatical errors but also make the
original text more native-sounding. We describe the types of corrections made
and benchmark four leading GEC systems on this corpus, identifying specific
areas in which they do well and how they can improve. JFLEG fulfills the need
for a new gold standard to properly assess the current state of GEC.
Comment: To appear in EACL 2017 (short papers)
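As an aside, the reference-based scoring that benchmarks like JFLEG rely on can be illustrated in a few lines. JFLEG is conventionally scored with GLEU; the sketch below is a simplified clipped n-gram precision against multiple fluency references, not the official metric, and the example sentences are invented for illustration.

```python
# Simplified sketch of reference-based GEC scoring on a JFLEG-style corpus.
# The official JFLEG metric is GLEU; this clipped n-gram precision only
# illustrates the general idea. All example data below is invented.
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(hypothesis, references, max_n=4):
    """Average clipped n-gram precision of a hypothesis against references."""
    hyp_tokens = hypothesis.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hyp_tokens, n)
        if not hyp_counts:
            continue
        # Clip each n-gram count by its maximum count in any one reference,
        # so repeating a matching phrase is not rewarded.
        max_ref = Counter()
        for ref in references:
            for gram, count in ngrams(ref.split(), n).items():
                max_ref[gram] = max(max_ref[gram], count)
        matched = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        precisions.append(matched / sum(hyp_counts.values()))
    return sum(precisions) / len(precisions) if precisions else 0.0

# Example: one system correction scored against two fluency references.
system_out = "They had spent their childhood in the city ."
refs = ["They spent their childhood in the city .",
        "They had spent their childhoods in the city ."]
print(f"n-gram precision: {ngram_precision(system_out, refs):.3f}")
```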
Evaluation of Really Good Grammatical Error Correction
Although rarely stated, in practice, Grammatical Error Correction (GEC)
encompasses various models with distinct objectives, ranging from grammatical
error detection to improving fluency. Traditional evaluation methods fail to
capture the full range of system capabilities and objectives.
Reference-based evaluations are limited in capturing the wide variety of
possible corrections, are subject to biases introduced during reference
creation, and are prone to favoring fixes of local errors over overall text
improvement. The emergence of large language models (LLMs) has further
highlighted the shortcomings of these evaluation strategies, emphasizing the
need for a paradigm shift in evaluation methodology. In the current study, we
perform a comprehensive evaluation of various GEC systems using a recently
published dataset of Swedish learner texts. The evaluation is performed using
established evaluation metrics as well as human judges. We find that GPT-3 in a
few-shot setting by far outperforms previous grammatical error correction
systems for Swedish, a language comprising only 0.11% of its training data. We
also find that current evaluation methods contain undesirable biases that a
human evaluation is able to reveal. We suggest using human post-editing of GEC
system outputs to analyze the amount of change required to reach native-level
human performance on the task, and provide a dataset annotated with human
post-edits and assessments of grammaticality, fluency and meaning preservation
of GEC system outputs.
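The post-editing analysis proposed here can be approximated with a simple edit-distance proxy. The sketch below is a minimal illustration, not the paper's annotation protocol, and the example sentence pair is invented: it measures how much a human editor changed a system output on the way to a native-level version.

```python
# Minimal sketch of measuring post-edit effort on GEC system outputs:
# the fraction of a system output a human editor had to change.
# Uses character-level similarity from the standard library;
# the example pair below is invented, not from the Swedish dataset.
import difflib

def post_edit_effort(system_output: str, human_post_edit: str) -> float:
    """Fraction of the output changed by the post-editor (0 = untouched)."""
    similarity = difflib.SequenceMatcher(
        None, system_output, human_post_edit).ratio()
    return 1.0 - similarity

# Example pair (invented for illustration):
sys_out = "He go to school every days ."
post_edit = "He goes to school every day ."
print(f"post-edit effort: {post_edit_effort(sys_out, post_edit):.2f}")
```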
Why We Need New Evaluation Metrics for NLG
The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly.
Comment: accepted to EMNLP 2017
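The segment- versus system-level contrast the authors draw is easy to reproduce in miniature. The sketch below correlates a hypothetical metric with hypothetical human ratings, first per output and then per system; all scores are invented toy data, not results from the paper.

```python
# Sketch of the segment- vs system-level distinction: correlate an
# automatic metric with human judgements per output, then again after
# averaging per system. All scores below are invented toy data.
from statistics import correlation, mean  # Pearson r; Python 3.10+

# Toy data: three systems, three outputs each as (metric score, human rating).
scores = {
    "sys_A": [(0.62, 3.0), (0.41, 4.0), (0.55, 2.0)],
    "sys_B": [(0.30, 2.0), (0.35, 1.0), (0.28, 2.5)],
    "sys_C": [(0.50, 3.5), (0.47, 2.5), (0.52, 3.0)],
}

# Segment level: pool every (metric, human) pair across systems.
metric_seg = [m for pairs in scores.values() for m, _ in pairs]
human_seg = [h for pairs in scores.values() for _, h in pairs]
print(f"segment-level r = {correlation(metric_seg, human_seg):.2f}")

# System level: one averaged point per system, which correlates far
# better on this toy data, mirroring the paper's observation.
metric_sys = [mean(m for m, _ in pairs) for pairs in scores.values()]
human_sys = [mean(h for _, h in pairs) for pairs in scores.values()]
print(f"system-level r = {correlation(metric_sys, human_sys):.2f}")
```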