Existing grammatical error correction (GEC) datasets are compiled from well-formed written text, which limits their applicability to other domains such as informal writing and dialog. In this paper, we present a
novel parallel GEC dataset drawn from open-domain chatbot conversations; this
dataset is, to our knowledge, the first GEC dataset targeted to a
conversational setting. To demonstrate the utility of the dataset, we use our
annotated data to fine-tune a state-of-the-art GEC model, resulting in a 16-point increase in precision. This is particularly important for GEC, where precision is generally valued over recall because false positives can seriously confuse language learners. We
also present a detailed annotation scheme which ranks errors by perceived
impact on comprehensibility, making our dataset both reproducible and
extensible. Experimental results show the effectiveness of our data in
improving GEC model performance in conversational scenarios.