GrammarGPT: Exploring Open-Source LLMs for Native Chinese Grammatical Error Correction with Supervised Fine-Tuning
Grammatical error correction aims to correct ungrammatical sentences
automatically. Recently, some work has demonstrated the excellent capabilities
of closed-source Large Language Models (LLMs, e.g., ChatGPT) in grammatical
error correction. However, the potential of open-source LLMs remains
unexplored. In this paper, we introduced GrammarGPT, an open-source LLM, to
preliminarily explore its potential for native Chinese grammatical error
correction. The core recipe of GrammarGPT is to leverage a hybrid dataset of
ChatGPT-generated and human-annotated data. For grammatical errors with clues, we
proposed a heuristic method to guide ChatGPT to generate ungrammatical
sentences by providing those clues. For grammatical errors without clues, we
collected ungrammatical sentences from publicly available websites and manually
corrected them. In addition, we employed an error-invariant augmentation method
to enhance the ability of the model to correct native Chinese grammatical
errors. We ultimately constructed about 1k parallel sentence pairs and used
them to fine-tune open-source LLMs (e.g., Phoenix, released by The Chinese
University of Hong Kong, Shenzhen) with instruction tuning. The experimental
results show that GrammarGPT outperforms the existing SOTA system
significantly. Although its parameter count is 20x larger than that of the SOTA
baseline, the amount of data required for instruction tuning is 1200x smaller,
illustrating the potential of open-source LLMs for native CGEC. Our GrammarGPT
was ranked in NLPCC2023 SharedTask1, demonstrating our approach's
effectiveness. The code and data are available at
\url{https://github.com/FreedomIntelligence/GrammarGPT}
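The clue-guided construction described above can be illustrated with a toy sketch. The clue pairs and the insertion rule below are hypothetical examples of redundant-component errors; the paper itself prompts ChatGPT with such clues rather than applying rules directly:

```python
# Sketch of the clue idea behind GrammarGPT's data construction.
# The clue list and rule are illustrative, not the paper's actual prompts.

# Clues for redundant-component errors: pairing an approximator such as
# "约" (about) with "左右" (around) in one clause is redundant and
# therefore ungrammatical in native Chinese.
REDUNDANT_CLUES = [("约", "左右"), ("超过", "以上")]

def make_ungrammatical(sentence: str) -> str:
    """Inject a redundant clue word to create an ungrammatical variant."""
    for head, tail in REDUNDANT_CLUES:
        if head in sentence and tail not in sentence:
            # Append the redundant tail before the final period
            # (a crude stand-in for placing it after the number phrase).
            return sentence.replace("。", tail + "。")
    return sentence

correct = "这座桥长约三千米。"          # "The bridge is about 3,000 m long."
wrong = make_ungrammatical(correct)    # adds redundant "左右"
# (wrong, correct) then forms one parallel training pair.
```

A pair of such sentences, ungrammatical source and corrected target, is exactly the parallel-data format the fine-tuning step consumes.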
An Adversarial Multi-Task Learning Method for Chinese Text Correction with Semantic Detection
Text correction, especially semantic correction in widely used scenarios, is in
strong demand for improving the fluency and writing efficiency of text. An
adversarial multi-task learning method is proposed to enhance the modeling and
detection of character polysemy in Chinese sentence context. In this method,
two models, a masked language model and a scoring language model, are
introduced as a pair of coupled yet adversarial learning tasks. Moreover, a
Monte Carlo tree search strategy and a policy network are introduced to
accomplish efficient Chinese text correction with semantic detection.
Experiments are executed on three datasets against five comparable methods,
and the results show that our method achieves good performance on the Chinese
text correction task with better semantic rationality.
Comment: Published at the 31st International Conference on Artificial Neural Networks
CSCD-IME: Correcting Spelling Errors Generated by Pinyin IME
Chinese Spelling Correction (CSC) is a task to detect and correct spelling
mistakes in texts. In fact, most Chinese input is produced with a pinyin input
method (IME), so studying the spelling errors that arise in this process is
especially practical and valuable. However, there has been no research dedicated
to this essential scenario. In this paper, we first present a Chinese Spelling Correction Dataset
for errors generated by pinyin IME (CSCD-IME), including 40,000 annotated
sentences from real posts of official media on Sina Weibo. Furthermore, we
propose a novel method to automatically construct large-scale and high-quality
pseudo data by simulating the input through pinyin IME. A series of analyses
and experiments on CSCD-IME show that spelling errors produced by pinyin IMEs
follow a distinctive distribution at both the pinyin and semantic levels and
are sufficiently challenging. Meanwhile, our proposed pseudo-data construction
method better fits this error distribution and improves the performance of CSC
systems. Finally, we provide a practical guide to using pseudo data, covering
the data scale, the data source, and the training strategy.
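The pseudo-data idea, simulating a user picking the wrong candidate from a pinyin IME, can be sketched with a toy homophone table. The table and error rate below are illustrative; CSCD-IME simulates a real IME over large corpora:

```python
# Sketch of pseudo-data construction by simulating pinyin-IME errors.
# The homophone table is a hypothetical toy; a real pipeline would use
# an actual IME's candidate lists.
import random

# Characters sharing a pinyin reading are IME candidates for each other.
HOMOPHONES = {
    "shi": ["是", "事", "市", "式"],
    "ta": ["他", "她", "它"],
}
CHAR_TO_PINYIN = {c: p for p, chars in HOMOPHONES.items() for c in chars}

def inject_ime_error(sentence: str, rate: float = 0.3, seed: int = 0) -> str:
    """Replace characters with same-pinyin candidates, mimicking a wrong
    IME candidate pick; the errors stay plausible at the pinyin level."""
    rng = random.Random(seed)
    out = []
    for ch in sentence:
        pinyin = CHAR_TO_PINYIN.get(ch)
        if pinyin and rng.random() < rate:
            candidates = [c for c in HOMOPHONES[pinyin] if c != ch]
            out.append(rng.choice(candidates))
        else:
            out.append(ch)
    return "".join(out)
```

Because every substituted character shares its pinyin with the original, the resulting pseudo-errors match the pinyin-level distribution the paper observes, unlike random character swaps.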
Grammatical Error Correction: A Survey of the State of the Art
Grammatical Error Correction (GEC) is the task of automatically detecting and
correcting errors in text. The task not only includes the correction of
grammatical errors, such as missing prepositions and mismatched subject-verb
agreement, but also orthographic and semantic errors, such as misspellings and
word choice errors respectively. The field has seen significant progress in the
last decade, motivated in part by a series of five shared tasks, which drove
the development of rule-based methods, statistical classifiers, statistical
machine translation, and finally neural machine translation systems which
represent the current dominant state of the art. In this survey paper, we
condense the field into a single article and first outline some of the
linguistic challenges of the task, introduce the most popular datasets that are
available to researchers (for both English and other languages), and summarise
the various methods and techniques that have been developed with a particular
focus on artificial error generation. We next describe the many different
approaches to evaluation as well as concerns surrounding metric reliability,
especially in relation to subjective human judgements, before concluding with
an overview of recent progress and suggestions for future work and remaining
challenges. We hope that this survey will serve as a comprehensive resource for
researchers who are new to the field or who want to be kept apprised of recent
developments.
From Spelling to Grammar: A New Framework for Chinese Grammatical Error Correction
Chinese Grammatical Error Correction (CGEC) aims to generate a correct
sentence from an erroneous sequence, where different kinds of errors are mixed.
This paper divides the CGEC task into two steps, namely spelling error
correction and grammatical error correction. Specifically, we propose a novel
zero-shot approach for spelling error correction, which is simple but
effective, obtaining a high precision to avoid error accumulation of the
pipeline structure. To handle grammatical error correction, we design
part-of-speech (POS) features and semantic class features to enhance the neural
network model, and propose an auxiliary task to predict the POS sequence of the
target sentence. Our proposed framework achieves a 42.11 F0.5 score on the CGEC
dataset without using any synthetic data or data augmentation, outperforming
the previous state of the art by a wide margin of 1.30 points. Moreover, our
model produces meaningful POS representations that distinguish words of
different POS and capture reasonable POS transition rules.
Towards standardizing Korean Grammatical Error Correction: Datasets and Annotation
Research on Korean grammatical error correction (GEC) is limited compared to
other major languages such as English and Chinese. We attribute this
problematic circumstance to the lack of a carefully designed evaluation
benchmark for Korean. Thus, in this work, we first collect three datasets from
different sources (Kor-Lang8, Kor-Native, and Kor-Learner) to cover a wide
range of error types and annotate them using our newly proposed tool called
Korean Automatic Grammatical error Annotation System (KAGAS). KAGAS is a
carefully designed edit alignment and classification tool that takes the nature
of Korean into account when generating an alignment between a source sentence
and a target sentence, and identifies the error type of each aligned edit. We
also present baseline models fine-tuned on our datasets. We show that a model
trained with our datasets significantly outperforms the public statistical GEC
system (Hanspell) on a wider range of error types, demonstrating the diversity
and usefulness of the datasets.
Automatic annotation of error types for grammatical error correction
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting
grammatical errors in text. Although previous work has focused on developing systems that
target specific error types, the current state of the art uses machine translation to correct all error
types simultaneously. A significant disadvantage of this approach is that machine translation
does not produce annotated output and so error type information is lost. This means we can only
evaluate a system in terms of overall performance and cannot carry out a more detailed analysis
of different aspects of system performance.
In this thesis, I develop a system to automatically annotate parallel original and corrected
sentence pairs with explicit edits and error types. In particular, I first
extend the Damerau-Levenshtein alignment algorithm to make use of linguistic
information when aligning parallel
sentences, and supplement this alignment with a set of merging rules to handle multi-token
edits. The output from this algorithm surpasses other edit extraction approaches in terms of
approximating human edit annotations and is the current state of the art. Having extracted the
edits, I next classify them according to a new rule-based error type framework that depends only
on automatically obtained linguistic properties of the data, such as part-of-speech tags. This
framework was inspired by existing frameworks, and human judges rated the appropriateness
of the predicted error types as ‘Good’ (85%) or ‘Acceptable’ (10%) in a random sample of 200
edits. The whole system is called the ERRor ANnotation Toolkit (ERRANT) and is the first
toolkit capable of automatically annotating parallel sentences with error types.
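The edit-extraction step can be sketched with a plain token aligner. The stdlib SequenceMatcher below is a stand-in for ERRANT's linguistically informed Damerau-Levenshtein alignment; the example sentences are hypothetical:

```python
# Minimal sketch of edit extraction from a parallel sentence pair.
# ERRANT uses a linguistically informed Damerau-Levenshtein alignment
# plus merging rules; here Python's SequenceMatcher is a stand-in aligner.
from difflib import SequenceMatcher

def extract_edits(source: list[str], target: list[str]):
    """Return (source_span, target_span) token edits between two sentences."""
    edits = []
    matcher = SequenceMatcher(a=source, b=target, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # keep only replace/insert/delete spans
            edits.append((source[i1:i2], target[j1:j2]))
    return edits

src = "I has a apple".split()
tgt = "I have an apple".split()
# The naive aligner merges the two edits into one span:
# extract_edits(src, tgt) == [(['has', 'a'], ['have', 'an'])]
```

Note that the naive aligner fuses two distinct edits (has→have, a→an) into a single span; separating and merging such spans sensibly is precisely what the linguistic costs and merging rules described above are for.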
I demonstrate the value of ERRANT by applying it to the system output produced by the participants of the CoNLL-2014 shared task, and carry out a detailed error type analysis of
system performance for the first time. I also develop a simple
language-model-based approach to GEC that does not require annotated training
data, and show how it can be improved using ERRANT error types.