Contextual Similarity is More Valuable than Character Similarity: Curriculum Learning for Chinese Spell Checking
The Chinese Spell Checking (CSC) task aims to detect and correct Chinese
spelling errors. In recent years, related research has focused on introducing
character similarity from a confusion set to enhance CSC models, ignoring the
context of characters, which contains richer information. To make better use of contextual
similarity, we propose a simple yet effective curriculum learning framework for
the CSC task. With the help of our model-agnostic framework, existing CSC
models can be trained from easy to difficult, much as humans learn Chinese
characters, and achieve further performance improvements. Extensive experiments
and detailed analyses on widely used SIGHAN datasets show that our method
outperforms previous state-of-the-art methods.
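The easy-to-hard ordering such a framework relies on can be sketched in a few lines. The difficulty measure below (the number of erroneous characters per sentence) is an illustrative assumption of this sketch, not necessarily the paper's actual criterion:

```python
# Hypothetical curriculum ordering for CSC training pairs (source, target).
# Difficulty is approximated by the number of character positions where the
# noisy source differs from the correct target.

def difficulty(src: str, tgt: str) -> int:
    """Count positions where the source character differs from the target."""
    return sum(s != t for s, t in zip(src, tgt))

def curriculum_order(pairs):
    """Sort (source, target) pairs from easy (few errors) to hard (many)."""
    return sorted(pairs, key=lambda p: difficulty(*p))
```

A trainer would then feed batches in this order (or anneal the sampling distribution from easy to hard) rather than shuffling uniformly.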
A Frustratingly Easy Plug-and-Play Detection-and-Reasoning Module for Chinese Spelling Check
In recent years, Chinese Spelling Check (CSC) has been greatly improved by
designing task-specific pre-training methods or introducing auxiliary tasks,
most of which solve the task in an end-to-end fashion. In this paper, we
propose to decompose the CSC workflow into detection, reasoning, and searching
subtasks so that the rich external knowledge about the Chinese language can be
leveraged more directly and efficiently. Specifically, we design a
plug-and-play detection-and-reasoning module that is compatible with existing
SOTA non-autoregressive CSC models to further boost their performance. We find
that the detection-and-reasoning module trained for one model can also benefit
other models. We also study the primary interpretability provided by the task
decomposition. Extensive experiments and detailed analyses demonstrate the
effectiveness and competitiveness of the proposed module.
Comment: Accepted for publication in Findings of EMNLP 2023
Disentangled Phonetic Representation for Chinese Spelling Correction
Chinese Spelling Correction (CSC) aims to detect and correct erroneous
characters in Chinese texts. Although efforts have been made to introduce
phonetic information (Hanyu Pinyin) into this task, they typically merge phonetic
representations with character representations, which tends to weaken the
representation effect of normal texts. In this work, we propose to disentangle
the two types of features to allow for direct interaction between textual and
phonetic information. To learn useful phonetic representations, we introduce a
pinyin-to-character objective to ask the model to predict the correct
characters based solely on phonetic information, where a separation mask is
imposed to disable attention from phonetic input to text. To avoid overfitting
the phonetics, we further design a self-distillation module to ensure that
semantic information plays a major role in the prediction. Extensive
experiments on three CSC benchmarks demonstrate the superiority of our method
in using phonetic information.
Comment: Accepted to ACL 2023 Main Conference
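The separation mask described above can be illustrated with a toy attention mask over a concatenated [text; pinyin] sequence. The block layout below is an assumption for illustration; the model's actual masking details may differ:

```python
def separation_mask(n_text: int, n_pinyin: int):
    """Attention mask over a [text; pinyin] token sequence.
    mask[i][j] == 1 means query token i may attend to key token j.
    Attention from pinyin queries to text keys is disabled, so the
    pinyin-to-character objective must predict from phonetics alone."""
    n = n_text + n_pinyin
    mask = [[1] * n for _ in range(n)]
    for i in range(n_text, n):      # pinyin query positions
        for j in range(n_text):     # text key positions
            mask[i][j] = 0          # block phonetic-to-text attention
    return mask
```

In a real Transformer this mask would be added (as large negative values at the zeros) to the attention logits before the softmax.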
Rethinking Masked Language Modeling for Chinese Spelling Correction
In this paper, we study Chinese Spelling Correction (CSC) as a joint decision
made by two separate models: a language model and an error model. Through
empirical analysis, we find that fine-tuning BERT tends to overfit the error
model while underfitting the language model, resulting in poor generalization to
out-of-distribution error patterns. Given that BERT is the backbone of most CSC
models, this phenomenon has a significant negative impact. To address this
issue, we release LEMON, a multi-domain benchmark with higher quality and
diversity than existing benchmarks, to allow a comprehensive assessment of the
open-domain generalization of CSC models. Then, we demonstrate that a very
simple strategy, randomly masking 20% of the non-error tokens from the input
sequence during fine-tuning, is sufficient for learning a much better language model
without sacrificing the error model. This technique can be applied to any model
architecture and achieves new state-of-the-art results on SIGHAN, ECSpell, and
LEMON.
Comment: Accepted by ACL'2023
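The masking strategy is concrete enough to sketch directly. A minimal version, assuming token-level alignment between source and target and a seeded RNG for reproducibility:

```python
import random

MASK = "[MASK]"

def mask_non_errors(src, tgt, rate=0.2, seed=0):
    """Replace roughly `rate` of the NON-error source tokens with [MASK].
    Error positions (where src != tgt) are left untouched, so fine-tuning
    must also learn to reconstruct masked context like a language model,
    instead of only memorizing error-to-correction mappings."""
    rng = random.Random(seed)
    out = list(src)
    for i, (s, t) in enumerate(zip(src, tgt)):
        if s == t and rng.random() < rate:
            out[i] = MASK
    return out
```

The masked sequence is used as model input while the loss is still computed against the clean target, as in standard masked language modeling.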
BSpell: A CNN-Blended BERT Based Bangla Spell Checker
Bangla typing is mostly performed using an English keyboard and can be highly
erroneous due to the presence of compound and similarly pronounced letters.
Correcting a misspelled word requires an understanding of the word's typing
pattern as well as the context in which it is used. This paper proposes a
specialized BERT model named BSpell, targeted at word-for-word correction at
the sentence level. BSpell contains an end-to-end trainable CNN sub-model named
SemanticNet, along with a specialized auxiliary loss. This allows
BSpell to specialize in highly inflected Bangla vocabulary in the presence of
spelling errors. Furthermore, a hybrid pretraining scheme has been proposed for
BSpell that combines word-level and character-level masking. Comparisons on two
Bangla spelling correction datasets and one Hindi dataset show the superiority of our
proposed approach. BSpell is available as a Bangla spell checking tool via
GitHub: https://github.com/Hasiburshanto/Bangla-Spell-Checke
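The hybrid word-plus-character masking can be sketched as a corruption function applied to pretraining text. The rates and the underscore placeholder below are illustrative assumptions, not the paper's actual values:

```python
import random

WORD_MASK = "[MASK]"

def hybrid_mask(words, word_rate=0.15, char_rate=0.15, seed=0):
    """Hybrid pretraining corruption: some whole words are replaced with
    [MASK]; within the remaining words, individual characters may be
    replaced with an underscore placeholder. The model is then trained to
    recover the original text, learning both word- and character-level cues."""
    rng = random.Random(seed)
    out = []
    for w in words:
        if rng.random() < word_rate:
            out.append(WORD_MASK)          # word-level masking
        else:
            out.append("".join("_" if rng.random() < char_rate else c
                               for c in w))  # character-level masking
    return out
```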
Chinese Spelling Correction as Rephrasing Language Model
This paper studies Chinese Spelling Correction (CSC), which aims to detect
and correct potential spelling errors in a given sentence. Current
state-of-the-art methods regard CSC as a sequence tagging task and fine-tune
BERT-based models on sentence pairs. However, we note a critical flaw in the
process of tagging one character to another: the correction is excessively
conditioned on the error. This is the opposite of the human mindset, where
individuals rephrase the complete sentence based on its semantics rather than
relying solely on previously memorized error patterns. Such a counter-intuitive
learning process limits the generalizability and transferability of machine
spelling correction. To address this, we propose the Rephrasing Language Model
(ReLM), where the model is trained to rephrase
the entire sentence by infilling additional slots, instead of
character-to-character tagging. This novel training paradigm achieves the new
state-of-the-art results across fine-tuned and zero-shot CSC benchmarks,
outperforming previous counterparts by a large margin. Our method also learns
transferable language representation when CSC is jointly trained with other
tasks.
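The contrast with character-to-character tagging can be seen in how a training example is built. The exact input layout sketched here (source sentence, a separator, then one blank slot per target character) is an assumption for illustration:

```python
MASK, SEP = "[MASK]", "[SEP]"

def build_rephrasing_example(src: str, tgt: str):
    """Instead of tagging each source character with its correction,
    append one mask slot per target character; the model infills the
    ENTIRE corrected sentence into the slots, conditioning on the whole
    input rather than on per-position error patterns. Loss is computed
    only on the slot positions (labels are None elsewhere)."""
    inputs = list(src) + [SEP] + [MASK] * len(tgt)
    labels = [None] * (len(src) + 1) + list(tgt)
    return inputs, labels
```

Under this setup the model must regenerate even the correct characters, which is what forces it to rely on sentence semantics rather than memorized error mappings.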
Error-Robust Retrieval for Chinese Spelling Check
Chinese Spelling Check (CSC) aims to detect and correct error tokens in
Chinese contexts, which has a wide range of applications. However, it is
confronted with the challenges of insufficient annotated data and the fact that
previous methods may not fully leverage the existing datasets. In
this paper, we introduce our plug-and-play retrieval method with error-robust
information for Chinese Spelling Check (RERIC), which can be directly applied
to existing CSC models. The datastore for retrieval is built completely based
on the training data, with elaborate designs according to the characteristics
of CSC. Specifically, we employ multimodal representations that fuse phonetic,
morphological, and contextual information in the computation of queries and keys
during retrieval to enhance robustness against potential errors. Furthermore,
in order to better judge the retrieved candidates, the n-gram surrounding the
token to be checked is regarded as the value and utilized for specific
reranking. The experimental results on the SIGHAN benchmarks demonstrate that our
proposed method achieves substantial improvements over existing work.
Comment: 11 pages, 3 figures