Self-correction has achieved impressive results in improving the style and
security of outputs generated by large language models (LLMs). However,
recent studies suggest that self-correction may be of limited use, or even
counterproductive, in reasoning tasks because LLMs struggle to identify their
own logical mistakes.
In this paper, we aim to strengthen the self-checking capabilities of LLMs by
constructing training data for checking tasks. Specifically, we apply the Chain
of Thought (CoT) methodology to self-checking, using fine-grained step-level
analyses and explanations to assess the correctness of reasoning paths. We
propose a specialized checking format called "Step CoT Check". Following this
format, we construct a checking-correction dataset that contains detailed
step-by-step analysis and checking, and we then fine-tune LLMs on this data to
enhance their error detection and correction abilities.
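To make the format concrete, the following is a minimal, hypothetical sketch (in Python) of what one "Step CoT Check" training instance might look like; the field names (question, reasoning_path, step_checks, first_error_step, correction) and the arithmetic example are illustrative assumptions, not necessarily the exact schema of the released dataset.

# Hypothetical illustration of a single step-level checking-correction instance.
example = {
    "question": "Tom has 3 boxes with 4 apples each. He eats 2 apples. "
                "How many apples remain?",
    "reasoning_path": [
        "Step 1: 3 boxes * 4 apples = 12 apples.",
        "Step 2: 12 - 2 = 9 apples remain.",
    ],
    # Each reasoning step is checked and explained individually.
    "step_checks": [
        "Step 1 check: 3 * 4 = 12, so the multiplication is correct.",
        "Step 2 check: 12 - 2 = 10, not 9, so this step is incorrect.",
    ],
    "first_error_step": 2,
    "correction": "Step 2 (revised): 12 - 2 = 10 apples remain.",
}
print(example["step_checks"][example["first_error_step"] - 1])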
Our experiments demonstrate that fine-tuning with the "Step CoT Check" format
significantly improves the self-checking and self-correction abilities of LLMs
across multiple benchmarks. This format outperforms other checking formats,
particularly in locating the position of errors, with greater benefits observed
in larger models.
For reproducibility, all datasets and code are available at
https://github.com/bammt/Learn-to-check.