Large Language Models (LLMs) significantly benefit from Chain-of-Thought
(CoT) prompting in performing various reasoning tasks. While CoT allows models
to produce more comprehensive reasoning processes, its emphasis on intermediate
reasoning steps can inadvertently introduce hallucinations and accumulated
errors, thereby limiting models' ability to solve complex reasoning tasks.
Inspired by how humans engage in careful and meticulous deductive logical
reasoning to solve tasks, we seek to enable language models to perform
explicit and rigorous deductive reasoning, and to ensure the
trustworthiness of their reasoning process through self-verification. However,
directly verifying the validity of an entire deductive reasoning process is
challenging, even with advanced models like ChatGPT. In light of this, we
propose to decompose a reasoning verification process into a series of
step-by-step subprocesses, each receiving only the context and premises it
needs. To facilitate this procedure, we propose Natural Program, a natural
language-based deductive reasoning format. Our approach enables models to
generate precise reasoning steps where subsequent steps are more rigorously
grounded on prior steps. It also empowers language models to carry out
reasoning self-verification in a step-by-step manner. By integrating this
verification process into each deductive reasoning stage, we significantly
enhance the rigor and trustworthiness of the generated reasoning steps. In
doing so, we also improve answer correctness on complex reasoning tasks.
Code will be released at https://github.com/lz1oceani/verify_cot.
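
As a rough illustration of the decomposition described above, the sketch below verifies a reasoning chain one step at a time, passing the verifier only the premises each step explicitly cites. The Step structure, the verify_chain function, the prompt wording, and the llm callable are our own placeholders for exposition; they are not the paper's actual Natural Program format or released code.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    """One deductive step: the premises it cites and the conclusion it draws."""
    premise_ids: List[int]   # indices into the shared list of premises/earlier conclusions
    conclusion: str

def verify_chain(
    premises: List[str],
    steps: List[Step],
    llm: Callable[[str], str],
) -> bool:
    """Verify each step in isolation, giving the verifier only the premises
    (and earlier conclusions) that the step explicitly cites."""
    context = list(premises)
    for step in steps:
        cited = "\n".join(f"({i}) {context[i]}" for i in step.premise_ids)
        prompt = (
            "Premises:\n" + cited +
            f"\nDoes it follow deductively that: {step.conclusion}?"
            "\nAnswer 'yes' or 'no'."
        )
        if not llm(prompt).strip().lower().startswith("yes"):
            return False              # reject the whole chain at the first invalid step
        context.append(step.conclusion)  # later steps may cite this conclusion
    return True

# Example with a trivial stand-in for an LLM verifier:
premises = ["All cats are mammals.", "Tom is a cat."]
steps = [Step(premise_ids=[0, 1], conclusion="Tom is a mammal.")]
print(verify_chain(premises, steps, llm=lambda p: "yes"))  # True

Restricting each verification call to the cited premises is the point of the decomposition: it keeps every check small and self-contained, so a single flawed step cannot hide inside an otherwise plausible-looking chain.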