Chain-of-Thought (CoT) prompting has proven effective in enhancing the
reasoning capabilities of Large Language Models (LLMs) with at least 100
billion parameters. However, it is ineffective, or even detrimental, when
applied to reasoning tasks in Smaller Language Models (SLMs) with fewer than
10 billion parameters.
parameters. To address this limitation, we introduce Dialogue-guided
Chain-of-Thought (DialCoT) which employs a dialogue format to generate
intermediate reasoning steps, guiding the model toward the final answer.
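As a minimal sketch of the idea, the snippet below decomposes a question into sub-questions and answers them turn by turn in dialogue form; the `generate` wrapper and the prompt wording are illustrative assumptions, not the paper's exact templates.

```python
# Minimal sketch of dialogue-guided reasoning. `generate` is a hypothetical
# wrapper around a chat-capable SLM; the prompts are illustrative, not the
# paper's verbatim templates.

def generate(messages: list[dict]) -> str:
    """Stub for one SLM call returning the assistant's reply as text."""
    raise NotImplementedError  # plug in a concrete model here

def dialcot_answer(question: str) -> str:
    # Turn 1: ask the model to break the question into simpler sub-questions.
    messages = [{
        "role": "user",
        "content": f"Decompose this question into simpler sub-questions:\n{question}",
    }]
    decomposition = generate(messages)
    messages.append({"role": "assistant", "content": decomposition})

    # Following turns: pose each sub-question in dialogue form, so each
    # reasoning step is a short, easy generation for the small model.
    for sub_q in (q for q in decomposition.splitlines() if q.strip()):
        messages.append({"role": "user", "content": sub_q})
        messages.append({"role": "assistant", "content": generate(messages)})

    # Final turn: ask for the answer, conditioned on the whole dialogue.
    messages.append({"role": "user", "content": "So what is the final answer?"})
    return generate(messages)
```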
Additionally, we optimize the model's reasoning path selection using the
Proximal Policy Optimization (PPO) algorithm, further enhancing its reasoning
capabilities. Our method offers several advantages compared to previous
approaches. Firstly, we decompose a complex reasoning question into a series
of simpler sub-questions, substantially reducing the task difficulty and
making it tractable for SLMs. Secondly, rather than committing to a single
fixed reasoning path, the model learns through PPO to select more effective
reasoning paths.
PPO algorithm. We conduct comprehensive experiments on four arithmetic
reasoning datasets, demonstrating that our method achieves significant
performance improvements compared to state-of-the-art competitors.