Large language models primarily rely on in-context learning to execute tasks.
We introduce EchoPrompt, a simple yet effective approach that prompts the model
to rephrase its queries before answering them. EchoPrompt is inspired by
self-questioning, a cognitive strategy humans use to vocalize queries before
providing answers, thereby reducing misconceptions. Experimental results
demonstrate that EchoPrompt leads to substantial improvements in both zero-shot
and few-shot in-context learning with standard and chain-of-thought prompting
on four families of causal language models. These improvements are observed
across various numerical reasoning (GSM8K, SVAMP, MultiArith, SingleOp),
reading comprehension (DROP, SQuAD), and logical reasoning (Shuffled Objects,
Date Understanding, Coin Flipping) tasks. On average, EchoPrompt improves the
Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13%
in reading comprehension tasks. Ablation studies reveal that both the original
and the rephrased query contribute to EchoPrompt's effectiveness. Our empirical
results show that
EchoPrompt is an effective technique that can easily augment in-context
learning to improve performance.
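For concreteness, the following is a minimal sketch of how an EchoPrompt-style zero-shot prompt might be assembled; the exact instruction wording and the `build_echoprompt` helper are illustrative assumptions rather than the paper's verbatim prompts.

```python
# Minimal sketch of an EchoPrompt-style zero-shot prompt.
# The instruction wording below is an assumed example, not necessarily
# the exact prompt used in the paper.

def build_echoprompt(question: str) -> str:
    # Ask the model to first restate (echo) the query, then reason
    # step by step before answering, combining query rephrasing with
    # Zero-shot-CoT-style prompting.
    return (
        f"Q: {question}\n"
        "A: Let's repeat the question and also think step by step."
    )

if __name__ == "__main__":
    prompt = build_echoprompt(
        "A baker made 24 muffins and sold 9. How many muffins remain?"
    )
    print(prompt)  # this string would be sent to a causal language model
```

The key design point is that the instruction elicits both an echo of the original query and a chain-of-thought continuation, so the model conditions its reasoning on its own restatement of the question.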