Large language models (LLMs) with in-context learning have demonstrated
remarkable capability in the text-to-SQL task. Previous research has prompted
LLMs with various demonstration-retrieval strategies and intermediate reasoning
steps to enhance their performance. However, those works often employ varied
strategies when constructing the prompt text for text-to-SQL inputs, such as
databases and demonstration examples. This leads to a lack of comparability in
both the prompt constructions and the primary contributions of those works.
Furthermore, selecting an effective prompt construction has emerged as a
persistent problem for future research. To address these limitations, we
comprehensively investigate the impact of prompt constructions across various
settings and provide insights for future work.
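To make the notion of "prompt construction" concrete, the sketch below shows one common way a text-to-SQL prompt might be assembled from a serialized database and a few demonstration examples. This is an illustrative assumption, not the specific construction studied in the paper; the function names (serialize_schema, build_prompt) and the CREATE TABLE serialization style are hypothetical choices among the varied strategies the abstract describes.

```python
# Illustrative sketch (not the paper's exact method): constructing a
# text-to-SQL prompt from a database and demonstration examples.
# All identifiers here (serialize_schema, build_prompt) are hypothetical.

def serialize_schema(tables: dict) -> str:
    """Serialize a database as CREATE TABLE statements, one common
    choice among the varied serialization strategies."""
    return "\n".join(
        f"CREATE TABLE {name} ({', '.join(cols)});"
        for name, cols in tables.items()
    )

def build_prompt(tables: dict, demonstrations: list, question: str) -> str:
    """Concatenate the schema, (question, SQL) demonstrations, and the
    test question into a single prompt string."""
    parts = [serialize_schema(tables)]
    for demo_question, demo_sql in demonstrations:
        parts.append(f"-- Question: {demo_question}\n{demo_sql}")
    # Prime the model to continue with a SQL query.
    parts.append(f"-- Question: {question}\nSELECT")
    return "\n\n".join(parts)

prompt = build_prompt(
    tables={"singer": ["singer_id", "name", "country"]},
    demonstrations=[("How many singers are there?",
                     "SELECT COUNT(*) FROM singer;")],
    question="List the names of singers from France.",
)
print(prompt)
```

Even in this toy version, several design choices vary across prior work, such as how the database is serialized, whether database content is included, and how many demonstrations are shown, which is precisely the source of incomparability the abstract highlights.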