Large Language Models (LLMs) have exhibited remarkable performance on various
Natural Language Processing (NLP) tasks. However, there is an ongoing debate
regarding their reasoning capabilities. In this paper, we examine the performance
of the GPT-3.5, GPT-4, and BARD models through a thorough technical
evaluation of different reasoning tasks across eleven distinct datasets. Our
paper provides empirical evidence of the superior performance of
GPT-4 in comparison to both GPT-3.5 and BARD in the zero-shot setting
across almost all evaluated tasks. While the superiority of GPT-4 over
GPT-3.5 might be explained by its larger size and greater NLP efficiency, this was
not evident for BARD. We also demonstrate that the three models show limited
proficiency in inductive, mathematical, and multi-hop reasoning tasks. To
bolster our findings, we present a detailed and comprehensive analysis of the
results from these three models. Furthermore, we propose a set of engineered
prompts that enhance the zero-shot performance of all three models.