The development of highly fluent large language models (LLMs) has prompted
increased interest in assessing their reasoning and problem-solving
capabilities. We investigate whether several LLMs can solve a classic type of
deductive reasoning problem from the cognitive science literature. The tested
LLMs show limited ability to solve these problems in their conventional form.
We performed follow-up experiments to investigate whether changes to the
presentation format and content improve model performance. We find
performance differences between conditions; however, none of these
manipulations improves overall performance. Moreover, we find that performance interacts with
presentation format and content in unexpected ways that differ from human
performance. Overall, our results suggest that LLMs have unique reasoning
biases that are only partially predicted from human reasoning performance.