Artificial intelligence (AI) technologies are revolutionizing broad areas of
society. Humans who use these systems are prone to expect them to behave in a
rational, perhaps even hyperrational, manner. However, in this study, we show
that some AI systems, namely large language models (LLMs), exhibit behavior
that strikingly resembles human intuition, along with the cognitive errors
that come with it. We use a state-of-the-art LLM, the latest iteration of OpenAI's
Generative Pre-trained Transformer (GPT-3.5), and probe it with the Cognitive
Reflection Test (CRT) as well as semantic illusions that were originally
designed to investigate intuitive decision-making in humans. Our results show
that GPT-3.5 systematically exhibits "machine intuition," meaning that it
produces incorrect responses that are strikingly similar to those humans give
on the CRT as well as on semantic illusions. We investigate several approaches
to test how robust GPT-3.5's tendency toward intuition-like decision-making is.
Our study demonstrates that investigating LLMs with methods from cognitive
science has the potential to reveal emergent traits and to adjust expectations
regarding their machine behavior.
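
For illustration, the following is a minimal sketch of how one might pose a classic CRT-style item to a GPT-3.5 model through the OpenAI Python client. The model name, prompt wording, and decoding settings here are assumptions for demonstration only and are not taken from the paper's protocol.

# Minimal sketch: posing a classic CRT-style item to a GPT-3.5 model.
# Assumptions: the openai v1 Python client, the "gpt-3.5-turbo" model name,
# and this prompt wording are illustrative, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRT_ITEM = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": CRT_ITEM}],
    temperature=0,  # low-variance decoding to probe the model's default answer
)

print(response.choices[0].message.content)
# The intuitive (incorrect) answer is $0.10; the reflective (correct) one is $0.05.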