Large language models have exhibited emergent abilities, demonstrating
exceptional performance across diverse tasks for which they were not explicitly
trained, including tasks that require complex reasoning. The
emergence of such abilities carries profound implications for the future
direction of research in NLP, especially as the deployment of such models
becomes more prevalent. However, one key challenge is that the evaluation of
these abilities is often confounded by competencies that arise in models
through alternative prompting techniques, such as in-context learning and
instruction following, which also emerge as the models are scaled up. In this
study, we provide the first comprehensive examination of these emergent
abilities while accounting for various potentially biasing factors that can
influence the evaluation of models. We conduct rigorous tests on 18 models,
spanning 60 million to 175 billion parameters, across a comprehensive set of
22 tasks. Through an extensive series
of over 1,000 experiments, we provide compelling evidence that emergent
abilities can primarily be ascribed to in-context learning. We find no evidence
for the emergence of reasoning abilities, providing valuable insight into the
underlying mechanisms driving the observed abilities and alleviating safety
concerns regarding their use.

Comment: Code available at https://github.com/UKPLab/on-emergence and data
available at https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/393
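To make the confound concrete, the following minimal Python sketch (hypothetical; not the paper's released evaluation code) contrasts a zero-shot prompt with a few-shot prompt for the same task instance: any accuracy gain from adding the labelled exemplars is attributable to in-context learning rather than to an emergent, task-specific ability.

# Minimal sketch (hypothetical; not the paper's evaluation harness) of the
# zero-shot vs. few-shot prompting distinction the study controls for.

def build_zero_shot_prompt(question: str) -> str:
    # The model sees only the task instance itself.
    return f"Question: {question}\nAnswer:"

def build_few_shot_prompt(exemplars: list[tuple[str, str]], question: str) -> str:
    # The model additionally sees labelled exemplars in its context window;
    # gains over the zero-shot prompt reflect in-context learning.
    demos = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in exemplars)
    return f"{demos}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    exemplars = [("2 + 2 = ?", "4"), ("7 - 3 = ?", "4")]
    print(build_zero_shot_prompt("5 + 8 = ?"))
    print("---")
    print(build_few_shot_prompt(exemplars, "5 + 8 = ?"))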