Large Language Models (LLMs) have exhibited an impressive ability to perform
in-context learning (ICL) from only a few examples, but the success of ICL
varies widely from task to task. Thus, it is important to quickly determine
whether ICL is applicable to a new task, but directly evaluating ICL accuracy
is costly when test data is expensive to annotate -- precisely the
situations where ICL is most appealing. In this paper, we propose the
task of ICL accuracy estimation, in which we predict the accuracy of an LLM
when doing in-context learning on a new task given only unlabeled data for that
task. To perform ICL accuracy estimation, we propose a method that trains a
meta-model using LLM confidence scores as features. We compare our method to
several strong accuracy estimation baselines on a new benchmark that covers 4
LLMs and 3 task collections. Averaged across all 12 settings, the meta-model
improves over every baseline and matches the estimation performance of
directly evaluating on 40 labeled test examples per task. We
encourage future work to improve on our methods and evaluate on our ICL
accuracy estimation benchmark to deepen our understanding of when ICL works.
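
As a concrete illustration of the meta-model idea, the minimal Python sketch
below trains a regressor on features derived from per-example confidence
scores and uses it to estimate accuracy on a new task from unlabeled data
alone. The particular features (mean/std of max label probability and
entropy), the GradientBoostingRegressor, and all data are illustrative
assumptions, not the paper's exact recipe.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def confidence_features(label_probs):
        # label_probs: list of 1-D arrays, each a probability distribution
        # over candidate labels for one unlabeled example (hypothetical
        # stand-in for LLM confidence scores).
        max_p = np.array([p.max() for p in label_probs])
        entropy = np.array([-(p * np.log(p + 1e-12)).sum() for p in label_probs])
        return np.array([max_p.mean(), max_p.std(),
                         entropy.mean(), entropy.std()])

    rng = np.random.default_rng(0)

    # Toy meta-training data: one feature vector and one measured ICL
    # accuracy per previously seen task (random placeholders here).
    X_train = np.stack([
        confidence_features([rng.dirichlet(np.ones(4)) for _ in range(50)])
        for _ in range(20)
    ])
    y_train = rng.uniform(0.3, 0.9, size=20)  # placeholder task accuracies

    meta_model = GradientBoostingRegressor().fit(X_train, y_train)

    # Estimate accuracy on a new task from its unlabeled examples alone.
    x_new = confidence_features([rng.dirichlet(np.ones(4)) for _ in range(50)])
    print("Estimated ICL accuracy:", meta_model.predict(x_new[None])[0])

The key design point this sketch captures is that the meta-model never sees
labels for the new task: it maps aggregate confidence statistics, computable
from unlabeled data, to a predicted accuracy.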