One way Large Language Models (LLMs) are used to perform machine-learning
tasks is to provide them with a few examples before asking them to produce a
prediction; this is a meta-learning process known as few-shot learning. In
this paper, we use existing Search-Based Software Engineering (SBSE) methods
to optimise the number and combination of examples so as to improve an LLM's
performance when it is used to estimate story points for new agile tasks. Our
preliminary results show that our SBSE technique improves the LLM's
estimation performance by 59.34% on average (in terms of the mean absolute
error of the estimates) across three datasets, compared with a zero-shot
setting.
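The abstract does not include an implementation, so the following Python code is only a minimal, hypothetical sketch of the idea: search over the number and combination of historical examples to minimise the mean absolute error (MAE) of few-shot estimates on a validation set. The HISTORY and VALIDATION data, the llm_estimate stub (a stand-in for a real LLM call), and the plain random search are all illustrative assumptions, not the authors' technique.

    import random

    # Hypothetical historical data: (task description, story points).
    HISTORY = [
        ("Add login page", 3), ("Fix payment bug", 2), ("Migrate database", 8),
        ("Update API docs", 1), ("Refactor auth module", 5),
        ("Add search filter", 3), ("Build reporting dashboard", 8),
        ("Patch CSS layout", 1),
    ]
    # Hypothetical hold-out tasks used to score a candidate example subset.
    VALIDATION = [("Add logout button", 2), ("Redesign settings page", 5)]

    def build_prompt(shots, task):
        # Assemble a few-shot prompt: solved examples first, then the new task.
        lines = [f"Task: {t}\nStory points: {p}" for t, p in shots]
        lines.append(f"Task: {task}\nStory points:")
        return "\n\n".join(lines)

    def llm_estimate(prompt):
        # Stand-in for a real LLM call; returns the median of the shot labels
        # purely so the sketch runs end to end.
        points = sorted(int(line.rsplit(": ", 1)[1])
                        for line in prompt.splitlines()
                        if line.startswith("Story points: "))
        return points[len(points) // 2] if points else 3

    def mae(shots):
        # Mean absolute error of the estimates over the validation tasks.
        errors = [abs(llm_estimate(build_prompt(shots, task)) - actual)
                  for task, actual in VALIDATION]
        return sum(errors) / len(errors)

    def search(iterations=200, seed=0):
        # Plain random search over the number and combination of examples;
        # any SBSE technique (e.g. a genetic algorithm) could replace this loop.
        rng = random.Random(seed)
        best = rng.sample(HISTORY, rng.randint(1, len(HISTORY)))
        best_err = mae(best)
        for _ in range(iterations):
            candidate = rng.sample(HISTORY, rng.randint(1, len(HISTORY)))
            err = mae(candidate)
            if err < best_err:
                best, best_err = candidate, err
        return best, best_err

    if __name__ == "__main__":
        shots, err = search()
        print(f"Best subset uses {len(shots)} shots, validation MAE = {err:.2f}")

The search loop is deliberately the simplest possible baseline: it treats the shot subset as the candidate solution and the validation MAE as the fitness function, which is the general shape a more sophisticated search-based optimiser would also take.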