The exceptional performance of pre-trained large language models has
revolutionised various applications, but their adoption in production
environments is hindered by prohibitive costs and inefficiencies, particularly
when utilising long prompts. This paper proposes OverPrompt, an in-context
learning method aimed at improving LLM efficiency and performance by processing
multiple inputs in parallel. Evaluated across diverse datasets, OverPrompt
improves task efficiency while drawing on a broader range of in-context
examples. In particular, it boosts performance on fact-checking and sentiment
analysis tasks when supplemented with contextual information. Grouping
synthetic data further enhances performance, suggesting a viable approach to
data augmentation.
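The core idea of batching several task inputs into a single prompt can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt template and the `build_batched_prompt` helper are assumptions introduced here for clarity.

```python
# Illustrative sketch (assumed helper, not the paper's code): group several
# task inputs under one instruction so a single LLM call answers them all,
# amortising the prompt overhead across inputs.

def build_batched_prompt(instruction: str, inputs: list[str]) -> str:
    """Combine one task instruction with multiple numbered inputs."""
    numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(inputs))
    return (
        f"{instruction}\n"
        f"Answer for each numbered input on its own line.\n\n"
        f"{numbered}"
    )

reviews = [
    "The film was a delight from start to finish.",
    "Two hours I will never get back.",
    "Competent but forgettable.",
]
prompt = build_batched_prompt(
    "Classify the sentiment of each review as positive or negative.",
    reviews,
)
print(prompt)
```

Sending this single prompt replaces three separate calls, each of which would repeat the instruction and any shared context.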