Integrating External Tools with Large Language Models (LLMs) to Improve Accuracy

Abstract

This paper addresses improving the querying of large language models (LLMs). It is well known that without relevant contextual information, LLMs can produce poor-quality responses or hallucinate. Several initiatives have proposed integrating LLMs with external tools that supply up-to-date data to improve accuracy. In this paper, we propose a framework for integrating external tools to enhance the capabilities of LLMs when answering queries in educational settings. Specifically, the framework allows an LLM to access external APIs to request additional relevant information; integrated tools can also provide computational capabilities such as calculators or calendars. The proposed framework has been evaluated on datasets from the Massive Multitask Language Understanding (MMLU) collection, consisting of questions on mathematical and scientific reasoning. Compared with a baseline OpenAI model, the proposed approach significantly improves performance: on mathematical questions, our framework scores 83%, whereas the baseline OpenAI model scores 36%. In scientific reasoning, the difference is even larger, with 88% for the proposed method compared to 56% for the baseline OpenAI model. These promising results open the way to building richer computing ecosystems around LLMs, making their use more natural in support of a variety of tasks and activities.
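The abstract does not describe the framework's implementation, but the tool-integration idea can be illustrated with a minimal sketch. Everything in the code below (the query_llm stub, the TOOLS registry, the answer dispatcher, and the "TOOL:<name>:<input>" reply convention) is an assumption introduced for illustration, not the authors' actual framework or a real OpenAI API call; it only shows how an LLM reply that requests a calculator tool could be intercepted, executed, and fed back to the model as extra context.

```python
import ast
import operator

def calculator(expression: str) -> str:
    """Safely evaluate a basic arithmetic expression (no eval())."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv,
           ast.Pow: operator.pow, ast.USub: operator.neg}

    def _eval(node):
        if isinstance(node, ast.Constant):   # numeric literal
            return node.value
        if isinstance(node, ast.BinOp):      # e.g. 12 * (3 + 4)
            return ops[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp):    # e.g. -5
            return ops[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")

    return str(_eval(ast.parse(expression, mode="eval").body))

# Registry of external tools the model may request; a real deployment would
# add further entries (calendar, retrieval API, ...) in the same way.
TOOLS = {"calculator": calculator}

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM (hypothetical; not the OpenAI API).
    This stub requests the calculator once, then echoes the tool's result."""
    if "returned:" in prompt:
        return "The answer is " + prompt.rsplit("returned:", 1)[1].strip()
    return "TOOL:calculator:12*(3+4)"

def answer(question: str) -> str:
    """Query the LLM; if it requests a tool, run it and feed the output back."""
    reply = query_llm(question)
    if reply.startswith("TOOL:"):
        _, tool_name, tool_input = reply.split(":", 2)
        tool_output = TOOLS[tool_name](tool_input)
        # Second pass: the tool's output becomes additional context.
        reply = query_llm(f"{question}\nTool {tool_name} returned: {tool_output}")
    return reply

print(answer("What is 12*(3+4)?"))   # -> "The answer is 84"
```

Parsing the expression with Python's ast module keeps the illustrative calculator safe to run; in a real system the query_llm stub would be replaced by calls to the chosen LLM API, and additional tools would be registered alongside the calculator.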
