AGIBench: A Multi-granularity, Multimodal, Human-referenced, Auto-scoring Benchmark for Large Language Models
Large language models (LLMs) such as ChatGPT have demonstrated remarkable
intelligence. How to evaluate the question-solving abilities of LLMs and their
degrees of intelligence is a prominent yet challenging issue. First,
question-solving ability is interwoven with different ability branches, such as
understanding, and numerous knowledge categories, such as mathematics. Second,
question inputs are multimodal and may involve both text and images. Third, the
response format of
LLMs is diverse and thus poses great challenges for result extraction and
evaluation. In this paper, we propose AGIBench -- a multi-granularity,
multimodal, human-referenced, and auto-scoring benchmarking methodology for
LLMs. Instead of a collection of blended questions, AGIBench focuses on three
typical ability branches and adopts a four-tuple <ability branch, knowledge,
difficulty, modal> to label the attributes of each question. First, it supports
multi-granularity benchmarking, e.g., per-question, per-ability branch,
per-knowledge, per-modal, per-dataset, and per-difficulty level granularities.
Second, it contains multimodal input, including text and images. Third, it
classifies all questions into five difficulty levels according to the average
accuracy of a large pool of educated humans (human-referenced). Fourth, it
adopts zero-shot learning to avoid introducing additional unpredictability and
provides an auto-scoring method to extract and judge the results. Finally,
it defines multi-dimensional metrics, including accuracy under the average,
worst, best, and majority voting cases, and repeatability. AGIBench is
publicly available at \url{https://www.benchcouncil.org/agibench}.
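
Comment: 14 pages

To make the abstract's benchmarking scheme concrete, the following is a minimal sketch (not the official AGIBench code) of how questions labeled with the <ability branch, knowledge, difficulty, modal> four-tuple could be aggregated into per-granularity accuracy, and how majority-voting accuracy and repeatability could be computed over repeated zero-shot runs. All class and function names here are illustrative assumptions.

```python
# Illustrative sketch of AGIBench-style scoring; names and structures are assumptions.
from collections import Counter, defaultdict
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    qid: str
    ability_branch: str   # e.g. "understanding"
    knowledge: str        # e.g. "mathematics"
    difficulty: int       # 1..5, human-referenced difficulty level
    modal: str            # "text" or "text+image"
    answer: str           # gold answer label, e.g. "B"
    responses: List[str] = field(default_factory=list)  # extracted answers from repeated runs

def accuracy_by(questions: List[Question], attr: str) -> dict:
    """Average-case accuracy grouped by one granularity (ability branch, knowledge, ...)."""
    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        key = getattr(q, attr)
        for r in q.responses:
            total[key] += 1
            correct[key] += int(r == q.answer)
    return {k: correct[k] / total[k] for k in total}

def majority_vote_accuracy(questions: List[Question]) -> float:
    """Accuracy when each question's final answer is its most frequent response."""
    hits = sum(Counter(q.responses).most_common(1)[0][0] == q.answer for q in questions)
    return hits / len(questions)

def repeatability(questions: List[Question]) -> float:
    """Fraction of questions whose repeated runs all yield the same extracted answer."""
    return sum(len(set(q.responses)) == 1 for q in questions) / len(questions)

# Example: two labeled questions, three zero-shot runs each.
qs = [
    Question("q1", "understanding", "mathematics", 3, "text", "B", ["B", "B", "C"]),
    Question("q2", "reasoning", "physics", 5, "text+image", "A", ["A", "A", "A"]),
]
print(accuracy_by(qs, "difficulty"))   # per-difficulty-level accuracy
print(majority_vote_accuracy(qs))      # 1.0 (both majority answers are correct)
print(repeatability(qs))               # 0.5 (only q2 is fully consistent)
```

The same `accuracy_by` helper can be pointed at any attribute of the four-tuple (or at a dataset identifier) to obtain the per-question, per-ability-branch, per-knowledge, per-modal, per-dataset, and per-difficulty-level views that the abstract describes.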