HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models
Large Language Models (LLMs) trained on massive corpora demonstrate
impressive capabilities in a wide range of tasks. While there are ongoing
efforts to adapt these models to languages beyond English, the attention given
to their evaluation methodologies remains limited. Current multilingual
benchmarks often rely on back translations or re-implementations of English
tests, limiting their capacity to capture unique cultural and linguistic
nuances. To bridge this gap for the Korean language, we introduce HAE-RAE
Bench, a dataset curated to challenge models lacking Korean cultural and
contextual depth. The dataset encompasses six downstream tasks across four
domains: vocabulary, history, general knowledge, and reading comprehension.
Contrary to traditional evaluation suites focused on token or sequence
classification and specific mathematical or logical reasoning, HAE-RAE Bench
emphasizes a model's aptitude for recalling Korean-specific knowledge and
cultural contexts. Comparative analysis with prior Korean benchmarks indicates
that HAE-RAE Bench poses a greater challenge to non-native models, as it
disrupts the transfer of abilities and knowledge learned from English.
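As a concrete illustration of the evaluation setting the benchmark targets, namely multiple-choice recall of Korean-specific knowledge, the sketch below scores each answer choice by its log-likelihood under a causal language model and picks the highest-scoring option. The model name, data fields, and example item are illustrative assumptions, not the authors' released format or evaluation code.

```python
# Minimal sketch of multiple-choice evaluation by log-likelihood scoring.
# The model, field names, and example item below are assumptions for
# illustration; substitute the actual HAE-RAE Bench data when reproducing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "skt/kogpt2-base-v2"  # assumed example of a Korean causal LM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def choice_logprob(question: str, choice: str) -> float:
    """Sum of token log-probabilities of `choice` conditioned on `question`."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Approximate the start of the continuation by the prompt's token length.
    start = prompt_ids.shape[1]
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    target_ids = full_ids[0, 1:]
    idx = torch.arange(start - 1, full_ids.shape[1] - 1)
    # Score only the tokens belonging to the answer choice.
    return log_probs[idx, target_ids[start - 1:]].sum().item()


def predict(question: str, choices: list[str]) -> int:
    """Return the index of the highest-scoring answer choice."""
    return max(range(len(choices)),
               key=lambda i: choice_logprob(question, choices[i]))


# Hypothetical item in the spirit of the benchmark's general-knowledge domain.
example = {
    "question": "한글을 창제한 조선의 왕은 누구인가?",  # "Which Joseon king created Hangul?"
    "choices": ["세종대왕", "태조", "영조", "광해군"],
}
print(example["choices"][predict(example["question"], example["choices"])])
```

In practice, published leaderboards often normalize the choice score by its token length or use per-choice prompting formats; the unnormalized sum above is only the simplest variant of this scoring scheme.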