Large language models (LLMs) learn not only natural text generation abilities
but also social biases against different demographic groups from real-world
data. This poses a critical risk when deploying LLM-based applications.
Existing research and resources are not readily applicable in South Korea due
to the differences in language and culture, both of which significantly affect
the biases and targeted demographic groups. This limitation requires localized
social bias datasets to ensure the safe and effective deployment of LLMs. To
this end, we present KoSBi, a new social bias dataset of 34k pairs of
contexts and sentences in Korean covering 72 demographic groups in 15
categories. We find that through filtering-based moderation, social biases in
generated content can be reduced by 16.47%p on average for HyperCLOVA (30B and
82B), and GPT-3.