Large language models (LLMs) have been shown to perform well at a variety of
syntactic, discourse, and reasoning tasks. While LLMs are increasingly deployed
in many forms including conversational agents that interact with humans, we
lack a grounded benchmark to measure how well LLMs understand \textit{social}
language. Here, we introduce a new theory-driven benchmark, SocKET, that
contains 58 NLP tasks testing social knowledge which we group into five
categories: humor & sarcasm, offensiveness, sentiment & emotion, and
trustworthiness. In tests on the benchmark, we demonstrate that current models
attain only moderate performance but reveal significant potential for task
transfer among different types and categories of tasks, which were predicted
from theory. Through zero-shot evaluations, we show that pretrained models
already possess some innate but limited capabilities of social language
understanding and training on one category of tasks can improve zero-shot
testing on others. Our benchmark provides a systematic way to analyze model
performance on an important dimension of language and points to clear room for
improvement to build more socially-aware LLMs. The associated resources are
released at https://github.com/minjechoi/SOCKET.

Comment: 24 pages, 7 tables, 5 figures