Deployed artificial intelligence (AI) often impacts humans, and there is no
one-size-fits-all metric to evaluate these tools. Human-centered evaluation of AI-based systems combines quantitative and qualitative analysis with human input. It has been explored in some depth in the explainable AI (XAI) and
human-computer interaction (HCI) communities. Gaps remain, but these communities broadly accept the basic understanding that humans interact with AI and its accompanying explanations, and that humans' needs -- complete with their cognitive biases and quirks -- should be held front and center. In this paper, we draw
parallels between the relatively mature field of XAI and the rapidly evolving
research boom around large language models (LLMs). The evaluation metrics currently accepted for LLMs are not human-centered. We argue that discussions of LLMs will retread many of the same paths the XAI community has trodden over the past decade.
Specifically, we argue that humans' tendencies -- again, complete with their
cognitive biases and quirks -- should rest front and center when evaluating
deployed LLMs. We outline three established focus areas of human-centered XAI evaluation -- mental models, use case utility, and cognitive engagement -- and we highlight the importance of exploring each of these concepts for LLMs.
Our goal is to jump-start human-centered LLM evaluation.

Comment: Accepted to the CHI 2023 workshop on Generative AI and HCI