Many interactive data systems combine visual representations of data with
embedded algorithmic support for automation and data exploration. To make such
systems transparent and explainable, researchers and designers need to know how
users understand them. We
discuss the evaluation of users' mental models of system logic. Mental models
are challenging to capture and analyze. While common evaluation methods aim to
approximate the user's final mental model after a period of system usage, user
understanding continuously evolves as users interact with a system over time.
In this paper, we review many common mental model measurement techniques,
discuss their tradeoffs, and recommend methods for deeper, more meaningful
evaluation of the mental models users form of interactive data analysis and
visualization systems. We present guidelines for evaluating mental models over
time, revealing how specific model updates evolve and how they may map to
particular uses of interface features and data queries. By asking users to
describe what they know and how they know it, researchers can collect
structured, time-ordered insight into a user's conceptualization process while
also helping guide users to their own discoveries.