We show that large language models (LLMs) are remarkably good at working with
interpretable models that decompose complex outcomes into univariate
graph-represented components. By adopting a hierarchical approach to reasoning,
LLMs can provide comprehensive model-level summaries without ever requiring the
entire model to fit in context. This approach enables LLMs to apply their
extensive background knowledge to automate common tasks in data science such as
detecting anomalies that contradict prior knowledge, describing potential
reasons for the anomalies, and suggesting repairs that would remove the
anomalies. We demonstrate these new capabilities through multiple examples
in healthcare, with particular emphasis on Generalized
Additive Models (GAMs). Finally, we present TalkToEBM, an open-source
LLM-GAM interface.
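The hierarchical approach above can be sketched in a few lines: each univariate shape function of the GAM is serialized to a compact text fragment, summarized independently, and only the per-graph summaries are combined into a model-level description, so the full model never has to fit in context at once. The function names and the stub summarizer below are illustrative assumptions, not the actual TalkToEBM API.

```python
def serialize_graph(feature, xs, ys):
    """Render one univariate shape function as compact (x, contribution) text."""
    pairs = ", ".join(f"({x:g}, {y:+.2f})" for x, y in zip(xs, ys))
    return f"Feature '{feature}': {pairs}"

def summarize_graph(graph_text, llm=None):
    """Summarize one component graph; `llm` is a hypothetical callable.

    Without a model, return a stub so the sketch runs standalone."""
    if llm is None:
        return f"[summary of {graph_text.split(':')[0]}]"
    return llm(f"Summarize this shape function:\n{graph_text}")

def model_level_summary(gam, llm=None):
    """Summarize each graph separately, then concatenate the short
    per-graph summaries into a model-level description."""
    parts = [summarize_graph(serialize_graph(f, xs, ys), llm)
             for f, (xs, ys) in gam.items()]
    return "\n".join(parts)

# Toy GAM: feature -> (x grid, additive contribution at each x)
gam = {"age": ([20, 50, 80], [-0.3, 0.0, 0.4]),
       "bmi": ([18, 25, 35], [-0.1, 0.0, 0.5])}
print(model_level_summary(gam))
```

In a real pipeline the stub summarizer would be replaced by an LLM call, but the structure is the point: context grows with the size of one graph plus a handful of short summaries, not with the size of the whole model.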