Large language models (LLMs) are increasingly being deployed as part of
pipelines that repeatedly process or generate data. However, a
common barrier to deployment is the frequent and often unpredictable errors
that plague LLMs. Acknowledging the inevitability of these errors, we propose
{\em data quality assertions} to identify when LLMs may be making mistakes. We
present SPADE, a method for automatically synthesizing data quality assertions
that identify bad LLM outputs. We observe that developers often
identify data quality issues during prototyping, prior to deployment, and
attempt to address them by adding instructions to the LLM prompt over time.
SPADE therefore analyzes the history of prompt versions to create
candidate assertion functions and then selects a minimal set that fulfills both
coverage and accuracy requirements. In testing across nine different real-world
LLM pipelines, SPADE efficiently reduces the number of assertions by 14\% and
decreases false failures by 21\% when compared to simpler baselines. SPADE has
been deployed as an offering within LangSmith, LangChain's LLM pipeline hub,
and has been used to generate data quality assertions for over 2000 pipelines
across a spectrum of industries.
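
To make the idea concrete, the sketch below shows the general shape of a data quality assertion: a small Python function that inspects a single LLM response and returns whether it passes. The specific checks (a refusal-phrase filter and a required JSON field) are hypothetical illustrations, not assertions generated by SPADE, and the function names are ours.

\begin{verbatim}
import json

# Hypothetical data quality assertions: each takes the pipeline inputs and the
# LLM response, and returns True if the output passes the check.

def assert_no_refusal(example: dict, response: str) -> bool:
    """Fail if the model refused instead of answering."""
    refusal_markers = ["i'm sorry", "as an ai language model", "i cannot help"]
    return not any(marker in response.lower() for marker in refusal_markers)

def assert_valid_json_with_summary(example: dict, response: str) -> bool:
    """Fail if the output is not JSON or lacks a required 'summary' key."""
    try:
        parsed = json.loads(response)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict) and "summary" in parsed

def run_assertions(example: dict, response: str, assertions) -> bool:
    """A pipeline output passes only if every selected assertion passes."""
    return all(assertion(example, response) for assertion in assertions)

if __name__ == "__main__":
    response = '{"summary": "Quarterly revenue grew 12% year over year."}'
    ok = run_assertions({}, response,
                        [assert_no_refusal, assert_valid_json_with_summary])
    print("passed" if ok else "failed")
\end{verbatim}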