Semi-structured data, such as Infobox tables, often contain temporal
information about entities, either implicitly or explicitly. Can current NLP
systems reason about such information in semi-structured tables? To tackle this
question, we introduce the task of temporal question answering on
semi-structured tables. We present a dataset, TempTabQA, which comprises 11,454
question-answer pairs extracted from 1,208 Wikipedia Infobox tables spanning
more than 90 distinct domains. Using this dataset, we evaluate several
state-of-the-art models for temporal reasoning. We observe that even the
top-performing LLMs lag behind human performance by more than 13.5 F1 points.
Given these results, our dataset has the potential to serve as a challenging
benchmark to improve the temporal reasoning capabilities of NLP models.