Large language models (LLMs) have emerged as a widely used tool for
information seeking, but their generated outputs are prone to hallucination. In
this work, our aim is to allow LLMs to generate text with citations, improving
their factual correctness and verifiability. Existing work mainly relies on
commercial search engines and human evaluation, making it challenging to
reproduce and compare different modeling approaches. We propose ALCE, the first
benchmark for Automatic LLMs' Citation Evaluation. ALCE collects a diverse set
of questions and retrieval corpora and requires building end-to-end systems to
retrieve supporting evidence and generate answers with citations. We develop
automatic metrics along three dimensions -- fluency, correctness, and citation
quality -- and demonstrate their strong correlation with human judgements. Our
experiments with state-of-the-art LLMs and novel prompting strategies show that
current systems have considerable room for improvement. For example, on the
ELI5 dataset, even the best models lack complete citation support 50% of the
time. Our analyses further highlight promising future directions, including
developing better retrievers, advancing long-context LLMs, and improving the
ability to synthesize information from multiple sources.

Accepted by EMNLP 2023. Code and data are available at https://github.com/princeton-nlp/ALCE.
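
As a rough illustration of the citation-quality dimension described above, the Python sketch below scores one generated answer for citation recall and precision. It assumes the answer has already been split into statements, each paired with the indices of the passages it cites, and that an NLI-style entailment checker is supplied as a callable. The function name citation_scores, the data layout, and the exact recall/precision definitions are illustrative assumptions for this sketch, not the benchmark's official implementation; see the repository for the actual metrics.

from typing import Callable, List, Sequence


def citation_scores(
    statements: Sequence[str],
    citations: Sequence[List[int]],       # cited passage indices per statement
    passages: Sequence[str],              # retrieved passages for this question
    entails: Callable[[str, str], bool],  # NLI check: does premise support hypothesis?
) -> dict:
    """Score one generated answer for citation recall and precision.

    Recall (here): fraction of statements whose cited passages,
    concatenated, entail the statement.
    Precision (here): fraction of individual citations that contribute
    support, i.e. the passage alone entails the statement, or removing
    it breaks the full set's entailment.
    """
    supported = 0
    cited_total = 0
    cited_relevant = 0

    for stmt, cites in zip(statements, citations):
        premise = " ".join(passages[i] for i in cites)
        full_support = bool(cites) and entails(premise, stmt)
        supported += full_support

        for i in cites:
            cited_total += 1
            alone = entails(passages[i], stmt)
            rest = " ".join(passages[j] for j in cites if j != i)
            needed = full_support and not entails(rest, stmt)
            cited_relevant += alone or needed

    recall = supported / len(statements) if statements else 0.0
    precision = cited_relevant / cited_total if cited_total else 0.0
    return {"citation_recall": recall, "citation_precision": precision}


# Toy usage with a naive "entailment" check (substring match), purely
# to show the interface; a real system would plug in an NLI model here.
demo = citation_scores(
    statements=["Paris is the capital of France."],
    citations=[[0]],
    passages=["Paris is the capital of France and its largest city."],
    entails=lambda premise, hyp: hyp.rstrip(".") in premise,
)
print(demo)  # {'citation_recall': 1.0, 'citation_precision': 1.0}

Taking the entailment checker as a parameter keeps the sketch self-contained and makes the metric's dependence on the underlying NLI model explicit, which matters because the reported correlation with human judgements hinges on that model's quality.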