Medical systematic reviews play a vital role in healthcare decision making
and policy. However, their production is time-consuming, limiting the
availability of high-quality and up-to-date evidence summaries. Recent
advancements in large language models (LLMs) offer the potential to
automatically generate literature reviews on demand, addressing this issue.
However, LLMs sometimes generate inaccurate (and potentially misleading) texts
by hallucination or omission. In healthcare, this can make LLMs unusable at
best and dangerous at worst. We conducted 16 interviews with international
systematic review experts to characterize the perceived utility and risks of
LLMs in the specific context of medical evidence reviews. Experts indicated
that LLMs can assist in the writing process by drafting summaries, generating
templates, distilling information, and cross-checking information. They also
raised concerns regarding confidently composed but inaccurate LLM outputs and
other potential downstream harms, including decreased accountability and
proliferation of low-quality reviews. Informed by this qualitative analysis, we
identify criteria for rigorous evaluation of biomedical LLMs aligned with
domain expert views.

Comment: 18 pages, 2 figures, 8 tables. Accepted as an EMNLP 2023 main paper.