Despite the tremendous progress in the estimation of generative models, the
development of tools for diagnosing their failures and assessing their
performance has advanced at a much slower pace. Recent work has
investigated metrics that quantify which parts of the true distribution are
modeled well and, conversely, which parts the model fails to capture, akin to
precision and recall in information retrieval. In this paper, we present a
general evaluation framework for generative models that measures the trade-off
between precision and recall using Rényi divergences. Our framework provides
a novel perspective on existing techniques and extends them to more general
domains. As a key advantage, this formulation encompasses both continuous and
discrete models and allows for the design of efficient algorithms that do not
have to quantize the data. We further analyze the biases of the approximations
used in practice.
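
To make the framework concrete, below is a minimal sketch of how a divergence frontier between a true distribution p and a model distribution q could be traced in the discrete case. The function names, the mixture parameterization of the intermediate distribution, and the choice of order alpha = 0.5 are illustrative assumptions for this sketch, not the paper's exact construction.

import numpy as np

def renyi_divergence(p, q, alpha, eps=1e-12):
    """Renyi divergence D_alpha(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    if np.isclose(alpha, 1.0):
        # The limit alpha -> 1 recovers the Kullback-Leibler divergence.
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

def divergence_frontier(p, q, alpha=0.5, num_points=50):
    """Trace an approximate precision-recall frontier by sweeping mixtures
    r = lam * p + (1 - lam) * q and recording (D_alpha(r || q), D_alpha(r || p)).
    The linear-mixture sweep is a simple heuristic for intermediate
    distributions, used here only for illustration.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    frontier = []
    for lam in np.linspace(0.0, 1.0, num_points):
        r = lam * p + (1.0 - lam) * q
        frontier.append((renyi_divergence(r, q, alpha),
                         renyi_divergence(r, p, alpha)))
    return frontier

One endpoint of the swept curve (r = q) has zero divergence to the model but possibly large divergence to the true distribution, and the other endpoint (r = p) has the reverse, mirroring the precision-recall trade-off the abstract describes.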