Precision-Recall Curves Using Information Divergence Frontiers
Despite the tremendous progress in the estimation of generative models, the
development of tools for diagnosing their failures and assessing their
performance has advanced at a much slower pace. Recent developments have
investigated metrics that quantify which parts of the true distribution are
modeled well and, conversely, which parts the model fails to capture, akin to
precision and recall in information retrieval. In this paper, we present a
general evaluation framework for generative models that measures the trade-off
between precision and recall using Rényi divergences. Our framework provides
a novel perspective on existing techniques and extends them to more general
domains. As a key advantage, this formulation encompasses both continuous and
discrete models and allows for the design of efficient algorithms that do not
have to quantize the data. We further analyze the biases of the approximations
used in practice.Comment: Updated to the AISTATS 2020 versio
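To make the precision-recall trade-off concrete, here is a minimal NumPy sketch of the discrete frontier that such metrics compute, and which the Rényi-divergence framework described above generalizes: for each trade-off slope lambda, precision is sum_i min(lambda * p_i, q_i) and recall is precision / lambda. The distributions p and q and the lambda grid are toy choices for illustration, not the paper's own algorithm.

```python
import numpy as np

def prd_frontier(p, q, num_lambdas=1001):
    """Discrete precision-recall frontier between a true distribution p
    and a model distribution q (1-D arrays over the same support).

    For each trade-off slope lambda:
        precision(lambda) = sum_i min(lambda * p_i, q_i)
        recall(lambda)    = precision(lambda) / lambda
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    lambdas = np.logspace(-3, 3, num_lambdas)  # sweep the trade-off slope
    precision = np.minimum(lambdas[:, None] * p, q).sum(axis=1)
    recall = precision / lambdas
    return precision, recall

# Toy example: the model q drops the third mode of p entirely.
p = np.array([0.5, 0.4, 0.1])
q = np.array([0.6, 0.4, 0.0])
prec, rec = prd_frontier(p, q)
print(prec.max(), rec.max())  # precision can reach ~1, recall is capped near 0.9
```

The asymmetry in the toy output illustrates the point of the trade-off: q places all its mass inside p's support (high achievable precision) but misses a mode carrying 0.1 of p's mass (recall bounded by 0.9).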
Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective
The top-k operator returns a sparse vector, where the non-zero values
correspond to the k largest values of the input. Unfortunately, because it is a
discontinuous function, it is difficult to incorporate into neural networks
trained end-to-end with backpropagation. Recent works have considered
differentiable relaxations, based either on regularization or perturbation
techniques. However, to date, no approach is fully differentiable and sparse.
In this paper, we propose new differentiable and sparse top-k operators. We
view the top-k operator as a linear program over the permutahedron, the convex
hull of permutations. We then introduce a p-norm regularization term to smooth
out the operator, and show that its computation can be reduced to isotonic
optimization. Our framework is significantly more general than existing ones and
allows, for example, expressing top-k operators that select values by
magnitude. On the algorithmic side, in addition to pool adjacent violators (PAV)
algorithms, we propose a new GPU/TPU-friendly Dykstra algorithm to solve
isotonic optimization problems. We successfully use our operators to prune
weights in neural networks, to fine-tune vision transformers, and as a router
in sparse mixture-of-experts models.
Comment: ICML 2023, 18 pages.
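For the quadratic special case (p = 2) of the regularization described above, the relaxed top-k reduces to a Euclidean projection of x / eps onto the permutahedron, which after sorting becomes an isotonic regression solvable by PAV. Below is a minimal NumPy sketch of that forward pass under these assumptions; the names soft_topk_mask and _pav_nonincreasing and the strength parameter eps are illustrative choices, and the paper's actual operators additionally cover general p-norms, magnitude selection, differentiation through the PAV solution, and the GPU/TPU-friendly Dykstra solver.

```python
import numpy as np

def _pav_nonincreasing(u):
    """Pool Adjacent Violators: argmin ||v - u||^2 s.t. v is non-increasing."""
    vals, wts = [], []
    for ui in u:
        vals.append(float(ui))
        wts.append(1)
        # Merge adjacent blocks while they violate v[i] >= v[i+1].
        while len(vals) > 1 and vals[-2] < vals[-1]:
            v2, w2 = vals.pop(), wts.pop()
            v1, w1 = vals.pop(), wts.pop()
            vals.append((w1 * v1 + w2 * v2) / (w1 + w2))
            wts.append(w1 + w2)
    out = np.empty(len(u))
    pos = 0
    for v, w in zip(vals, wts):
        out[pos:pos + w] = v  # expand each pooled block to its block mean
        pos += w
    return out

def soft_topk_mask(x, k, eps=1.0):
    """Relaxed top-k mask: argmax_{y in P(w)} <x, y> - (eps/2) ||y||^2,
    where P(w) is the permutahedron of w = (1, ..., 1, 0, ..., 0) with k ones.
    This equals the Euclidean projection of x / eps onto P(w), computed by
    sorting followed by isotonic regression (PAV)."""
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    w = np.zeros(n)
    w[:k] = 1.0
    z = x / eps
    sigma = np.argsort(-z)         # indices sorting z in descending order
    s = z[sigma]
    v = _pav_nonincreasing(s - w)  # non-increasing isotonic fit
    y = np.empty(n)
    y[sigma] = s - v               # undo the sort
    return y

x = np.array([0.1, 2.0, -0.5, 1.3, 0.4])
print(soft_topk_mask(x, k=2, eps=0.5))
# -> mass ~1 on the two largest entries, ~0 elsewhere; smaller eps hardens
#    the mask toward exact top-k, larger eps smooths it out.
```

The eps parameter trades sparsity for smoothness: entries whose gap is large relative to eps receive exactly 0 or 1 (the operator stays sparse), while nearby entries share mass fractionally, which is what makes the relaxation usable with backpropagation.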