Learning from DPPs via Sampling: Beyond HKPV and symmetry
Determinantal point processes (DPPs) have become a significant tool for
recommendation systems, feature selection, and summary extraction, harnessing
the intrinsic ability of these probabilistic models to promote sample
diversity. The ability to sample from DPPs is paramount to the empirical
investigation of these models. Most exact samplers are variants of a spectral
meta-algorithm due to Hough, Krishnapur, Peres and Virág (henceforth HKPV),
which is in general time- and resource-intensive. For DPPs with symmetric
kernels, scalable HKPV samplers have been proposed that either first downsample
the ground set of items or force the kernel to be low-rank, using e.g.
Nyström-type decompositions.
In the present work, we contribute a radically different approach from HKPV.
Exploiting the fact that many statistical and learning objectives can be
effectively accomplished by sampling only certain key observables of a DPP
(so-called linear statistics), we invoke an expression for the Laplace
transform of such an observable as a single determinant, which holds in
complete generality.
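For concreteness, one standard identity of this kind on a finite ground set
{1, ..., n}, valid for any correlation kernel K defining a DPP (symmetric or
not), reads as follows; this is illustrative of the type of formula meant, not
necessarily the paper's exact statement:
\[
  \mathbb{E}\bigl[e^{-t\Lambda_f}\bigr]
    = \mathbb{E}\Bigl[\prod_{i\in X} e^{-t f(i)}\Bigr]
    = \det\bigl(I - K\,\mathrm{diag}(1 - e^{-tf})\bigr),
  \qquad \Lambda_f(X) = \sum_{i\in X} f(i),\quad t \ge 0.
\]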
Combining traditional low-rank approximation techniques with Laplace
inversion algorithms from numerical analysis, we show how to directly
approximate the distribution function of a linear statistic of a DPP.
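A minimal sketch of this pipeline in Python, under assumptions not spelled
out above: the determinant identity displayed earlier supplies the Laplace
transform, and the fixed-Talbot rule of Abate and Valkó stands in for the
unspecified Laplace inversion algorithm. The kernel K, weights f, and
evaluation point x are placeholders.

```python
import numpy as np

def laplace_transform(K, f, t):
    """phi(t) = E[exp(-t * Lambda_f)] = det(I - K diag(1 - exp(-t f)))."""
    n = K.shape[0]
    return np.linalg.det(np.eye(n) - K @ np.diag(1.0 - np.exp(-t * f)))

def cdf_linear_statistic(K, f, x, M=24):
    """Approximate P(Lambda_f <= x) by fixed-Talbot inversion of phi(s)/s,
    the Laplace transform of the distribution function. Accuracy degrades
    near the atoms of the (discrete-valued) statistic."""
    F_hat = lambda s: laplace_transform(K, f, s) / s
    r = 2.0 * M / (5.0 * x)                  # Talbot contour scale
    acc = 0.5 * F_hat(r) * np.exp(r * x)     # k = 0 term (real)
    for k in range(1, M):
        theta = k * np.pi / M
        cot = np.cos(theta) / np.sin(theta)
        s = r * theta * (cot + 1j)           # contour point s(theta_k)
        sigma = theta + (theta * cot - 1.0) * cot
        acc += (np.exp(x * s) * F_hat(s) * (1.0 + 1j * sigma)).real
    return float((r / M) * acc)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 5))
    K = B @ np.linalg.pinv(B)        # rank-5 projection kernel, a valid DPP
    f = rng.uniform(0.5, 1.5, 50)    # nonnegative weights defining Lambda_f
    print(cdf_linear_statistic(K, f, x=4.0))
```

When K admits a low-rank factorization K = BC with B of size n x m, each
determinant above reduces via det(I_n - BCD) = det(I_m - CDB) to an m x m
determinant, so every transform evaluation costs O(nm^2) rather than O(n^3);
the sketch keeps the dense form for readability.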
This distribution function can then be used in hypothesis testing or to
actually sample the linear statistic, as required. Our approach is scalable
and applies to very general DPPs, beyond traditional symmetric kernels.
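If one wishes to actually draw the statistic rather than only test against
its distribution, inverse-transform sampling against the approximated
distribution function is enough. A sketch building on the code above,
assuming nonnegative weights f so that Lambda_f lies in [0, sum_i f(i)]:

```python
def sample_linear_statistic(K, f, rng, tol=1e-4):
    """Draw one realization of Lambda_f: generate U ~ Uniform(0,1), then
    bisect for the x with cdf_linear_statistic(K, f, x) ~= U."""
    u = rng.uniform()
    lo, hi = 0.0, float(np.sum(f))   # support of Lambda_f when f >= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf_linear_statistic(K, f, mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. sample_linear_statistic(K, f, np.random.default_rng(1))
```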