
    Approximate Inference in Continuous Determinantal Point Processes

    Determinantal point processes (DPPs) are random point processes well-suited for modeling repulsion. In machine learning, the focus of DPP-based models has been on diverse subset selection from a discrete and finite base set. This discrete setting admits an efficient sampling algorithm based on the eigendecomposition of the defining kernel matrix. Recently, there has been growing interest in using DPPs defined on continuous spaces. While the discrete-DPP sampler extends formally to the continuous case, the required steps are not computationally tractable in general. In this paper, we present two efficient DPP sampling schemes that apply to a wide range of kernel functions: one based on low-rank approximations via Nyström and random Fourier feature techniques, and another based on Gibbs sampling. We demonstrate the utility of continuous DPPs in repulsive mixture modeling and in synthesizing human poses spanning activity spaces.
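
    The discrete sampler referenced above is the classical spectral algorithm of Hough et al. A minimal NumPy sketch of that baseline (an illustration, not the authors' code), assuming a symmetric positive semidefinite L-ensemble kernel matrix L:

```python
import numpy as np

def sample_dpp(L, rng=None):
    """Sample a subset from a discrete DPP with PSD L-ensemble kernel L."""
    rng = rng or np.random.default_rng()
    lam, V = np.linalg.eigh(L)
    # Phase 1: keep eigenvector n independently with prob. lambda_n / (lambda_n + 1)
    V = V[:, rng.random(lam.shape) < lam / (lam + 1.0)]
    items = []
    while V.shape[1] > 0:
        # Phase 2: pick item i with probability proportional to its squared row norm
        p = (V ** 2).sum(axis=1)
        i = int(rng.choice(len(p), p=p / p.sum()))
        items.append(i)
        # Project the spanned subspace orthogonally to e_i, then re-orthonormalize
        j = int(np.argmax(np.abs(V[i])))   # a column with a nonzero entry in row i
        V = V - np.outer(V[:, j] / V[i, j], V[i])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(items)
```

    The O(N^3) eigendecomposition is the step that becomes intractable in the continuous case; the paper's Nyström and random Fourier feature schemes replace the kernel with low-rank surrogates, which makes a spectral construction like the one above tractable again.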

    Inference for determinantal point processes without spectral knowledge

    Determinantal point processes (DPPs) are point process models that naturally encode diversity between the points of a given realization, through a positive definite kernel K. DPPs possess desirable properties, such as exact sampling or analyticity of the moments, but learning the parameters of kernel K through likelihood-based inference is not straightforward. First, the kernel that appears in the likelihood is not K, but another kernel L related to K through an often intractable spectral decomposition. This issue is typically bypassed in machine learning by directly parametrizing the kernel L, at the price of some interpretability of the model parameters. We follow this approach here. Second, the likelihood has an intractable normalizing constant, which takes the form of a large determinant in the case of a DPP over a finite set of objects, and the form of a Fredholm determinant in the case of a DPP over a continuous domain. Our main contribution is to derive bounds on the likelihood of a DPP, both for finite and continuous domains. Unlike previous work, our bounds are cheap to evaluate since they do not rely on approximating the spectrum of a large matrix or an operator. Through usual arguments, these bounds thus yield cheap variational inference and moderately expensive exact Markov chain Monte Carlo inference methods for DPPs.
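
    For a DPP over a finite ground set, the likelihood mentioned above has a closed form under the L-ensemble parametrization: P(A) = det(L_A) / det(L + I), where L_A is the submatrix of L indexed by A. The normalizing constant det(L + I) is the "large determinant" whose cost the paper's bounds are designed to avoid. A small sketch of the exact computation, for reference (illustrative, not the paper's method):

```python
import numpy as np

def dpp_log_likelihood(L, A):
    """Exact log-likelihood of subset A under a finite L-ensemble DPP:
    log P(A) = log det(L_A) - log det(L + I)."""
    A = np.asarray(A, dtype=int)
    _, logdet_A = np.linalg.slogdet(L[np.ix_(A, A)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(L.shape[0]))  # O(N^3) bottleneck
    return logdet_A - logdet_Z
```

    The log det(L + I) term costs O(N^3) on a finite set and becomes a Fredholm determinant over a continuous domain, which is what makes cheap likelihood bounds attractive for variational and MCMC inference.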

    Learning, Large Scale Inference, and Temporal Modeling of Determinantal Point Processes

    Determinantal Point Processes (DPPs) are random point processes well-suited for modelling repulsion. In discrete settings, DPPs are a natural model for subset selection problems where diversity is desired. For example, they can be used to select relevant but diverse sets of text or image search results. Among many remarkable properties, they offer tractable algorithms for exact inference, including computing marginals, computing certain conditional probabilities, and sampling. In this thesis, we provide four main contributions that enable DPPs to be used in more general settings. First, we develop algorithms to sample from approximate discrete DPPs in settings where we need to select a diverse subset from a large number of items. Second, we extend this idea to continuous spaces, developing approximate algorithms to sample from continuous DPPs and yielding a method to select point configurations that tend to be over-dispersed. Our third contribution is a set of robust algorithms to learn the parameters of DPP kernels, previously thought to be a difficult, open problem. Finally, we develop a temporal extension for discrete DPPs, where we model sequences of subsets that are not only marginally diverse but also diverse across time.
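
    The thesis's specific large-scale samplers are not reproduced here, but a standard building block for this kind of scaling is the dual representation of a low-rank kernel: if L = B^T B with B of size D x N and D << N, Sylvester's determinant identity gives det(L + I_N) = det(B B^T + I_D), so the normalizer and related spectral quantities cost O(N D^2) instead of O(N^3). A hedged sketch (the feature matrix B is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 10_000, 50                   # many items, low-rank features
B = rng.standard_normal((D, N))     # implicitly L = B.T @ B, never formed

C = B @ B.T                         # D x D dual kernel, O(N D^2) to build
_, logdet_dual = np.linalg.slogdet(C + np.eye(D))
# Sylvester's identity: log det(L + I_N) == logdet_dual, with no N x N matrix
```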

    Bayesian Inference for Latent Biologic Structure with Determinantal Point Processes (DPP)

    We discuss the use of the determinantal point process (DPP) as a prior for latent structure in biomedical applications, where inference often centers on the interpretation of latent features as biologically or clinically meaningful structure. Typical examples include mixture models, when the terms of the mixture are meant to represent clinically meaningful subpopulations (of patients, genes, etc.). Another class of examples is feature allocation models. We propose the DPP as a repulsive prior on latent mixture components in the first example, and as a prior on feature-specific parameters in the second. We argue that the DPP is in general an attractive prior model for latent structure when a biologically relevant interpretation of such structure is desired. We illustrate the advantages of the DPP prior in three case studies, including inference in mixture models for magnetic resonance images (MRI) and for protein expression, and a feature allocation model for gene expression using data from The Cancer Genome Atlas. An important part of our argument is the availability of efficient and straightforward posterior simulation methods. We implement a variation of reversible jump Markov chain Monte Carlo simulation for inference under the DPP prior, using a density with respect to the unit rate Poisson process.
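
    As one concrete reading of the repulsive-mixture construction, the (unnormalized) DPP prior density on a set of component locations is a determinant of a kernel Gram matrix, which shrinks toward zero as two components approach each other. A minimal sketch assuming a Gaussian kernel (the kernel choice and lengthscale are illustrative, not the paper's specification):

```python
import numpy as np
from scipy.spatial.distance import cdist

def log_repulsive_prior(mu, lengthscale=1.0):
    """Unnormalized log DPP-style prior on component locations mu (k x d):
    log det of a Gaussian-kernel Gram matrix. Nearly coincident locations
    make the matrix near-singular, penalizing non-repulsive configurations."""
    G = np.exp(-cdist(mu, mu, "sqeuclidean") / (2.0 * lengthscale ** 2))
    sign, logdet = np.linalg.slogdet(G)
    return logdet if sign > 0 else -np.inf
```

    In a Metropolis-Hastings or reversible jump move over component locations, this term simply contributes log_repulsive_prior(mu_new) - log_repulsive_prior(mu_old) to the log acceptance ratio.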