38 research outputs found

    Hierarchical Compound Poisson Factorization

    Non-negative matrix factorization models based on a hierarchical Gamma-Poisson structure capture user and item behavior effectively in extremely sparse data sets, making them the ideal choice for collaborative filtering applications. Hierarchical Poisson factorization (HPF) in particular has proved successful for scalable recommendation systems with extreme sparsity. HPF, however, suffers from a tight coupling of sparsity model (absence of a rating) and response model (the value of the rating), which limits the expressiveness of the latter. Here, we introduce hierarchical compound Poisson factorization (HCPF) that has the favorable Gamma-Poisson structure and scalability of HPF to high-dimensional extremely sparse matrices. More importantly, HCPF decouples the sparsity model from the response model, allowing us to choose the most suitable distribution for the response. HCPF can capture binary, non-negative discrete, non-negative continuous, and zero-inflated continuous responses. We compare HCPF with HPF on nine discrete and three continuous data sets and conclude that HCPF captures the relationship between sparsity and response better than HPF. Comment: Will appear in Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 4
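The decoupling the abstract describes can be illustrated with a toy generative sketch (illustrative only, not the paper's actual HCPF model): the sparsity model is a Poisson event count, an entry is observed only when that count is non-zero, and the entry's value is a sum of i.i.d. draws from a separately chosen response distribution, here a Gamma to mimic zero-inflated continuous responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_poisson_sample(rate, response_sampler):
    """Draw one compound Poisson response: a Poisson count of events,
    each contributing an i.i.d. draw from the response distribution."""
    n = rng.poisson(rate)              # sparsity model: n == 0 means "no rating"
    if n == 0:
        return 0.0
    return response_sampler(n).sum()   # response model: chosen independently

# Zero-inflated continuous responses via Gamma-distributed contributions.
samples = np.array([
    compound_poisson_sample(0.3, lambda n: rng.gamma(2.0, 1.0, size=n))
    for _ in range(10_000)
])
frac_zero = np.mean(samples == 0.0)   # close to exp(-0.3), set by the rate alone
```

Note that the zero fraction depends only on the Poisson rate, while the magnitude of non-zero entries depends only on the Gamma parameters, which is the separation of sparsity and response the abstract emphasizes.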

    Contributions to probabilistic non-negative matrix factorization - Maximum marginal likelihood estimation and Markovian temporal models

    Non-negative matrix factorization (NMF) has become a popular dimensionality reduction technique, and has found applications in many different fields, such as audio signal processing, hyperspectral imaging, or recommender systems. In its simplest form, NMF aims at finding an approximation of a non-negative data matrix (i.e., with non-negative entries) as the product of two non-negative matrices, called the factors. One of these two matrices can be interpreted as a dictionary of characteristic patterns of the data, and the other one as activation coefficients of these patterns. This low-rank approximation is traditionally retrieved by optimizing a measure of fit between the data matrix and its approximation. As it turns out, for many choices of measures of fit, the problem can be shown to be equivalent to the joint maximum likelihood estimation of the factors under a certain statistical model describing the data. This leads us to an alternative paradigm for NMF, where the learning task revolves around probabilistic models whose observation density is parametrized by the product of non-negative factors. This general framework, coined probabilistic NMF, encompasses many well-known latent variable models of the literature, such as models for count data. In this thesis, we consider specific probabilistic NMF models in which a prior distribution is assumed on the activation coefficients, but the dictionary remains a deterministic variable. The objective is then to maximize the marginal likelihood in these semi-Bayesian NMF models, i.e., the integrated joint likelihood over the activation coefficients. This amounts to learning the dictionary only; the activation coefficients may be inferred in a second step if necessary. We proceed to study in greater depth the properties of this estimation process. In particular, two scenarios are considered. In the first one, we assume the independence of the activation coefficients sample-wise. Previous experimental work showed that dictionaries learned with this approach exhibited a tendency to automatically regularize the number of components, a favorable property which was left unexplained. In the second one, we lift this standard assumption, and consider instead Markov structures to add statistical correlation to the model, in order to better analyze temporal data.
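The basic NMF setup the abstract describes, a dictionary times activation coefficients fit to a non-negative matrix, can be sketched with the classic Lee-Seung multiplicative updates for the Frobenius measure of fit. This is a minimal illustration of plain NMF, not the thesis's marginal-likelihood estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(V, k, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimising ||V - W H||_F^2.
    W is the dictionary of patterns, H the activation coefficients;
    both stay non-negative because updates only multiply by non-negative ratios."""
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = rng.random((20, 30))          # toy non-negative data matrix
W, H = nmf(V, k=5)                # rank-5 low-rank approximation
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Swapping the squared-error objective for another measure of fit (e.g. generalised Kullback-Leibler) changes the update ratios and, as the abstract notes, corresponds to a different statistical model on the data.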

    Linear inverse problems with nonnegativity constraints through the β-divergences: sparsity of optimisers

    We pass to the continuum in optimisation problems associated to linear inverse problems y = Ax with non-negativity constraint x ≥ 0. We focus on the case where the noise model leads to maximum likelihood estimation through the so-called β-divergences, which cover several of the most common noise statistics such as Gaussian, Poisson and multiplicative Gamma. Considering x as a Radon measure over the domain on which the reconstruction is taking place, we show a general sparsity result. In the high noise regime corresponding to y ∉ {Ax | x ≥ 0}, optimisers are typically sparse in the form of sums of Dirac measures. We hence provide an explanation as to why any possible algorithm successfully solving the optimisation problem will lead to undesirably spiky-looking images when the image resolution gets finer, a phenomenon well documented in the literature. We illustrate these results with several numerical examples inspired by medical imaging.
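For reference, the β-divergence family the abstract relies on can be written in a few lines; the special cases below correspond to the Gaussian (β = 2, squared error), Poisson (β = 1, generalised Kullback-Leibler), and multiplicative Gamma (β = 0, Itakura-Saito) noise models mentioned. This is the standard textbook definition, not code from the paper.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Element-wise β-divergence d_β(x | y) for x, y > 0.
    β = 2: half the squared error (Gaussian noise)
    β = 1: generalised Kullback-Leibler (Poisson noise)
    β = 0: Itakura-Saito (multiplicative Gamma noise)"""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if beta == 1:
        return x * np.log(x / y) - x + y
    if beta == 0:
        return x / y - np.log(x / y) - 1.0
    return (x**beta + (beta - 1) * y**beta
            - beta * x * y**(beta - 1)) / (beta * (beta - 1))

# Sanity check: β = 2 recovers half the squared error, d_2(3, 1) = (3-1)^2 / 2 = 2.
```

All members of the family are non-negative and vanish only at x = y, which is what makes them usable as measures of fit in the maximum likelihood formulation.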

    Bayesian matrix factorisation for collaborative filtering

    Over the last fifteen years, recommender systems have been the subject of extensive research. The goal of these systems is to recommend to each user of a platform content that they might enjoy, notably easing navigation through very large product catalogues. So-called collaborative filtering (CF) techniques make such recommendations from users' consumption histories alone. This information is usually stored in matrices in which each coefficient corresponds to one user's feedback on one item. These feedback matrices are both very high-dimensional and extremely sparse, since each user has interacted with only a small part of the catalogue. So-called implicit feedback is the easiest to collect; it may, for example, take the form of count data recording the number of times a user has interacted with an item. Non-negative matrix factorisation (NMF) techniques approximate this feedback matrix by the product of two non-negative matrices, so that each user and each item in the system is represented by a non-negative vector encoding, respectively, preferences and attributes. This approximation, a dimensionality-reduction technique, then makes it possible to produce recommendations for the users. The objective of this thesis is to propose Bayesian NMF methods that directly model the overdispersed count data encountered in CF. To this end, we first study Poisson factorisation (PF) and present its limitations for handling raw data. 
    To overcome the problems encountered by PF, we propose two extensions: negative binomial factorisation (NBF) and discrete compound Poisson factorisation (dcPF). These two Bayesian NMF methods rely on hierarchical models that add variance. In particular, dcPF leads to an interpretation of the variables especially well suited to music recommendation. We then choose to work with quantised implicit feedback: this quantisation simplifies the form of the collected data and yields ordinal data. We therefore develop a probabilistic NMF model adapted to ordinal data and show that it can also be seen as an extension of PF applied to pre-processed data. Finally, the last work of this thesis addresses the well-known cold-start problem that affects CF methods, for which we propose a matrix co-factorisation model.
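The overdispersion motivating the thesis's extensions of Poisson factorisation can be demonstrated numerically: a plain Poisson forces variance = mean, while a Gamma-Poisson hierarchy (marginally, a negative binomial, the building block behind NBF-style models) inflates the variance. This is a generic illustration of the mixture, not the thesis's NBF or dcPF model.

```python
import numpy as np

rng = np.random.default_rng(2)

n, mean, shape = 100_000, 4.0, 0.5

# Plain Poisson: variance is locked to the mean, too rigid for raw play counts.
poisson_counts = rng.poisson(mean, size=n)

# Gamma-Poisson mixture: each observation gets its own Gamma-distributed rate,
# so the marginal counts are negative binomial with variance mean + mean^2/shape.
rates = rng.gamma(shape, mean / shape, size=n)   # E[rate] = mean
nb_counts = rng.poisson(rates)                   # overdispersed counts

dispersion_poisson = poisson_counts.var() / poisson_counts.mean()  # ≈ 1
dispersion_nb = nb_counts.var() / nb_counts.mean()                 # far above 1
```

A small Gamma shape parameter yields heavy overdispersion (here the theoretical variance is 4 + 16/0.5 = 36 against a mean of 4), which is the kind of extra flexibility the hierarchical models above provide.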

    Non-Negative Group Sparsity with Subspace Note Modelling for Polyphonic Transcription

    This work was supported by EPSRC Platform Grant EP/K009559/1, EPSRC Grant EP/L027119/1, and EPSRC Grant EP/J010375/1.