
    A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of this kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the last decade, kernel methods have played a dominant role in pairwise learning. They still achieve state-of-the-art predictive performance, but a theoretical analysis of their behavior remains underexplored in the machine learning literature. In this work we review and unify existing kernel-based algorithms that are commonly used across different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form, efficient instantiations of Kronecker kernel ridge regression. We show that independent-task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all of these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights for assessing the advantages and limitations of existing pairwise learning methods.
    Comment: arXiv admin note: text overlap with arXiv:1606.0427
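
    To make the closed-form instantiation concrete, here is a minimal NumPy sketch of Kronecker kernel ridge regression solved via the eigendecompositions of the two side kernels, so the Kronecker product is never formed explicitly. The function names, shapes, and single ridge parameter lam are illustrative assumptions, not code from the paper.

```python
import numpy as np

def kronecker_krr_fit(K_u, K_v, Y, lam=1.0):
    """Closed-form Kronecker kernel ridge regression (sketch).

    Solves (K_v kron K_u + lam*I) vec(A) = vec(Y) without forming
    the Kronecker product, using the eigendecompositions of the two
    side kernels. K_u: (n, n) kernel over one object set, K_v: (m, m)
    kernel over the other, Y: (n, m) label matrix over all pairs.
    """
    w_u, U = np.linalg.eigh(K_u)   # K_u = U diag(w_u) U^T
    w_v, V = np.linalg.eigh(K_v)   # K_v = V diag(w_v) V^T
    # Elementwise spectral filter 1 / (w_u[i] * w_v[j] + lam)
    filt = 1.0 / (np.outer(w_u, w_v) + lam)
    return U @ (filt * (U.T @ Y @ V)) @ V.T

def kronecker_krr_predict(K_u_test, K_v_test, A):
    """Predict pair labels: F = K_u_test @ A @ K_v_test^T."""
    return K_u_test @ A @ K_v_test.T
```

    Note, for intuition, that setting one side kernel to the identity makes the filter act column-wise, recovering independent-task kernel ridge regression as a special case, which is the kind of reduction the abstract describes.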

    A Topic Modeling Approach to Ranking

    We propose a topic modeling approach to the prediction of preferences in pairwise comparisons. We develop a new generative model for pairwise comparisons that accounts for multiple shared latent rankings that are prevalent in a population of users. This new model also captures inconsistent user behavior in a natural way. We show how the estimation of latent rankings in the new generative model can be formally reduced to the estimation of topics in a statistically equivalent topic modeling problem. We leverage recent advances in the topic modeling literature to develop an algorithm that can learn shared latent rankings with provable consistency as well as sample and computational complexity guarantees. We demonstrate that the new approach is empirically competitive with current state-of-the-art approaches in predicting preferences on semi-synthetic and real-world datasets.
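
    One way to picture the reduction is to encode each user's comparison history as a bag of ordered pair tokens and fit an off-the-shelf topic model, so that topics over pair outcomes play the role of shared latent rankings. The toy sketch below uses vanilla LDA from scikit-learn purely as an illustration; the paper develops its own estimator with the stated guarantees, and the data here are hypothetical.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical toy data: 3 items, two underlying preference groups.
# Each user's history is a list of ordered outcomes (winner, loser).
histories = [
    [(0, 1), (1, 2), (0, 2)] * 5,   # users preferring 0 > 1 > 2
    [(2, 1), (1, 0), (2, 0)] * 5,   # users preferring 2 > 1 > 0
] * 10
n_items = 3

# Vocabulary: one "word" per ordered item pair (winner, loser).
vocab = {(i, j): t for t, (i, j) in enumerate(
    (i, j) for i in range(n_items) for j in range(n_items) if i != j)}

# User-by-outcome count matrix, i.e., the "documents".
X = np.zeros((len(histories), len(vocab)), dtype=int)
for u, hist in enumerate(histories):
    for pair in hist:
        X[u, vocab[pair]] += 1

# Topics over pair outcomes stand in for shared latent rankings.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for k, topic in enumerate(lda.components_):
    top = sorted(vocab, key=lambda p: -topic[vocab[p]])[:3]
    print(f"ranking component {k}: most probable outcomes {top}")
```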

    Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation

    The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems. Historically, cross-domain co-clustering methods have successfully leveraged shared users and items across dense and sparse domains to improve inference quality. However, they rely on shared rating data and cannot scale to multiple sparse target domains (i.e., the one-to-many transfer setting). This, combined with the increasing adoption of neural recommender architectures, motivates us to develop scalable neural layer-transfer approaches for cross-domain learning. Our key intuition is to guide neural collaborative filtering with domain-invariant components shared across the dense and sparse domains, improving the user and item representations learned in the sparse domains. We leverage contextual invariances across domains to develop these shared modules, and demonstrate that, with user-item interaction context, we can learn-to-learn informative representation spaces even with sparse interaction data. We show the effectiveness and scalability of our approach on two public datasets and a massive transaction dataset from Visa, a global payments technology company (19% Item Recall, 3x faster vs. training separate models for each domain). Our approach is applicable to both implicit and explicit feedback settings.
    Comment: SIGIR 202
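
    A hedged PyTorch sketch of the layer-transfer idea: user and item embeddings stay domain-specific, while a context-conditioned module is trained on the dense source domain, then frozen and reused in each sparse target domain. The class and module names, the gating interaction, and the sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ContextNCF(nn.Module):
    """Sketch: domain-specific embeddings + a shared context module."""

    def __init__(self, n_users, n_items, dim, shared_context_mlp):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # domain-specific
        self.item_emb = nn.Embedding(n_items, dim)   # domain-specific
        self.context_mlp = shared_context_mlp        # shared, invariant
        self.out = nn.Linear(dim, 1)

    def forward(self, users, items, context):
        u, v = self.user_emb(users), self.item_emb(items)
        # Interaction context gates the user-item interaction,
        # acting as the domain-invariant signal.
        gate = self.context_mlp(context)
        return self.out(u * v * gate).squeeze(-1)

dim, ctx_dim = 32, 8
shared = nn.Sequential(nn.Linear(ctx_dim, dim), nn.ReLU())

dense = ContextNCF(1000, 500, dim, shared)
# ... train `dense` on the dense source domain, then freeze the
# shared module and transfer it to each sparse target domain:
for p in shared.parameters():
    p.requires_grad = False
sparse = ContextNCF(200, 80, dim, shared)
```

    Freezing only the shared module is what makes the scheme one-to-many: each sparse domain trains just its own embeddings and output head against the reused context layers.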

    TrustDL: Use of trust-based dictionary learning to facilitate recommendation in social networks

    Collaborative filtering (CF) is a widely applied method for recommendation tasks across a broad range of domains and applications. Dictionary learning (DL) models, which are highly important in CF-based recommender systems (RSs), typically operate on rating matrices. However, these methods alone do not resolve the cold-start and data-sparsity issues in RSs. We observed a significant improvement in rating prediction when trust information from the social network was added. For that purpose, we propose a new dictionary learning technique based on trust information, called TrustDL, in which social network data are employed in the recommendation process through structural details of the trust network. TrustDL integrates two sources of information, trust statements and ratings, into the recommendation model to mitigate both the cold-start and data-sparsity problems. It performs dictionary learning and trust embedding simultaneously to predict unknown rating values. In this paper, the dictionary learning technique is integrated into rating learning, along with a trust-consistency regularization term designed to yield a more accurate feature representation. Moreover, a partially identical trust embedding is developed, in which users with similar rating sets can cluster together and be represented collaboratively. Experiments on four frequently used datasets (Epinions, Ciao, FilmTrust, and Flixster) indicate that the proposed strategy is significantly beneficial.
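
    As a rough illustration of how rating reconstruction, code sparsity, and a trust-consistency term might be combined in a TrustDL-style objective, here is a NumPy sketch of one candidate loss. The masked-factorization form, the Laplacian trust penalty, and the weights are assumptions made for exposition; the paper defines its own formulation.

```python
import numpy as np

def trustdl_objective(R, M, X, D, T, lam_sparse=0.1, lam_trust=0.5):
    """Loss sketch for trust-regularized dictionary learning.

    R: (n_users, n_items) ratings; M: binary mask of observed entries;
    X: (n_users, k) sparse user codes; D: (k, n_items) dictionary;
    T: (n_users, n_users) symmetric binary trust adjacency. The trust
    term is a graph-Laplacian penalty that pulls the codes of
    trust-connected users together.
    """
    fit = np.sum((M * (R - X @ D)) ** 2)           # masked reconstruction
    sparse = lam_sparse * np.sum(np.abs(X))        # sparsity on codes
    L = np.diag(T.sum(axis=1)) - T                 # trust graph Laplacian
    trust = lam_trust * np.trace(X.T @ L @ X)      # trust consistency
    return fit + sparse + trust
```

    Minimizing such an objective by alternating updates of X and D would realize the "dictionary learning and trust embedding simultaneously" idea; the trust term also gives cold-start users a usable code through their neighbors.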