
    A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization

    We present a general approach to collaborative filtering (CF) that uses spectral regularization to learn linear operators from "users" to the "objects" they rate. Recent low-rank matrix completion approaches to CF are shown to be special cases. Unlike existing regularization-based CF methods, however, our approach can also incorporate additional information such as attributes of the users or the objects. We then provide novel representer theorems that we use to develop new estimation methods. We provide learning algorithms based on low-rank decompositions and test them on a standard CF dataset. The experiments indicate the advantages of generalizing existing regularization-based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can also be seen as special cases of our proposed approach.
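
    The low-rank special case mentioned in the abstract can be made concrete with a short sketch. The Python snippet below (names such as fit_low_rank, rank and lam are illustrative, not from the paper) fits a factorized model U V^T to observed (user, item, rating) triples by stochastic gradient descent; the Frobenius penalties on the two factors act as a spectral, nuclear-norm-like regularizer on their product. Incorporating user or object attributes, as the paper proposes, would amount to expressing the factors through attribute-based feature maps.

        # Minimal sketch of the plain low-rank special case; not the paper's
        # operator-estimation method. All names and defaults are illustrative.
        import numpy as np

        def fit_low_rank(triples, n_users, n_items, rank=10, lam=0.1,
                         lr=0.01, epochs=50, seed=0):
            """SGD on a factorized model U @ V.T fitted to observed ratings."""
            rng = np.random.default_rng(seed)
            U = 0.1 * rng.standard_normal((n_users, rank))
            V = 0.1 * rng.standard_normal((n_items, rank))
            for _ in range(epochs):
                for u, i, r in triples:
                    err = r - U[u] @ V[i]                     # prediction error
                    U[u], V[i] = (U[u] + lr * (err * V[i] - lam * U[u]),
                                  V[i] + lr * (err * U[u] - lam * V[i]))
            return U, V

        # Example: U, V = fit_low_rank([(0, 1, 4.0), (2, 0, 2.5)], n_users=3, n_items=2)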

    Reducing offline evaluation bias of collaborative filtering algorithms

    Recommendation systems have been integrated into most large online systems to filter and rank information according to user profiles. They thus influence the way users interact with the system and, as a consequence, bias the evaluation of a recommendation algorithm's performance when it is computed from historical data (offline evaluation). This paper presents a new application of weighted offline evaluation to reduce this bias for collaborative filtering algorithms. Comment: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Apr 2015, Bruges, Belgium, pp. 137-142. Proceedings of the 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2015).
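
    As a rough illustration of what a weighted offline evaluation can look like, the sketch below reweights test errors so that over-represented items contribute less to the metric. The inverse-popularity weighting and the names (weighted_rmse, item_counts, alpha) are assumptions made for illustration; the paper's actual weighting scheme is not reproduced here.

        # Hedged sketch: weighted RMSE with inverse item-popularity weights
        # (an assumed weighting, not necessarily the one used in the paper).
        import numpy as np

        def weighted_rmse(test_triples, predict, item_counts, alpha=1.0):
            """test_triples: iterable of (user, item, rating);
            predict(u, i) -> predicted rating;
            item_counts[i]: number of training ratings of item i."""
            sq_errs, weights = [], []
            for u, i, r in test_triples:
                w = 1.0 / (item_counts[i] ** alpha)   # down-weight popular items
                sq_errs.append(w * (r - predict(u, i)) ** 2)
                weights.append(w)
            return np.sqrt(sum(sq_errs) / sum(weights))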

    Accelerated incremental listwise learning to rank for collaborative filtering

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2017. Abstract: The enormous volume of information available nowadays increases the complexity of the decision-making process and degrades the quality of decisions. To improve the quality of decisions, recommender systems have been applied with significant results. In this context, collaborative filtering plays an active role in overcoming the information overload problem. In a scenario where new ratings arrive constantly, a static model becomes outdated quickly, so the rate at which the model is updated is a critical factor. We propose an accelerated incremental listwise learning-to-rank method for collaborative filtering. To achieve this, we apply an acceleration technique to an incremental collaborative filtering learning approach. Results on real-world datasets confirm that the proposed algorithm learns faster while maintaining the accuracy of the model.
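
    A minimal sketch of an accelerated incremental update is given below, assuming the acceleration is momentum-style (Nesterov) and substituting a pointwise squared error for the dissertation's listwise loss; class and parameter names are illustrative only.

        # Sketch: incremental matrix-factorization update with Nesterov-style
        # acceleration, processing one new rating at a time. This is an
        # assumption-laden stand-in, not the dissertation's exact algorithm.
        import numpy as np

        class IncrementalMF:
            def __init__(self, n_users, n_items, rank=10, lr=0.05, lam=0.05,
                         momentum=0.9, seed=0):
                rng = np.random.default_rng(seed)
                self.U = 0.1 * rng.standard_normal((n_users, rank))
                self.V = 0.1 * rng.standard_normal((n_items, rank))
                self.vU = np.zeros_like(self.U)   # velocity buffers (acceleration)
                self.vV = np.zeros_like(self.V)
                self.lr, self.lam, self.mu = lr, lam, momentum

            def update(self, u, i, r):
                """Fold in one new rating without retraining from scratch."""
                Uu = self.U[u] + self.mu * self.vU[u]   # Nesterov look-ahead
                Vi = self.V[i] + self.mu * self.vV[i]
                err = r - Uu @ Vi
                gU = -err * Vi + self.lam * Uu          # gradients at look-ahead
                gV = -err * Uu + self.lam * Vi
                self.vU[u] = self.mu * self.vU[u] - self.lr * gU
                self.vV[i] = self.mu * self.vV[i] - self.lr * gV
                self.U[u] += self.vU[u]
                self.V[i] += self.vV[i]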

    Learning Output Kernels for Multi-Task Problems

    Simultaneously solving multiple related learning tasks is beneficial under a variety of circumstances, but the prior knowledge necessary to correctly model task relationships is rarely available in practice. In this paper, we develop a novel kernel-based multi-task learning technique that automatically reveals structural inter-task relationships. Building on the framework of output kernel learning (OKL), we introduce a method that jointly learns multiple functions and a low-rank multi-task kernel by solving a non-convex regularization problem. Optimization is carried out via a block coordinate descent strategy, where each subproblem is solved using suitable conjugate gradient (CG) type iterative methods for linear operator equations. The effectiveness of the proposed approach is demonstrated on pharmacological and collaborative filtering data.
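
    The block coordinate descent described in the abstract can be sketched as follows: with the output kernel fixed, the coefficient matrix solves a Sylvester-type linear operator equation, handled matrix-free with conjugate gradient; the output kernel is then refit from the coefficients. The update used below for the output kernel (a normalized C^T K C) is a placeholder assumption, not the paper's exact formula.

        # Simplified sketch of OKL-style block coordinate descent. The CG step
        # solves K C L + lam * C = Y for C; the L-update is an assumed proxy.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def okl_bcd(K, Y, lam=1.0, outer_iters=10):
            """K: (n, n) PSD input kernel; Y: (n, m) multi-task targets."""
            n, m = Y.shape
            L = np.eye(m)                      # initial output kernel
            C = np.zeros((n, m))
            for _ in range(outer_iters):
                def matvec(c):                 # matrix-free Sylvester operator
                    Cmat = c.reshape(n, m)
                    return (K @ Cmat @ L + lam * Cmat).ravel()
                A = LinearOperator((n * m, n * m), matvec=matvec, dtype=Y.dtype)
                c, _ = cg(A, Y.ravel(), x0=C.ravel(), maxiter=200)
                C = c.reshape(n, m)
                L = C.T @ K @ C                # refit the PSD output kernel
                L /= max(np.trace(L), 1e-12)   # keep its scale bounded
            return C, L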