Top-N Recommendation on Graphs
Recommender systems play an increasingly important role in online
applications to help users find what they need or prefer. Collaborative
filtering algorithms that generate predictions by analyzing the user-item
rating matrix perform poorly when the matrix is sparse. To alleviate this
problem, this paper proposes a simple recommendation algorithm that fully
exploits the similarity information among users and items and intrinsic
structural information of the user-item matrix. The proposed method constructs
a new representation which preserves affinity and structure information in the
user-item rating matrix and then performs the recommendation task. To capture
proximity information about users and items, two graphs are constructed.
The manifold learning idea is used to constrain the new representation to be
smooth on these graphs, thereby enforcing user and item proximities. Our model
is formulated as a convex optimization problem whose solution requires only
solving the well-known Sylvester equation. We carry out extensive empirical
evaluations on six benchmark datasets to show the effectiveness of this
approach.
Comment: CIKM 201
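The closed-form step the abstract refers to can be sketched with SciPy's Sylvester solver. This is a minimal sketch, not the paper's exact objective: the quadratic fitting term, the unnormalized graph Laplacians, and the weights `alpha`/`beta` are assumptions made here for illustration.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W (an assumption of this sketch)."""
    return np.diag(W.sum(axis=1)) - W

def graph_smooth_scores(R, L_u, L_i, alpha=0.1, beta=0.1):
    """Minimize ||X - R||_F^2 + alpha*tr(X^T L_u X) + beta*tr(X L_i X^T),
    smoothing X over the user graph (L_u) and item graph (L_i).
    Setting the gradient to zero yields the Sylvester equation
    (I + alpha*L_u) X + X (beta*L_i) = R, solved here in closed form."""
    A = np.eye(R.shape[0]) + alpha * L_u
    B = beta * L_i
    return solve_sylvester(A, B, R)
```

Because the objective is a sum of convex quadratics, the stationarity condition is linear in X, which is why a single Sylvester solve suffices.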
Kernel Methods for Collaborative Filtering
The goal of this thesis is to extend kernel methods to matrix factorization (MF) for collaborative filtering (CF). In the current literature, MF methods usually assume that the correlated data lie on a linear hyperplane, which is not always the case. The best-known kernel method is the support vector machine (SVM), which handles linearly non-separable data. In this thesis, we apply kernel methods to MF, embedding the data into a possibly higher-dimensional space and conducting factorization in that space. To improve kernelized matrix factorization, we apply multi-kernel learning methods to select optimal kernel functions from the candidates and introduce L2-norm regularization on the weight learning process. In our empirical study, we conduct experiments on three real-world datasets. The results suggest that the proposed method can improve prediction accuracy, surpassing state-of-the-art CF methods.
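A minimal sketch of what kernelizing MF can look like, assuming a polynomial kernel and plain SGD; the thesis's multi-kernel learning and L2-regularized weight selection are not reproduced here, and all hyperparameter values are illustrative.

```python
import numpy as np

def kernel_mf(R, mask, rank=5, degree=2, c=1.0, lr=0.005, reg=0.02,
              epochs=500, seed=0):
    """Kernelized MF sketch: predict r_hat = k(u_i, v_j) with an assumed
    polynomial kernel k(u, v) = (u.v + c)^degree, trained by SGD
    on the observed entries indicated by `mask`."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            s = U[i] @ V[j] + c
            err = R[i, j] - s ** degree          # residual on this rating
            grad = degree * s ** (degree - 1)    # d r_hat / d s
            U[i] += lr * (err * grad * V[j] - reg * U[i])
            V[j] += lr * (err * grad * U[i] - reg * V[j])
    kernel = lambda u, v: (u @ v + c) ** degree
    return U, V, kernel
```

With `degree=1` and `c=0` this reduces to ordinary inner-product MF, which is the linear-hyperplane assumption the thesis argues against.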
Interaction-aware Factorization Machines for Recommender Systems
Factorization Machines (FMs) are a widely used supervised learning approach
that effectively models feature interactions. Despite the successful
application of FM and its many deep learning variants, treating every feature
interaction fairly may degrade the performance. For example, the interactions
of a useless feature may introduce noise; the importance of a feature may also
differ when interacting with different features. In this work, we propose a
novel model named \emph{Interaction-aware Factorization Machine} (IFM) by
introducing Interaction-Aware Mechanism (IAM), which comprises the
\emph{feature aspect} and the \emph{field aspect}, to learn flexible
interactions on two levels. The feature aspect learns feature interaction
importance via an attention network while the field aspect learns the feature
interaction effect as a parametric similarity of the feature interaction vector
and the corresponding field interaction prototype. IFM introduces more
structured control and learns feature interaction importance in a stratified
manner, which allows for more leverage in tweaking the interactions on both
feature-wise and field-wise levels. Besides, we give a more generalized
architecture and propose Interaction-aware Neural Network (INN) and DeepIFM to
capture higher-order interactions. To further improve both the performance and
efficiency of IFM, a sampling scheme is developed to select interactions based
on the field aspect importance. The experimental results from two well-known
datasets show the superiority of the proposed models over the state-of-the-art
methods.
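The feature aspect described above amounts to attention-weighted pooling over pairwise interaction vectors. A minimal sketch of that idea follows; the one-layer ReLU attention net and all parameter names are illustrative assumptions, not the paper's exact architecture, and the field aspect is omitted.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_fm_score(embeds, W_att, h_att, w_lin, b):
    """Score one sample: attention-weight every pairwise interaction
    vector e_i * e_j, pool them, and map to a scalar prediction.
    Returns the score and the learned interaction importances."""
    pair_vecs, logits = [], []
    n = len(embeds)
    for i in range(n):
        for j in range(i + 1, n):
            v = embeds[i] * embeds[j]            # element-wise interaction
            pair_vecs.append(v)
            # one-layer ReLU attention net (an assumption of this sketch)
            logits.append(h_att @ np.maximum(W_att @ v, 0.0))
    a = softmax(np.array(logits))                # interaction importances
    pooled = sum(w * v for w, v in zip(a, pair_vecs))
    return b + w_lin @ pooled, a
```

A plain FM would weight all interactions equally; here a useless feature's interactions can be attenuated by the softmax weights.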
Matrix Factorization Techniques for Context-Aware Collaborative Filtering Recommender Systems: A Survey
Collaborative Filtering Recommender Systems predict user preferences for online information, products or services by learning from past user-item relationships. A predominant approach to Collaborative Filtering is Neighborhood-based, where a user-item preference rating is computed from ratings of similar items and/or users. This approach encounters data sparsity and scalability limitations as the volume of accessible information and the number of active users continue to grow, leading to performance degradation, poor-quality recommendations and inaccurate predictions. Despite these drawbacks, the problem of information overload has led to great interest in personalization techniques. The incorporation of context information and Matrix and Tensor Factorization techniques has proved to be a promising solution to some of these challenges. We conducted a focused review of the literature on Context-aware Recommender Systems utilizing Matrix Factorization approaches. This survey paper presents a detailed literature review of Context-aware Recommender Systems, approaches to improving performance for large-scale datasets, and the impact of incorporating contextual information on the quality and accuracy of recommendations. The results of this survey can be used as a basic reference for improving and optimizing existing Context-aware Collaborative Filtering based Recommender Systems. The main contribution of this paper is a survey of Matrix Factorization techniques for Context-aware Collaborative Filtering Recommender Systems.
A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression
Many machine learning problems can be formulated as predicting labels for a
pair of objects. Problems of that kind are often referred to as pairwise
learning, dyadic prediction or network inference problems. During the last
decade kernel methods have played a dominant role in pairwise learning. They
still obtain a state-of-the-art predictive performance, but a theoretical
analysis of their behavior has been underexplored in the machine learning
literature.
In this work we review and unify existing kernel-based algorithms that are
commonly used in different pairwise learning settings, ranging from matrix
filtering to zero-shot learning. To this end, we focus on closed-form efficient
instantiations of Kronecker kernel ridge regression. We show that independent
task kernel ridge regression, two-step kernel ridge regression and a linear
matrix filter arise naturally as a special case of Kronecker kernel ridge
regression, implying that all these methods implicitly minimize a squared loss.
In addition, we analyze universality, consistency and spectral filtering
properties. Our theoretical results provide valuable insight for assessing the
advantages and limitations of existing pairwise learning methods.
Comment: arXiv admin note: text overlap with arXiv:1606.0427
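The closed-form instantiation of Kronecker kernel ridge regression can be sketched via the eigendecompositions of the two kernel matrices, which avoids ever forming the Kronecker product. This is a sketch under stated assumptions: a single ridge parameter `lam`, symmetric PSD kernels, and fully observed labels.

```python
import numpy as np

def kronecker_krr_fit(K_u, K_v, Y, lam=1.0):
    """Closed-form Kronecker kernel ridge regression (a sketch).
    Solves (K_v (x) K_u + lam*I) vec(C) = vec(Y), i.e. the matrix
    equation K_u C K_v + lam*C = Y, using the eigendecompositions
    K_u = Qu diag(wu) Qu^T and K_v = Qv diag(wv) Qv^T."""
    wu, Qu = np.linalg.eigh(K_u)
    wv, Qv = np.linalg.eigh(K_v)
    # Spectral filter: one scalar per eigenvalue pair (wu_i * wv_j + lam)
    filt = 1.0 / (np.outer(wu, wv) + lam)
    return Qu @ (filt * (Qu.T @ Y @ Qv)) @ Qv.T

def kronecker_krr_predict(K_u, K_v, C):
    # In-sample predictions: F = K_u C K_v
    return K_u @ C @ K_v
```

The `filt` matrix makes the spectral-filtering view explicit: each eigenvalue pair of the two kernels is shrunk by `1 / (wu_i * wv_j + lam)`, which is the sense in which the methods unified above act as filters on the label matrix.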