LambdaFM: Learning Optimal Ranking with Factorization Machines Using Lambda Surrogates
State-of-the-art item recommendation algorithms, which apply
Factorization Machines (FM) as a scoring function and
pairwise ranking loss as a trainer (PRFM for short), have
been recently investigated for the implicit feedback based
context-aware recommendation problem (IFCAR). However,
good recommenders place particular emphasis on accuracy near
the top of the ranked list, and typical pairwise loss functions
do not match this requirement well. In this paper, we demonstrate,
both theoretically and empirically, that PRFM models usually lead
to suboptimal item recommendation results because of this
mismatch. Inspired by the success
of LambdaRank, we introduce Lambda Factorization
Machines (LambdaFM), which is particularly intended for
optimizing ranking performance for IFCAR. We also point
out that the original lambda function incurs prohibitive
computational cost in such settings because of the large amount
of unobserved feedback. Hence, instead
of directly adopting the original lambda strategy, we create
three effective lambda surrogates based on a theoretical
analysis of lambda from the top-N optimization perspective.
Further, we prove that the proposed lambda surrogates
are generic and applicable to a large set of pairwise
ranking loss functions. Experimental results demonstrate that
LambdaFM significantly outperforms state-of-the-art algorithms
on three real-world datasets in terms of four standard
ranking measures.
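
As a rough illustration of the abstract above, the sketch below shows a pairwise FM ranker whose BPR-style update is scaled by a rank-aware lambda weight, so that mistakes near the top of the list count more. It is not the authors' code: the names (fm_score, lambda_weight, sgd_pair_update) and the particular logarithmic weighting are assumptions chosen only to illustrate the idea of a lambda surrogate.

```python
# Minimal sketch, not the authors' code: a pairwise FM ranker whose
# BPR-style update is scaled by a rank-aware "lambda" weight so that
# errors near the top of the list count more. Names and the particular
# logarithmic weighting are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_features, k = 1000, 16                          # feature dim, latent factors
w = np.zeros(n_features)                          # linear weights
V = 0.01 * rng.standard_normal((n_features, k))   # factor matrix

def fm_score(idx):
    """2-way FM score for a sample given by its active feature indices
    (all active features assumed to have value 1)."""
    s = V[idx].sum(axis=0)
    return w[idx].sum() + 0.5 * (s @ s - (V[idx] ** 2).sum())

def lambda_weight(rank, total):
    """Rank-aware weight: pushes harder when the positive item is
    currently ranked low (one of several conceivable surrogates)."""
    return np.log(1.0 + rank) / np.log(1.0 + total)

def sgd_pair_update(pos_idx, neg_idx, rank, total, lr=0.05, reg=1e-4):
    """One weighted BPR-style update on a (positive, negative) pair."""
    diff = fm_score(pos_idx) - fm_score(neg_idx)
    grad_mult = -lambda_weight(rank, total) / (1.0 + np.exp(diff))
    for sign, idx in ((1.0, pos_idx), (-1.0, neg_idx)):
        s = V[idx].sum(axis=0)
        w[idx] -= lr * (grad_mult * sign + reg * w[idx])
        V[idx] -= lr * (grad_mult * sign * (s - V[idx]) + reg * V[idx])

# Toy usage: index 3 is a shared context feature, 200/777 are item features;
# rank/total describe where the positive item currently sits in the list.
sgd_pair_update([3, 200], [3, 777], rank=50, total=500)
```
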
BoostFM: Boosted Factorization Machines for Top-N Feature-based Recommendation
Feature-based matrix factorization techniques such as Factorization Machines (FM) have been proven to achieve impressive accuracy for the rating prediction task. However, most common recommendation scenarios are formulated as a top-N item ranking problem with implicit feedback (e.g., clicks, purchases) rather than explicit ratings. To address this problem using both implicit feedback and feature information, we propose a feature-based collaborative boosting recommender called BoostFM, which integrates boosting into factorization models during the item ranking process. Specifically, BoostFM is an adaptive boosting framework that linearly combines multiple homogeneous component recommenders, which are repeatedly constructed from the individual FM model via a re-weighting scheme. We propose two ways to efficiently train the component recommenders, from the pairwise and the listwise Learning-to-Rank (L2R) perspectives. The properties of the proposed method are studied empirically on three real-world datasets. The experimental results show that BoostFM outperforms a number of state-of-the-art approaches for top-N recommendation.
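
The boosting loop described above can be pictured with the following sketch, which is an assumption-laden illustration rather than the paper's implementation: fit_component stands in for training one FM-style component recommender, and the re-weighting rule (raising the weight of observed pairs that the current ensemble ranks poorly) is a simplified stand-in for the paper's scheme.

```python
# Illustrative sketch, not the paper's implementation: component
# recommenders are fitted on re-weighted observations and combined
# linearly. fit_component is a toy stand-in for training one FM-style
# ranker, and the re-weighting rule is deliberately simplified.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, T = 50, 100, 5
R = (rng.random((n_users, n_items)) < 0.05).astype(float)    # implicit feedback

def fit_component(weights, k=8, epochs=20, lr=0.1):
    """Toy component recommender: weighted MF trained by SGD on observed pairs."""
    P = 0.01 * rng.standard_normal((n_users, k))
    Q = 0.01 * rng.standard_normal((n_items, k))
    users, items = np.nonzero(R)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = 1.0 - P[u] @ Q[i]
            P[u] += lr * weights[u, i] * err * Q[i]
            Q[i] += lr * weights[u, i] * err * P[u]
    return P @ Q.T                                            # dense score matrix

weights = np.full_like(R, 1.0)                    # start from uniform weights
ensemble = np.zeros_like(R)
for t in range(T):
    ensemble += fit_component(weights)            # uniform combination here
    # Re-weight: observed pairs the current ensemble ranks poorly get a
    # larger weight in the next round (a simplified stand-in scheme).
    ranks = (-ensemble).argsort(axis=1).argsort(axis=1)       # 0 = best
    weights = np.where(R > 0, 1.0 + ranks / n_items, 0.0)
```
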
Discrete Factorization Machines for Fast Feature-based Recommendation
User and item side features are crucial for accurate
recommendation. However, the large number of feature dimensions, e.g., usually
larger than 10^7, results in expensive storage and computational cost. This
prohibits fast recommendation especially on mobile applications where the
computational resource is very limited. In this paper, we develop a generic
feature-based recommendation model, called Discrete Factorization Machine
(DFM), for fast and accurate recommendation. DFM binarizes the real-valued
model parameters (e.g., float32) of every feature embedding into binary codes
(e.g., boolean), and thus supports efficient storage and fast user-item score
computation. To avoid the severe quantization loss of the binarization, we
propose a convergent updating rule that resolves the challenging discrete
optimization of DFM. Through extensive experiments on two real-world datasets,
we show that 1) DFM consistently outperforms state-of-the-art binarized
recommendation models, and 2) DFM shows very competitive performance compared
to its real-valued version (FM), demonstrating the minimized quantization loss.
This work was accepted by IJCAI 2018.
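
To see why binarized embeddings make scoring fast, the sketch below shows sign binarization of real-valued vectors, packing the codes into machine words, and recovering the inner product of the resulting {-1,+1} codes via XOR and popcount. It illustrates only the storage and score-computation idea, not DFM's convergent updating rule; all names here are assumptions.

```python
# Illustration only (not DFM's training algorithm): sign-binarized
# embeddings stored as packed bits, with the user-item score recovered
# from XOR + popcount. All names here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
k = 64                                    # code length in bits
user_emb = rng.standard_normal(k)         # real-valued embeddings (e.g. float32)
item_emb = rng.standard_normal(k)

def binarize(v):
    """Map a real vector to a {0,1} bit array via its signs."""
    return (v >= 0).astype(np.uint8)

def pack(bits):
    """Pack 64 bits into one unsigned integer for compact storage."""
    return int(np.packbits(bits).view('>u8')[0])

def binary_inner_product(code_a, code_b, n_bits):
    """Inner product of two {-1,+1} codes from their packed form:
    <a, b> = n_bits - 2 * Hamming(a, b)."""
    return n_bits - 2 * bin(code_a ^ code_b).count('1')

ua, ia = binarize(user_emb), binarize(item_emb)
fast = binary_inner_product(pack(ua), pack(ia), k)
exact = int(np.sign(user_emb) @ np.sign(item_emb))   # reference on +/-1 vectors
assert fast == exact
```
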
Fast Matrix Factorization for Online Recommendation with Implicit Feedback
This paper contributes improvements to both the effectiveness and efficiency
of Matrix Factorization (MF) methods for implicit feedback. We highlight two
critical issues of existing works. First, due to the large space of unobserved
feedback, most existing works resort to assigning a uniform weight to the
missing data to reduce computational complexity. However, such a uniform
assumption is
invalid in real-world settings. Second, most methods are also designed in an
offline setting and fail to keep up with the dynamic nature of online data. We
address the above two issues in learning MF models from implicit feedback. We
first propose to weight the missing data based on item popularity, which is
more effective and flexible than the uniform-weight assumption. However, such a
non-uniform weighting poses an efficiency challenge in learning the model. To
address this, we specifically design a new learning algorithm based on the
element-wise Alternating Least Squares (eALS) technique, for efficiently
optimizing an MF model with variably-weighted missing data. We exploit this
efficiency to then seamlessly devise an incremental update strategy that
instantly refreshes an MF model given new feedback. Through comprehensive
experiments on two public datasets in both offline and online protocols, we
show that our eALS method consistently outperforms state-of-the-art implicit MF
methods. Our implementation is available at
https://github.com/hexiangnan/sigir16-eals. Comment: 10 pages, 8 figures.
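
A simplified sketch of the two ideas in the abstract follows: popularity-based weights c_i on the missing data, and an element-wise ALS update of the user factors in which the missing-data term is folded into the cache Sq = Q^T diag(c) Q so each coordinate update stays cheap. Item updates and the incremental online refresh are omitted; the variable names and constants are illustrative assumptions, and the linked repository contains the authors' implementation.

```python
# Simplified sketch under stated assumptions: popularity-based weights c_i
# on missing data, and the element-wise ALS update of user factors with the
# missing-data term folded into the cache Sq = Q^T diag(c) Q. Item updates
# and the incremental online refresh are omitted.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 40, 80, 8
lam, w_obs, alpha, c0 = 0.01, 1.0, 0.5, 0.5

R = (rng.random((n_users, n_items)) < 0.05).astype(float)   # implicit feedback
pop = R.sum(axis=0) + 1e-12
c = c0 * pop**alpha / (pop**alpha).sum()     # popularity-based missing weights

P = 0.01 * rng.standard_normal((n_users, k))
Q = 0.01 * rng.standard_normal((n_items, k))

def update_user_factors(P, Q):
    Sq = (Q * c[:, None]).T @ Q              # k x k cache shared by all users
    for u in range(n_users):
        items = np.nonzero(R[u])[0]
        Qu = Q[items]                        # factors of u's observed items
        r_hat = Qu @ P[u]                    # current predictions on them
        for f in range(k):
            qf = Qu[:, f]
            r_hat_f = r_hat - P[u, f] * qf   # predictions without factor f
            num = ((w_obs * R[u, items] - (w_obs - c[items]) * r_hat_f) * qf).sum()
            num -= P[u] @ Sq[:, f] - P[u, f] * Sq[f, f]
            den = ((w_obs - c[items]) * qf**2).sum() + Sq[f, f] + lam
            new = num / den
            r_hat += (new - P[u, f]) * qf    # keep the prediction cache in sync
            P[u, f] = new

update_user_factors(P, Q)
```
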
Neural Collaborative Filtering
In recent years, deep neural networks have yielded immense success on speech
recognition, computer vision and natural language processing. However, the
exploration of deep neural networks on recommender systems has received
relatively less scrutiny. In this work, we strive to develop techniques based
on neural networks to tackle the key problem in recommendation -- collaborative
filtering -- on the basis of implicit feedback. Although some recent works have
employed deep learning for recommendation, they primarily used it to model
auxiliary information, such as textual descriptions of items and acoustic
features of music. When it comes to modeling the key factor in collaborative
filtering -- the interaction between user and item features, they still
resorted to matrix factorization and applied an inner product on the latent
features of users and items. By replacing the inner product with a neural
architecture that can learn an arbitrary function from data, we present a
general framework named NCF, short for Neural network-based Collaborative
Filtering. NCF is generic and can express and generalize matrix factorization
under its framework. To supercharge NCF modelling with non-linearities, we
propose to leverage a multi-layer perceptron to learn the user-item interaction
function. Extensive experiments on two real-world datasets show significant
improvements of our proposed NCF framework over the state-of-the-art methods.
Empirical evidence shows that using deeper layers of neural networks offers
better recommendation performance. Comment: 10 pages, 7 figures.
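
The core modelling change is easy to state in code: keep the user and item embeddings, but let a multi-layer perceptron learn the interaction function instead of hard-coding it as an inner product. The sketch below (assumed shapes and names, forward pass only, not the authors' code) contrasts a plain MF score with an NCF-style score.

```python
# Sketch with assumed shapes and names (forward pass only, not the
# authors' code): keep user/item embeddings but let an MLP learn the
# interaction function instead of a fixed inner product.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 200, 16

P = 0.01 * rng.standard_normal((n_users, k))    # user embeddings
Q = 0.01 * rng.standard_normal((n_items, k))    # item embeddings
W1 = 0.01 * rng.standard_normal((2 * k, 32))    # one hidden MLP layer
b1 = np.zeros(32)
W2 = 0.01 * rng.standard_normal((32, 1))
b2 = np.zeros(1)

def mf_score(u, i):
    """Plain matrix factorization: a fixed inner product of latent vectors."""
    return P[u] @ Q[i]

def ncf_score(u, i):
    """NCF-style score: an MLP over the concatenated embeddings learns the
    user-item interaction function."""
    x = np.concatenate([P[u], Q[i]])
    h = np.maximum(0.0, x @ W1 + b1)                  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)[0]))    # interaction probability

print(mf_score(3, 7), ncf_score(3, 7))
```
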