LambdaFM: Learning Optimal Ranking with Factorization Machines Using Lambda Surrogates
State-of-the-art item recommendation algorithms, which apply
Factorization Machines (FM) as a scoring function and
pairwise ranking loss as a trainer (PRFM for short), have
been recently investigated for the implicit feedback based
context-aware recommendation problem (IFCAR). However,
good recommenders place particular emphasis on accuracy
near the top of the ranked list, and typical pairwise loss functions
might not match this requirement well. In this
paper, we demonstrate, both theoretically and empirically,
that PRFM models usually lead to suboptimal item recommendation
results due to such a mismatch. Inspired by the success
of LambdaRank, we introduce Lambda Factorization
Machines (LambdaFM), which is particularly intended for
optimizing ranking performance for IFCAR. We also point
out that the original lambda function suffers from high
computational cost in such settings due
to the large amount of unobserved feedback. Hence, instead
of directly adopting the original lambda strategy, we create
three effective lambda surrogates by conducting a theoretical
analysis for lambda from the top-N optimization perspective.
Further, we prove that the proposed lambda surrogates
are generic and applicable to a large set of pairwise
ranking loss functions. Experimental results demonstrate
that LambdaFM significantly outperforms state-of-the-art algorithms
on three real-world datasets in terms of four standard
ranking measures.
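The core idea above — replacing LambdaRank's expensive lambda with a cheap rank-aware surrogate — can be illustrated with a minimal sketch. The rank estimator below is WARP-style negative sampling, used here as a stand-in for the rank estimates that LambdaFM's dynamic samplers rely on; the weighting function and all names are illustrative assumptions, not the paper's exact surrogates.

```python
import numpy as np

def estimate_rank(score_pos, item_scores, rng, max_trials=100):
    """Estimate the rank of a positive item by sampling negatives until
    one scores higher (WARP-style). Avoids scoring every unobserved item,
    which is the computational bottleneck the abstract refers to."""
    n_items = len(item_scores)
    for t in range(1, max_trials + 1):
        j = rng.integers(n_items)
        if item_scores[j] > score_pos:
            # If the first violating negative is found after t trials,
            # roughly (n_items - 1) / t items are expected to rank above.
            return (n_items - 1) // t
    return 0  # no violator found: positive is effectively at the top

def lambda_weight(rank):
    """Rank-aware weight in the spirit of LambdaRank: the lower the
    positive currently ranks, the larger the gradient push, so the
    top of the list receives most of the optimization effort."""
    return sum(1.0 / r for r in range(1, rank + 2))
```

In a pairwise trainer, this weight would simply scale the BPR-style gradient of each sampled (positive, negative) pair.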
BoostFM: Boosted Factorization Machines for Top-N Feature-based Recommendation
Feature-based matrix factorization techniques such as Factorization Machines (FM) have been proven to achieve impressive accuracy for the rating prediction task. However, most common recommendation scenarios are formulated as a top-N item ranking problem with implicit feedback (e.g., clicks, purchases) rather than explicit ratings. To address this problem, with both implicit feedback and feature information, we propose a feature-based collaborative boosting recommender called BoostFM, which integrates boosting into factorization models during the process of item ranking. Specifically, BoostFM is an adaptive boosting framework that linearly combines multiple homogeneous component recommenders, which are repeatedly constructed on the basis of the individual FM model by a re-weighting scheme. Two ways are proposed to efficiently train the component recommenders from the perspectives of both pairwise and listwise Learning-to-Rank (L2R). The properties of our proposed method are empirically studied on three real-world datasets. The experimental results show that BoostFM outperforms a number of state-of-the-art approaches for top-N recommendation.
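The boosting loop described above can be sketched as follows. This is a deliberately simplified skeleton: the component learner here is a toy weighted-popularity model standing in for the paper's FM components, and the uniform coefficient and rank-based re-weighting rule are assumptions, not BoostFM's exact pairwise/listwise procedures.

```python
import numpy as np

def boosted_scores(interactions, n_users, n_items, n_rounds=3):
    """Sketch of BoostFM's outer loop: repeatedly fit a weak component
    recommender on re-weighted (user, item) pairs and linearly combine
    the component score matrices."""
    w = np.full(len(interactions), 1.0 / len(interactions))  # pair weights
    combined = np.zeros((n_users, n_items))
    for _ in range(n_rounds):
        # Weak learner: weighted item popularity (placeholder for an FM
        # trained with pairwise or listwise L2R on the weighted pairs).
        component = np.zeros((n_users, n_items))
        for (u, i), wi in zip(interactions, w):
            component[:, i] += wi
        # Component coefficient: uniform here; BoostFM derives it from
        # the component's weighted ranking performance.
        alpha = 1.0
        combined += alpha * component
        # Re-weight: up-weight observed pairs the ensemble still ranks poorly.
        ranks = (-combined).argsort(axis=1).argsort(axis=1)  # 0 = top
        raw = np.array([1.0 + ranks[u, i] for (u, i) in interactions])
        w = raw / raw.sum()
    return combined
```

Each round thus concentrates training effort on the interactions the current ensemble gets most wrong, which is the essence of the adaptive re-weighting scheme.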
NAIS: Neural Attentive Item Similarity Model for Recommendation
Item-to-item collaborative filtering (aka. item-based CF) has been long used
for building recommender systems in industrial settings, owing to its
interpretability and efficiency in real-time personalization. It builds a
user's profile as her historically interacted items, recommending new items
that are similar to the user's profile. As such, the key to an item-based CF
method is in the estimation of item similarities. Early approaches use
statistical measures such as cosine similarity and Pearson coefficient to
estimate item similarities, which are less accurate since they lack tailored
optimization for the recommendation task. In recent years, several works
attempt to learn item similarities from data, by expressing the similarity as
an underlying model and estimating model parameters by optimizing a
recommendation-aware objective function. While extensive efforts have been made
to use shallow linear models for learning item similarities, there has been
relatively less work exploring nonlinear neural network models for item-based
CF.
In this work, we propose a neural network model named Neural Attentive Item
Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an
attention network, which is capable of distinguishing which historical items in
a user profile are more important for a prediction. Compared to the
state-of-the-art item-based CF method Factored Item Similarity Model (FISM),
our NAIS has stronger representation power with only a few additional
parameters brought by the attention network. Extensive experiments on two
public benchmarks demonstrate the effectiveness of NAIS. This work is the first
attempt that designs neural network models for item-based CF, opening up new
research possibilities for future developments of neural recommender systems.
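The scoring rule the abstract describes — an attention network deciding which history items matter for a given target — can be sketched in a few lines. This is a minimal sketch assuming one ReLU hidden layer in the attention network and the smoothed-softmax normalizer mentioned in the NAIS paper; parameter names are illustrative.

```python
import numpy as np

def nais_score(user_items, target, P, Q, W, b, h, beta=0.5):
    """Score `target` for a user given their interacted `user_items`.

    P, Q: history-side and target-side item embeddings, shape (n_items, d).
    W, b, h: attention-network parameters (one ReLU hidden layer).
    beta: smoothing exponent on the softmax normalizer, used to damp
    the attention denominator for users with long histories.
    """
    pairwise = P[user_items] * Q[target]           # (n, d) pairwise interactions
    hidden = np.maximum(0.0, pairwise @ W + b)     # (n, k) attention hidden layer
    logits = hidden @ h                            # (n,) attention scores
    weights = np.exp(logits)
    weights = weights / (weights.sum() ** beta)    # smoothed softmax
    dots = pairwise.sum(axis=1)                    # p_j . q_target per history item
    return float(weights @ dots)                   # attention-weighted similarity sum
```

With uniform attention (h = 0) and beta = 1 this degenerates to averaging the item-to-item similarities, which is essentially the FISM baseline the paper compares against.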
Deep Item-based Collaborative Filtering for Top-N Recommendation
Item-based Collaborative Filtering (ICF for short) has been widely adopted in
recommender systems in industry, owing to its strength in user interest
modeling and ease in online personalization. By constructing a user's profile
with the items that the user has consumed, ICF recommends items that are
similar to the user's profile. With the prevalence of machine learning in
recent years, significant progress has been made for ICF by learning item
similarity (or representation) from data. Nevertheless, we argue that most
existing works have only considered linear and shallow relationship between
items, which are insufficient to capture the complicated decision-making
process of users.
In this work, we propose a more expressive ICF solution by accounting for the
nonlinear and higher-order relationship among items. Going beyond modeling only
the second-order interaction (e.g. similarity) between two items, we
additionally consider the interaction among all interacted item pairs by using
nonlinear neural networks. In this way, we can effectively model the
higher-order relationship among items, capturing more complicated effects in
user decision-making. For example, it can differentiate which historical
itemsets in a user's profile are more important in affecting the user to make a
purchase decision on an item. We treat this solution as a deep variant of ICF,
and thus term it DeepICF. To justify our proposal, we perform empirical studies
on two public datasets from MovieLens and Pinterest. Extensive experiments
verify the highly positive effect of higher-order item interaction modeling
with nonlinear neural networks. Moreover, we demonstrate that with more
fine-grained second-order interaction modeling via an attention network, the
performance of our DeepICF method can be further improved.
Comment: 25 pages, submitted to TOI
Regularizing Matrix Factorization with User and Item Embeddings for Recommendation
Following recent successes in exploiting both latent factor and word
embedding models in recommendation, we propose a novel Regularized
Multi-Embedding (RME) based recommendation model that simultaneously
encapsulates the following ideas via decomposition: (1) which items a user
likes, (2) which two users co-like the same items, (3) which two items users
often co-liked, and (4) which two items users often co-disliked. In
experimental validation, the RME outperforms competing state-of-the-art models
in both explicit and implicit feedback datasets, significantly improving
Recall@5 by 5.9~7.0%, NDCG@20 by 4.3~5.6%, and MAP@10 by 7.9~8.9%. In addition,
under the cold-start scenario for users with the fewest interactions,
the RME outperforms the competing models in NDCG@5 by 20.2% and 29.4% on the
MovieLens-10M and MovieLens-20M datasets, respectively. Our datasets and source
code are available at https://github.com/thanhdtran/RME.git.
Comment: CIKM 201
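The four ideas listed above translate into a joint objective with one shared decomposition and three auxiliary co-factorization terms. The sketch below writes that objective directly; in the paper the co-occurrence matrices are SPPMI-shifted, whereas here they are plain inputs, and the trade-off weights alpha/beta/gamma are hypothetical names.

```python
import numpy as np

def rme_loss(R, Mu, Ml, Md, alpha, beta, gamma, U, V, Uc, Vl, Vd, lam=0.01):
    """Sketch of the RME objective as a sum of four squared-error terms:
    (1) user-item preferences      R  ~ U  Vl?  no: R  ~ U V^T
    (2) user co-like matrix        Mu ~ U Uc^T
    (3) item co-liked matrix       Ml ~ V Vl^T
    (4) item co-disliked matrix    Md ~ V Vd^T
    plus L2 regularization on all embedding matrices."""
    loss = np.sum((R - U @ V.T) ** 2)                 # (1) which items a user likes
    loss += alpha * np.sum((Mu - U @ Uc.T) ** 2)      # (2) users who co-like items
    loss += beta * np.sum((Ml - V @ Vl.T) ** 2)       # (3) items often co-liked
    loss += gamma * np.sum((Md - V @ Vd.T) ** 2)      # (4) items often co-disliked
    for M in (U, V, Uc, Vl, Vd):
        loss += lam * np.sum(M ** 2)                  # L2 regularization
    return float(loss)
```

Because U and V appear in several terms, the co-liked and co-disliked signals regularize the same embeddings that reconstruct the preference matrix, which is the "regularizing matrix factorization" idea in the title.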
Fast Matrix Factorization for Online Recommendation with Implicit Feedback
This paper contributes improvements on both the effectiveness and efficiency
of Matrix Factorization (MF) methods for implicit feedback. We highlight two
critical issues of existing works. First, due to the large space of unobserved
feedback, most existing works resort to assigning a uniform weight to the missing
data to reduce computational complexity. However, such a uniform assumption is
invalid in real-world settings. Second, most methods are also designed in an
offline setting and fail to keep up with the dynamic nature of online data. We
address the above two issues in learning MF models from implicit feedback. We
first propose to weight the missing data based on item popularity, which is
more effective and flexible than the uniform-weight assumption. However, such a
non-uniform weighting poses an efficiency challenge in learning the model. To
address this, we specifically design a new learning algorithm based on the
element-wise Alternating Least Squares (eALS) technique, for efficiently
optimizing an MF model with variably-weighted missing data. We exploit this
efficiency to then seamlessly devise an incremental update strategy that
instantly refreshes an MF model given new feedback. Through comprehensive
experiments on two public datasets in both offline and online protocols, we
show that our eALS method consistently outperforms state-of-the-art implicit MF
methods. Our implementation is available at
https://github.com/hexiangnan/sigir16-eals.
Comment: 10 pages, 8 figures
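The two ingredients above — popularity-based weights on missing data and a least-squares factor update — can be sketched directly. Note the update shown is the naive weighted ALS form; eALS's contribution is an element-wise reformulation that avoids materializing the full weight vector per user, which this sketch does not reproduce. The hyperparameter names c0 and alpha follow the paper's style but are assumptions here.

```python
import numpy as np

def popularity_weights(item_counts, c0=1.0, alpha=0.75):
    """Weight missing entries by item popularity: a popular item that a
    user skipped is stronger negative evidence than an obscure one,
    replacing the uniform-weight assumption criticized in the abstract."""
    f = item_counts ** alpha
    return c0 * f / f.sum()

def update_user_factor(r_u, c_u, Q, lam=0.01):
    """One user-factor update by weighted least squares:
        (Q^T C_u Q + lam * I) p_u = Q^T C_u r_u
    r_u: the user's (implicit) feedback vector over items.
    c_u: per-item confidence weights, e.g. from popularity_weights.
    Q:   item factor matrix, shape (n_items, k)."""
    k = Q.shape[1]
    A = Q.T @ (c_u[:, None] * Q) + lam * np.eye(k)
    b = Q.T @ (c_u * r_u)
    return np.linalg.solve(A, b)
```

The incremental (online) strategy in the paper amounts to re-running updates like this only for the user and item touched by each new interaction.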