Sampler Design for Bayesian Personalized Ranking by Leveraging View Data
Bayesian Personalized Ranking (BPR) is a representative pairwise learning
method for optimizing recommendation models. It is widely known that the
performance of BPR depends largely on the quality of the negative sampler. In this
paper, we make two contributions with respect to BPR. First, we find that
sampling negative items from the whole space is unnecessary and may even
degrade the performance. Second, focusing on the purchase feedback of
E-commerce, we propose an effective sampler for BPR by leveraging the
additional view data. In our proposed sampler, users' viewed interactions are
treated as an intermediate level of feedback between purchased and unobserved
interactions. The pairwise rankings of user preference among these three types
of interactions are learned jointly, and a user-oriented weighting strategy is
applied during the learning process, making the method more effective and flexible.
Compared to the vanilla BPR that applies a uniform sampler on all candidates,
our view-enhanced sampler yields relative improvements of 37.03% and 16.40% on
two real-world datasets. Our study demonstrates the importance of considering
users' additional feedback when modeling their preferences for different items,
which avoids sampling negative items indiscriminately and inefficiently.
Comment: submitted to IEEE Transactions on Knowledge and Data Engineering
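The three-level ranking this abstract describes (purchased over viewed, viewed over unobserved) can be sketched as a sum of two BPR-style pairwise terms. This is a minimal illustration under assumed names (`bpr_view_loss`, the toy score matrix); the paper's user-oriented weighting strategy is omitted.

```python
import numpy as np

def bpr_view_loss(score, u, purchased, viewed, unobserved):
    """Joint pairwise losses for purchased > viewed > unobserved.

    An illustrative sketch, not the authors' implementation: the
    user-oriented per-level weights from the paper are left out.
    """
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x_ui = score[u, purchased]    # predicted preference for a purchased item
    x_uv = score[u, viewed]       # ... for a viewed-but-not-purchased item
    x_uj = score[u, unobserved]   # ... for an unobserved (sampled) item
    # purchased should rank above viewed, and viewed above unobserved
    return -np.log(sigmoid(x_ui - x_uv)) - np.log(sigmoid(x_uv - x_uj))

# toy score matrix: 2 users x 4 items (hypothetical values)
scores = np.array([[2.0, 1.0, 0.0, -1.0],
                   [0.5, 1.5, -0.5, 0.0]])
loss = bpr_view_loss(scores, u=0, purchased=0, viewed=1, unobserved=3)
```

Because both pairwise margins are positive for user 0 in this toy matrix, the loss is small; swapping the items would make it grow.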
Addressing Class-Imbalance Problem in Personalized Ranking
Pairwise ranking models have been widely used to address recommendation
problems. The basic idea is to learn the rank of users' preferred items by
separating items into \emph{positive} samples if user-item interactions exist,
and \emph{negative} samples otherwise. Due to the limited number of observable
interactions, pairwise ranking models face serious \emph{class-imbalance}
issues. Our theoretical analysis shows that current sampling-based methods
cause a vertex-level imbalance problem, which drives the norm of the learned
item embeddings toward infinity after a certain number of training iterations,
consequently producing vanishing gradients and degrading the model's inference
results. We thus propose an efficient \emph{\underline{Vi}tal
\underline{N}egative \underline{S}ampler} (VINS) to alleviate the
class-imbalance issue for pairwise ranking models, in particular for deep
learning models optimized by gradient methods. The core of VINS is a biased
sampler with a rejection probability that tends to accept negative candidates
whose degree weight is larger than that of the given positive item. Evaluation
results on
several real datasets demonstrate that the proposed sampling method speeds up
the training procedure by 30\% to 50\% for ranking models ranging from shallow to
deep, while maintaining and even improving the quality of ranking results in
top-N item recommendation.
Comment: Preprint
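The degree-based rejection step described above can be sketched as follows. The function name, the exact acceptance rule, and the constants are assumptions for illustration, not VINS's actual formulation.

```python
import random

def sample_negative(pos_item, degree, n_items, rng, max_tries=32):
    """Rejection-sampler sketch: prefer negative candidates whose degree
    (popularity) exceeds the positive item's.

    The acceptance probability below is an illustrative assumption,
    not the paper's exact rejection rule.
    """
    j = rng.randrange(n_items)
    for _ in range(max_tries):
        j = rng.randrange(n_items)
        if j == pos_item:
            continue  # skip the positive item itself
        # accept with probability that grows with the candidate's degree
        # relative to the positive item's degree (capped at 1)
        accept = min(1.0, degree[j] / max(degree[pos_item], 1))
        if rng.random() < accept:
            return j
    return j  # fall back to the last candidate after repeated rejections

rng = random.Random(0)
degree = [50, 5, 30, 1, 80]  # toy per-item interaction counts
neg = sample_negative(pos_item=1, degree=degree, n_items=5, rng=rng)
```

With these toy counts, high-degree items (0, 2, 4) are accepted almost immediately, while the rare item 3 is accepted only one try in five, which is the "prefer harder, more popular negatives" behavior the abstract describes.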
CoSam: An Efficient Collaborative Adaptive Sampler for Recommendation
Sampling strategies have been widely applied in many recommendation systems
to accelerate model learning from implicit feedback data. A typical strategy is
to draw negative instances from a uniform distribution, which, however, can
severely affect the model's convergence, stability, and even recommendation
accuracy. A promising solution to this problem is to over-sample the
``difficult'' (a.k.a. informative) instances that contribute more to training.
But this increases the risk of biasing the model and leading to non-optimal
results. Moreover, existing samplers are either heuristic, requiring domain
knowledge and often failing to capture truly ``difficult'' instances, or rely
on a sampler model that suffers from low efficiency.
To deal with these problems, we propose an efficient and effective
collaborative sampling method CoSam, which consists of: (1) a collaborative
sampler model that explicitly leverages user-item interaction information in
sampling probability and exhibits good properties of normalization, adaptation,
interaction information awareness, and sampling efficiency; and (2) an
integrated sampler-recommender framework, leveraging the sampler model in
prediction to offset the bias caused by uneven sampling. Correspondingly, we
derive a fast reinforced training algorithm of our framework to boost the
sampler performance and sampler-recommender collaboration. Extensive
experiments on four real-world datasets demonstrate the superiority of the
proposed collaborative sampler model and integrated sampler-recommender
framework.
Comment: 21 pages, submitting to TOI
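The bias-offset idea in point (2), using the sampler's probabilities to correct for uneven sampling, can be sketched generically with importance weights: each sampled negative is reweighted by the ratio of the uniform probability to the sampler's probability. This is a standard importance-weighting correction, not CoSam's exact estimator, and all names here are illustrative.

```python
import numpy as np

def debiased_negative_loss(neg_scores, q_neg, n_items):
    """Offset the bias of a non-uniform negative sampler.

    Each sampled negative is reweighted by (uniform prob) / (sampler prob),
    so the reweighted loss estimates the uniform-sampling loss in
    expectation. A generic sketch, not CoSam's actual formulation.
    """
    w = (1.0 / n_items) / np.asarray(q_neg)      # importance weights
    # softplus surrogate loss on the sampled negatives' scores, reweighted
    return float(np.mean(w * np.log1p(np.exp(neg_scores))))

# under a uniform sampler all weights equal 1, so the correction is a no-op
uniform_q = np.full(3, 1 / 100)
loss_u = debiased_negative_loss(np.array([0.5, -0.2, 1.0]), uniform_q,
                                n_items=100)
```

A sampler that over-samples "difficult" negatives (large `q_neg` for high-scoring items) would down-weight exactly those terms, counteracting the over-sampling instead of letting it bias the fit.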