Sequential Recommendation with Self-Attentive Multi-Adversarial Network
Recently, deep learning has made significant progress in the task of
sequential recommendation. Existing neural sequential recommenders typically
adopt a generative approach trained with Maximum Likelihood Estimation (MLE).
When context information (referred to as factors) is involved, it is difficult
to analyze when and how each individual factor affects the final
recommendation performance. To address this, we take a new perspective and introduce
adversarial learning to sequential recommendation. In this paper, we present a
Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the
effect of context information on sequential recommendation. Specifically, our
proposed MFGAN has two kinds of modules: a Transformer-based generator taking
user behavior sequences as input to recommend the possible next items, and
multiple factor-specific discriminators to evaluate the generated sub-sequence
from the perspectives of different factors. To learn the parameters, we adopt
the classic policy gradient method, and utilize the reward signals from the
discriminators to guide the learning of the generator. Our framework is
flexible to incorporate multiple kinds of factor information, and is able to
trace how each factor contributes to the recommendation decision over time.
Extensive experiments conducted on three real-world datasets demonstrate the
superiority of our proposed model over the state-of-the-art methods, in terms
of effectiveness and interpretability.
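As a concrete illustration of the policy-gradient training described above, a
minimal PyTorch sketch follows. The GRU-based discriminator, the averaged
reward across factor discriminators, and all names here are our own
illustrative assumptions, not the authors' exact MFGAN implementation; the
generator is assumed to return next-item logits.

import torch
import torch.nn as nn

class FactorDiscriminator(nn.Module):
    # Hypothetical factor-specific discriminator: scores a generated
    # item sub-sequence from one factor's perspective via a simple GRU.
    def __init__(self, n_items, d_model):
        super().__init__()
        self.embed = nn.Embedding(n_items, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, 1)

    def forward(self, item_seq):                    # (batch, seq_len)
        h, _ = self.rnn(self.embed(item_seq))
        return torch.sigmoid(self.out(h[:, -1]))    # plausibility score

def policy_gradient_step(generator, discriminators, item_seq, optimizer):
    # One REINFORCE update: the generator samples next items, and the
    # averaged discriminator scores serve as the reward signal.
    logits = generator(item_seq)                    # assumed (batch, n_items)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                         # sampled next items
    new_seq = torch.cat([item_seq, actions.unsqueeze(1)], dim=1)
    with torch.no_grad():
        reward = torch.stack([d(new_seq).squeeze(-1)
                              for d in discriminators]).mean(dim=0)
    loss = -(dist.log_prob(actions) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because the reward is computed per discriminator before averaging, each
factor's individual score remains available, which is consistent with the
per-factor tracing described in the abstract.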
Recursive Attentive Methods with Reused Item Representations for Sequential Recommendation
Sequential recommendation aims to recommend the next item of users' interest
based on their historical interactions. Recently, the self-attention mechanism
has been adapted for sequential recommendation, and demonstrated
state-of-the-art performance. However, in this manuscript, we show that
self-attention-based sequential recommendation methods can suffer from a
localization-deficit issue. As a consequence, in these methods, the item
representations may quickly diverge from their original representations over
the first few blocks, and thus impair the learning in the following blocks. To
mitigate this issue, in this manuscript, we develop a recursive attentive
method with reused item representations (RAM) for sequential recommendation. We
compare RAM with five state-of-the-art baseline methods on six public benchmark
datasets. Our experimental results demonstrate that RAM significantly
outperforms the baseline methods on benchmark datasets, with an improvement of
as much as 11.3%. Our stability analysis shows that RAM could enable deeper and
wider models for better performance. Our run-time performance comparison
indicates that RAM could also be more efficient on the benchmark datasets.
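To make the reused-representation idea concrete, here is a hedged PyTorch
sketch: a single shared attention block is applied recursively, and the
original item embeddings are re-injected as keys and values at every step, so
the evolving representations stay anchored to them. This is our own reading
of the abstract, not the authors' exact RAM architecture.

import torch
import torch.nn as nn

class RecursiveAttentionSketch(nn.Module):
    def __init__(self, d_model, n_heads, n_steps):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.n_steps = n_steps

    def forward(self, item_emb, attn_mask=None):    # (batch, seq, d_model)
        h = item_emb
        for _ in range(self.n_steps):               # recurse over one block
            # Queries evolve, but keys/values stay the *original*
            # item embeddings, limiting drift across steps.
            upd, _ = self.attn(h, item_emb, item_emb, attn_mask=attn_mask)
            h = self.norm(h + upd)
        return h

Reusing one parameterized block also keeps the parameter count constant as
depth grows, which is one plausible reason deeper and wider variants could
remain stable.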
Modeling Sequences as Star Graphs to Address Over-smoothing in Self-attentive Sequential Recommendation
Self-attention (SA) mechanisms have been widely used in developing sequential
recommendation (SR) methods, and demonstrated state-of-the-art performance.
However, in this paper, we show that self-attentive SR methods substantially
suffer from the over-smoothing issue, in which item embeddings within a
sequence become increasingly similar across attention blocks. As widely
demonstrated in
the literature, this issue could lead to a loss of information in individual
items, and significantly degrade models' scalability and performance. To
address the over-smoothing issue, in this paper, we view the items within a
sequence as constituting a star graph and develop a method, denoted as MSSG,
for SR. Different from existing self-attentive methods, MSSG introduces an
additional internal node to specifically capture the global information within
the sequence, and does not require information propagation among items. This
design fundamentally addresses the over-smoothing issue and gives MSSG linear
time complexity with respect to the sequence length. We compare MSSG
with ten state-of-the-art baseline methods on six public benchmark datasets.
Our experimental results demonstrate that MSSG significantly outperforms the
baseline methods, with an improvement of as much as 10.10%. Our analysis shows
the superior scalability of MSSG over the state-of-the-art self-attentive
methods. Our complexity analysis and run-time performance comparison together
show that MSSG is both theoretically and practically more efficient than
self-attentive methods. Our analysis of the attention weights learned in
SA-based methods indicates that on sparse recommendation data, modeling
dependencies among all item pairs using the SA mechanism yields limited
information gain, and thus might not benefit the recommendation performance.
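The linear-complexity claim follows directly from the star topology: each
item exchanges information only with the internal node, never with other
items. Below is a hedged sketch of one such layer, with illustrative names
and a deliberately simplified aggregation, not the authors' exact MSSG design.

import torch
import torch.nn as nn

class StarGraphLayerSketch(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.to_star = nn.Linear(d_model, 1)           # item -> star logits
        self.mix = nn.Linear(2 * d_model, d_model)

    def forward(self, items):                          # (batch, seq, d_model)
        # Internal (star) node: attention-weighted summary of all items.
        w = torch.softmax(self.to_star(items), dim=1)  # (batch, seq, 1)
        star = (w * items).sum(dim=1, keepdim=True)    # (batch, 1, d_model)
        # Broadcast the global node back to every item; there are no
        # item-item edges, so no pairwise smoothing can occur.
        star_b = star.expand_as(items)
        return self.mix(torch.cat([items, star_b], dim=-1))

Both passes touch each item once, giving O(n) cost per layer in the sequence
length n, versus O(n^2) for pairwise self-attention.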
Signed Distance-based Deep Memory Recommender
Personalized recommendation algorithms learn a user's preference for an item
by measuring a distance/similarity between them. However, some of the existing
recommendation models (e.g., matrix factorization) assume a linear relationship
between users and items. This approach limits the capacity of recommender
systems, since the interactions between users and items in real-world
applications are much more complex than a linear relationship. To overcome
this limitation, in this paper, we design and propose a deep learning framework
called the Signed Distance-based Deep Memory Recommender, which captures
non-linear relationships between users and items both explicitly and
implicitly, and works well in both the general recommendation task and the
shopping-basket-based recommendation task. Through an extensive empirical study
on six real-world datasets across the two recommendation tasks, our proposed
approach achieves significant improvements over ten state-of-the-art
recommendation models.
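For intuition, the core scoring idea, replacing a linear dot product with a
learned signed distance between user and item embeddings, can be sketched as
follows. The two-layer distance network and all names are illustrative
assumptions; the memory components of the actual model are omitted.

import torch
import torch.nn as nn

class SignedDistanceScorerSketch(nn.Module):
    def __init__(self, n_users, n_items, d_model):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d_model)
        self.item_emb = nn.Embedding(n_items, d_model)
        self.dist_net = nn.Sequential(                 # non-linear distance
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 1))

    def forward(self, users, items):
        # Signed element-wise difference, mapped non-linearly to a
        # distance; smaller distance means stronger preference.
        diff = self.user_emb(users) - self.item_emb(items)
        return -self.dist_net(diff).squeeze(-1)        # higher = better

Unlike a dot product, the signed difference preserves the direction of the
mismatch in each latent dimension, which is one way a network can capture
non-linear user-item relationships.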