A Transformer-based Embedding Model for Personalized Product Search
Product search is an important way for people to browse and purchase items on
E-commerce platforms. While customers tend to make choices based on their
personal tastes and preferences, analysis of commercial product search logs has
shown that personalization does not always improve product search quality. Most
existing product search techniques, however, conduct undifferentiated
personalization across search sessions. They either use a fixed coefficient to
control the influence of personalization or let personalization take effect all
the time with an attention mechanism. The only notable exception is the
recently proposed zero-attention model (ZAM) that can adaptively adjust the
effect of personalization by allowing the query to attend to a zero vector.
Nonetheless, in ZAM, personalization can be at most as influential as
the query, and item representations are static across the collection
regardless of which items co-occur in the user's historical purchases. Aware
of these limitations, we propose a transformer-based embedding model (TEM) for
personalized product search, which could dynamically control the influence of
personalization by encoding the sequence of query and user's purchase history
with a transformer architecture. Personalization could have a dominant impact
when necessary and interactions between items can be taken into consideration
when computing attention weights. Experimental results show that TEM
outperforms state-of-the-art personalized product retrieval models
significantly.
Comment: In the proceedings of SIGIR 202
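The core idea described above can be sketched as follows. This is a hypothetical, minimal illustration (random embeddings, no learned projections, toy dimensions), not the authors' implementation: the query and the user's purchase history are encoded as one sequence with self-attention, so the attention mass the query token places on history items versus itself acts as a dynamically computed personalization weight.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                              # embedding size (assumption)
query = rng.normal(size=(1, d))    # query embedding
history = rng.normal(size=(3, d))  # 3 previously purchased items

seq = np.vstack([query, history])  # input sequence: [query; history]

def self_attention(x):
    """Single-head scaled dot-product self-attention (no learned projections)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights, weights @ x

weights, encoded = self_attention(seq)

# Row 0 is the query token; weights[0, 1:] show how much each history
# item influences the search representation for this query.
personalization_mass = weights[0, 1:].sum()
print(f"personalization weight: {personalization_mass:.3f}")
```

Because the weights are computed per query from the full sequence, personalization can dominate for some queries and nearly vanish for others, and history items attend to one another, capturing item-item interactions.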
Towards Knowledge-Based Personalized Product Description Generation in E-commerce
Quality product descriptions are critical for providing competitive customer
experience in an e-commerce platform. An accurate and attractive description
not only helps customers make an informed decision but also improves the
likelihood of purchase. However, crafting a successful product description is
tedious and highly time-consuming. Due to its importance, automating the
product description generation has attracted considerable interest from both
research and industrial communities. Existing methods mainly use templates or
statistical methods, and their performance could be rather limited. In this
paper, we explore a new way to generate the personalized product description by
combining the power of neural networks and knowledge base. Specifically, we
propose a KnOwledge Based pErsonalized (or KOBE) product description generation
model in the context of e-commerce. In KOBE, we extend the
encoder-decoder framework with the Transformer, a sequence modeling
architecture based on self-attention. To make the description both informative and
personalized, KOBE considers a variety of important factors during text
generation, including product aspects, user categories, and knowledge-base
information. Experiments on real-world datasets demonstrate that the proposed method
outperforms the baselines on various metrics. KOBE achieves an
improvement of 9.7% over the state of the art in terms of BLEU. We also
present several case studies as anecdotal evidence of the effectiveness of the
proposed approach. The framework has been deployed in Taobao, the largest
online e-commerce platform in China.
Comment: KDD 2019 Camera-ready. Website:
https://sites.google.com/view/kobe201
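The conditioning step described above can be sketched in miniature. All names, dimensions, and embeddings here are illustrative assumptions, not the paper's code: attribute embeddings for a product aspect and a user category are combined with the title token embeddings, giving the decoder an attribute-aware input so the generated description varies in content and style.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_tokens = 16, 5
token_emb = rng.normal(size=(n_tokens, d))  # product-title token embeddings

# Assumed attribute vocabularies: product aspect and user category.
aspect_emb = {"appearance": rng.normal(size=d), "function": rng.normal(size=d)}
user_cat_emb = {"fashion": rng.normal(size=d), "tech": rng.normal(size=d)}

def condition(tokens, aspect, user_cat):
    """Broadcast-add the attribute embeddings to every token position."""
    return tokens + aspect_emb[aspect] + user_cat_emb[user_cat]

decoder_input = condition(token_emb, "appearance", "fashion")
print(decoder_input.shape)  # (5, 16): same sequence length, now attribute-aware
```

Swapping "appearance" for "function" (or "fashion" for "tech") shifts every position of the conditioned input, which is what lets a single generation model produce differently targeted descriptions for the same product.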
Sequential Recommendation with Self-Attentive Multi-Adversarial Network
Recently, deep learning has made significant progress in the task of
sequential recommendation. Existing neural sequential recommenders typically
adopt a generative approach trained with Maximum Likelihood Estimation (MLE).
When context information (called a factor) is involved, it is difficult
to analyze when and how each individual factor affects the final
recommendation performance. To address this, we take a new perspective and introduce
adversarial learning to sequential recommendation. In this paper, we present a
Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the
effect of context information on sequential recommendation. Specifically, our
proposed MFGAN has two kinds of modules: a Transformer-based generator taking
user behavior sequences as input to recommend the possible next items, and
multiple factor-specific discriminators to evaluate the generated sub-sequence
from the perspectives of different factors. To learn the parameters, we adopt
the classic policy gradient method, and utilize the reward signal of
discriminators for guiding the learning of the generator. Our framework is
flexible to incorporate multiple kinds of factor information, and is able to
trace how each factor contributes to the recommendation decision over time.
Extensive experiments conducted on three real-world datasets demonstrate the
superiority of our proposed model over the state-of-the-art methods, in terms
of effectiveness and interpretability.
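The policy-gradient loop described above can be illustrated with a toy example. The generator, discriminators, items, and learning rate below are all stand-ins (not the authors' code): several factor-specific discriminators each score a sampled next item, and their averaged reward weights a REINFORCE update of the generator's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items = 4
logits = np.zeros(n_items)  # toy generator parameters over next items

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in factor-specific discriminators: each maps an item to a reward
# from that factor's perspective (e.g. category fit, price fit).
discriminators = [
    lambda i: 1.0 if i == 2 else 0.1,
    lambda i: 0.8 if i in (1, 2) else 0.2,
]

lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    item = rng.choice(n_items, p=probs)            # generator samples next item
    reward = np.mean([d(item) for d in discriminators])
    grad_logp = -probs
    grad_logp[item] += 1.0                         # d log p(item) / d logits
    logits += lr * reward * grad_logp              # REINFORCE update
```

Because each discriminator's score enters the reward separately, one can also log the per-factor rewards over training to trace how each factor steers the generator, which mirrors the interpretability claim above.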