Product Knowledge Graph Embedding for E-commerce
In this paper, we propose a new product knowledge graph (PKG) embedding
approach for learning the intrinsic product relations as product knowledge for
e-commerce. We define the key entities and summarize the pivotal product
relations that are critical for general e-commerce applications including
marketing, advertisement, search ranking and recommendation. We first provide a
comprehensive comparison between PKG and ordinary knowledge graph (KG) and then
illustrate why KG embedding methods are not suitable for PKG learning. We
construct a self-attention-enhanced distributed representation learning model
for learning PKG embeddings from raw customer activity data in an end-to-end
fashion. We design an effective multi-task learning scheme to fully leverage
the multi-modal e-commerce data. The Poincaré embedding is also employed to
handle complex entity structures. We use a real-world dataset from
grocery.walmart.com to evaluate performance on knowledge completion,
search ranking, and recommendation. The proposed approach compares favourably
to baselines in knowledge completion and downstream tasks.
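The Poincaré embedding mentioned above places entities inside the unit ball, where distances grow rapidly toward the boundary, which suits hierarchical entity structures. A minimal sketch of the Poincaré-ball distance (illustrative only, not the paper's implementation):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit ball (Poincare ball model)."""
    sq_dist = np.sum((u - v) ** 2)
    conf_u = 1.0 - np.sum(u ** 2)  # conformal factors shrink near the boundary
    conf_v = 1.0 - np.sum(v ** 2)
    return float(np.arccosh(1.0 + 2.0 * sq_dist / (conf_u * conf_v + eps)))

# General entities can sit near the center and specific ones near the rim:
origin = np.zeros(2)
mid = np.array([0.5, 0.0])
rim = np.array([0.9, 0.0])
```

Here `poincare_distance(origin, rim)` is much larger than `poincare_distance(origin, mid)`, even though the Euclidean gap is similar, which is what lets a low-dimensional ball encode tree-like structure.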
Knowledge-aware Complementary Product Representation Learning
Learning product representations that reflect complementary relationships
plays a central role in e-commerce recommender systems. In the absence of the
product relationship graph that existing methods rely on, there is a need to
detect the complementary relationships directly from noisy and sparse customer
purchase activities. Furthermore, unlike simple relationships such as
similarity, complementariness is asymmetric and non-transitive. Standard
representation learning relies on a single set of embeddings, which is
problematic for modelling such properties of complementariness. We propose
using knowledge-aware learning with dual product embedding to solve the above
challenges. We encode contextual knowledge into product representation by
multi-task learning, to alleviate the sparsity issue. By explicitly modelling
with user bias terms, we separate the noise of customer-specific preferences
from the complementariness. Furthermore, we adopt the dual embedding framework
to capture the intrinsic properties of complementariness and provide geometric
interpretation motivated by the classic separating hyperplane theory. Finally,
we propose a Bayesian network structure that unifies all the components and
subsumes several popular models as special cases. The proposed method
compares favourably to state-of-the-art methods in downstream classification and
recommendation tasks. We also develop an implementation that scales efficiently
to a dataset with millions of items and customers.
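The dual-embedding idea above, where each item gets one vector for its role as the anchor purchase and a separate vector for its role as a complement, can be sketched as follows (names are hypothetical; the paper's full model additionally includes user bias terms and multi-task training):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 1000, 32

# Two independent embedding tables, one per role.
anchor_emb = rng.normal(scale=0.1, size=(n_items, dim))      # item as the query purchase
complement_emb = rng.normal(scale=0.1, size=(n_items, dim))  # item as a candidate complement

def comp_score(i, j):
    """Score that item j complements item i. Because the two tables are
    independent, comp_score(i, j) != comp_score(j, i) in general, which
    captures the asymmetry of complementariness (batteries complement a
    flashlight more than the reverse)."""
    return float(anchor_emb[i] @ complement_emb[j])
```

A single shared embedding table would force the score matrix to be symmetric, which is exactly the limitation the abstract points out.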
Learning over Knowledge-Base Embeddings for Recommendation
State-of-the-art recommendation algorithms -- especially the collaborative
filtering (CF) based approaches with shallow or deep models -- usually work
with various unstructured information sources for recommendation, such as
textual reviews, visual images, and implicit or explicit feedback.
Though structured knowledge bases were considered in content-based approaches,
they have been largely neglected recently due to the availability of vast
amounts of data and the learning power of many complex models.
However, structured knowledge bases exhibit unique advantages in personalized
recommendation systems. When the explicit knowledge about users and items is
considered for recommendation, the system could provide highly customized
recommendations based on users' historical behaviors. A great challenge for
using knowledge bases for recommendation is how to integrate large-scale
structured and unstructured data, while taking advantage of collaborative
filtering for highly accurate performance. Recent advances in knowledge
base embedding shed light on this problem, making it possible to learn
user and item representations while preserving the structure of their
relationship with external knowledge. In this work, we propose to reason over
knowledge base embeddings for personalized recommendation. Specifically, we
propose a knowledge base representation learning approach to embed
heterogeneous entities for recommendation. Experimental results on a
real-world dataset verify the superior performance of our approach compared
with state-of-the-art baselines.
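A common way to reason over knowledge base embeddings, in the spirit of TransE, is to model each relation as a vector translation and rank items by how close user + relation lands to each item's embedding. A hedged sketch (the names and scoring choice are illustrative, not this paper's exact model):

```python
import numpy as np

def score(head, relation, tail):
    """TransE-style plausibility: higher (closer to 0) when head + relation ~ tail."""
    return -float(np.linalg.norm(head + relation - tail))

rng = np.random.default_rng(42)
dim = 16
user = rng.normal(size=dim)
buys = rng.normal(size=dim)          # hypothetical "purchase" relation vector
items = rng.normal(size=(50, dim))   # candidate item embeddings

# Recommend the item whose embedding is nearest to user + buys.
best = int(np.argmax([score(user, buys, item) for item in items]))
```

The appeal for recommendation is that users, items, and external knowledge entities all live in one space, so the same translation scoring serves both knowledge completion and item ranking.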
Multi-modal Embedding Fusion-based Recommender
Recommendation systems have recently become widespread, with primary use
cases in online interaction systems and a particular focus on e-commerce
platforms. We have developed a machine learning-based recommendation platform,
which can be easily applied to almost any items and/or actions domain. Contrary
to existing recommendation systems, our platform supports multiple types of
interaction data with multiple modalities of metadata natively. This is
achieved through multi-modal fusion of various data representations. We
deployed the platform into multiple e-commerce stores of different kinds, e.g.
food and beverages, shoes, fashion items, telecom operators. Here, we present
our system, its flexibility and performance. We also show benchmark results on
open datasets that significantly outperform state-of-the-art prior work.
Comment: 7 pages, 8 figures
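A simple form of the multi-modal fusion described above is to normalize each modality's embedding and concatenate them with per-modality weights; this is an illustrative baseline, not the deployed system's actual method:

```python
import numpy as np

def fuse(modalities, weights=None):
    """Fuse embeddings from different modalities (e.g. interactions, text,
    image) by L2-normalizing each one and concatenating weighted copies."""
    if weights is None:
        weights = [1.0] * len(modalities)
    parts = []
    for w, vec in zip(weights, modalities):
        vec = np.asarray(vec, dtype=float)
        parts.append(w * vec / (np.linalg.norm(vec) + 1e-9))
    return np.concatenate(parts)

# Example: a 4-d interaction embedding plus a 3-d text embedding -> 7-d vector.
fused = fuse([np.ones(4), np.array([3.0, 0.0, 0.0])], weights=[1.0, 0.5])
```

Normalizing first keeps one modality with large raw magnitudes from dominating the fused representation, and the weights let the system down-weight sparse or noisy modalities per deployment.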
TransNFCM: Translation-Based Neural Fashion Compatibility Modeling
Identifying mix-and-match relationships between fashion items is a pressing
task in fashion e-commerce recommender systems, as it can significantly enhance
user experience and satisfaction. However, due to the challenges of inferring
the rich yet complicated set of compatibility patterns in a large e-commerce
corpus of fashion items, this task is still underexplored. Inspired by the
recent advances in multi-relational knowledge representation learning and deep
neural networks, this paper proposes a novel Translation-based Neural Fashion
Compatibility Modeling (TransNFCM) framework, which jointly optimizes fashion
item embeddings and category-specific complementary relations in a unified
space in an end-to-end manner. TransNFCM places items in a unified
embedding space where a category-specific relation (category-comp-category) is
modeled as a vector translation operating on the embeddings of compatible items
from the corresponding categories. In this way, we not only capture the
specific notion of compatibility conditioned on a specific pair of
complementary categories, but also preserve the global notion of compatibility.
We also design a deep fashion item encoder which exploits the complementary
characteristic of visual and textual features to represent the fashion
products. To the best of our knowledge, this is the first work that uses
category-specific complementary relations to model the category-aware
compatibility between items in a translation-based embedding space. Extensive
experiments demonstrate the effectiveness of TransNFCM over
state-of-the-art methods on two real-world datasets.
Comment: Accepted at the AAAI 2019 conference
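The category-comp-category relation described above can be pictured as one learned translation vector per ordered pair of complementary categories; a toy sketch (category names and scoring are hypothetical, and the real model learns these jointly with visual-textual item encoders):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# One translation vector per ordered pair of complementary categories.
relations = {("top", "bottom"): rng.normal(scale=0.1, size=dim)}

def compatibility(a, cat_a, b, cat_b):
    """Higher (closer to 0) when a + r(cat_a -> cat_b) lands near b,
    i.e. the two items are compatible under that category pair."""
    r = relations[(cat_a, cat_b)]
    return -float(np.linalg.norm(a + r - b))

shirt = rng.normal(size=dim)
matching_pants = shirt + relations[("top", "bottom")]  # compatible by construction
random_pants = rng.normal(size=dim)
```

Conditioning the translation on the category pair is what makes the notion of compatibility category-aware: the same item embedding can be translated differently when matched against shoes versus outerwear.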