Graphing else matters: exploiting aspect opinions and ratings in explainable graph-based recommendations
The success of neural network embeddings has entailed a renewed interest in
using knowledge graphs for a wide variety of machine learning and information
retrieval tasks. In particular, current recommendation methods based on graph
embeddings have shown state-of-the-art performance. These methods commonly
encode latent rating patterns and content features. Different from previous
work, in this paper, we propose to exploit embeddings extracted from graphs
that combine information from ratings and aspect-based opinions expressed in
textual reviews. We then adapt and evaluate state-of-the-art graph embedding
techniques over graphs generated from Amazon and Yelp reviews on six domains,
outperforming baseline recommenders. Our approach has the advantage of
providing explanations which leverage aspect-based opinions given by users
about recommended items. Furthermore, we demonstrate the applicability of
recommendations that use aspect opinions as explanations in a visualization
dashboard, which surfaces the most and least liked aspects of similar users
identified from the embeddings of the input graph.
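The core idea above — a single graph combining rating edges and aspect-opinion edges, from which explanations are read off — can be sketched in a few lines. The toy data, edge-type names, and the `aspect_explanation` helper below are illustrative assumptions, not the paper's actual implementation:

```python
from collections import defaultdict

# Hypothetical toy data: ratings plus aspect-level opinions mined from reviews.
ratings = [("u1", "i1", 5), ("u2", "i1", 4), ("u2", "i2", 2)]
opinions = [("u1", "i1", "battery", +1), ("u2", "i1", "battery", +1),
            ("u2", "i2", "screen", -1)]

# One graph over users, items, and aspects, with typed edges for both signals.
graph = defaultdict(set)
for user, item, _ in ratings:
    graph[user].add(("rates", item))
    graph[item].add(("rated_by", user))
for user, item, aspect, polarity in opinions:
    rel = "likes" if polarity > 0 else "dislikes"
    graph[user].add((rel, aspect))
    graph[item].add(("has_aspect", aspect))

def aspect_explanation(item):
    """Aspects liked/disliked by users of `item` — the explanation signal."""
    liked, disliked = set(), set()
    for rel, user in graph[item]:
        if rel != "rated_by":
            continue
        for r, aspect in graph[user]:
            if r == "likes":
                liked.add(aspect)
            elif r == "dislikes":
                disliked.add(aspect)
    return liked, disliked

liked, disliked = aspect_explanation("i1")
```

In the full approach these typed edges would feed a graph embedding method (the paper evaluates several state-of-the-art ones), with similar users found by embedding distance rather than direct adjacency.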
MQuinE: a cure for "Z-paradox" in knowledge graph embedding models
Knowledge graph embedding (KGE) models achieved state-of-the-art results on
many knowledge graph tasks including link prediction and information retrieval.
Despite the superior performance of KGE models in practice, we discover a
deficiency in the expressiveness of some popular existing KGE models, which we
call the Z-paradox. Motivated by its existence, we propose a new KGE model
called MQuinE that does not suffer from the Z-paradox while preserving strong
expressiveness to model various relation patterns, including
symmetric/asymmetric, inverse, 1-N/N-1/N-N, and composition relations, with
theoretical justification. Experiments on real-world knowledge bases indicate
that the Z-paradox indeed degrades the performance of existing KGE models, and
can cause a more than 20% accuracy drop on some challenging test samples. Our
experiments further demonstrate that MQuinE mitigates the negative impact of
the Z-paradox and outperforms existing KGE models by a visible margin on link
prediction tasks.
Comment: 18 pages, 1 figure
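The Z-paradox can be made concrete with a translational model such as TransE (used here only as an illustration of the deficiency, not of MQuinE's construction): if the embeddings fit the Z-shaped facts (A, r, B), (C, r, B), and (C, r, D) exactly, the unseen triple (A, r, D) is forced to score as true as well:

```python
import numpy as np

# TransE-style scoring: score(h, r, t) = ||h + r - t||, where 0 means "true".
def score(h, rel, t):
    return np.linalg.norm(h + rel - t)

r = np.array([1.0, 0.0])
A = np.array([0.0, 0.0])
B = A + r       # (A, r, B) holds exactly
C = B - r       # (C, r, B) holds exactly -> the fit forces C == A
D = C + r       # (C, r, D) holds exactly -> and hence D == B

# The unseen triple (A, r, D) also gets a perfect score: the Z-paradox.
forced = score(A, r, D)
```

The derivation is the same in symbols: h_A + r = t_B and h_C + r = t_B imply h_A = h_C, so h_C + r = t_D forces h_A + r = t_D, whether or not (A, r, D) is a true fact.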
CAFE: Coarse-to-Fine Neural Symbolic Reasoning for Explainable Recommendation
Recent research explores incorporating knowledge graphs (KG) into e-commerce
recommender systems, not only to achieve better recommendation performance, but
more importantly to generate explanations of why particular decisions are made.
This can be achieved by explicit KG reasoning, where a model starts from a user
node, sequentially determines the next step, and walks towards an item node of
potential interest to the user. However, this is challenging due to the huge
search space, unknown destination, and sparse signals over the KG, so
informative and effective guidance is needed to achieve a satisfactory
recommendation quality. To this end, we propose a CoArse-to-FinE neural
symbolic reasoning approach (CAFE). It first generates user profiles as coarse
sketches of user behaviors, which subsequently guide a path-finding process to
derive reasoning paths for recommendations as fine-grained predictions. User
profiles can capture prominent user behaviors from the history, and provide
valuable signals about which kinds of path patterns are more likely to lead to
potential items of interest for the user. To better exploit the user profiles,
an improved path-finding algorithm called Profile-guided Path Reasoning (PPR)
is also developed, which leverages an inventory of neural symbolic reasoning
modules to effectively and efficiently find a batch of paths over a large-scale
KG. We extensively experiment on four real-world benchmarks and observe
substantial gains in the recommendation performance compared with
state-of-the-art methods.
Comment: Accepted in CIKM 2020
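The coarse-to-fine idea — a user profile constraining which relations a path-finder may follow — can be sketched with a toy KG. The graph, the list-of-relations "profile", and the exhaustive walk below are simplified stand-ins for CAFE's learned neural symbolic modules and its beam-style PPR search:

```python
# Hypothetical toy KG: (head, relation) -> list of tails.
kg = {
    ("u1", "purchased"): ["i1"],
    ("i1", "belongs_to"): ["c1"],
    ("c1", "contains"): ["i2", "i3"],
}

# Coarse sketch of the user's behavior: the relation pattern their
# history suggests paths to new items are likely to follow.
profile = ["purchased", "belongs_to", "contains"]

def profile_guided_paths(start, pattern):
    """Walk the KG, expanding only along relations the profile predicts next,
    so the huge unconstrained search space is never enumerated."""
    paths = [[start]]
    for rel in pattern:
        paths = [p + [nxt]
                 for p in paths
                 for nxt in kg.get((p[-1], rel), [])]
    return paths

paths = profile_guided_paths("u1", profile)
# Each surviving path ends at a candidate item and doubles as its explanation.
```

In CAFE the next relation is scored by learned modules rather than read from a fixed list, and only the top-scoring partial paths are kept at each hop, but the role of the profile is the same: pruning the walk toward promising items.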