Neural Graph Collaborative Filtering
Learning vector representations (aka. embeddings) of users and items lies at
the core of modern recommender systems. Ranging from early matrix factorization
to recently emerged deep learning based methods, existing efforts typically
obtain a user's (or an item's) embedding by mapping from pre-existing features
that describe the user (or the item), such as ID and attributes. We argue that
an inherent drawback of such methods is that the collaborative signal, which
is latent in user-item interactions, is not encoded in the embedding process.
As such, the resultant embeddings may not be sufficient to capture the
collaborative filtering effect.
In this work, we propose to integrate the user-item interactions -- more
specifically the bipartite graph structure -- into the embedding process. We
develop a new recommendation framework Neural Graph Collaborative Filtering
(NGCF), which exploits the user-item graph structure by propagating embeddings
on it. This leads to the expressive modeling of high-order connectivity in the
user-item graph, effectively injecting the collaborative signal into the
embedding process in an explicit manner. We conduct extensive experiments on
three public benchmarks, demonstrating significant improvements over several
state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further
analysis verifies the importance of embedding propagation for learning better
user and item representations, justifying the rationality and effectiveness of
NGCF. Codes are available at
https://github.com/xiangwang1223/neural_graph_collaborative_filtering.
Comment: SIGIR 2019; the latest version of the NGCF paper, which is distinct
from the version published in the ACM Digital Library
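The embedding-propagation idea described in the abstract can be sketched in a few lines of numpy. This is a minimal single-layer illustration, not the authors' released implementation: the toy interaction matrix, weight shapes, and the function name `propagate_users` are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bipartite interaction matrix: 3 users x 4 items (1 = observed interaction).
R = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)

d = 8                                     # embedding size
E_u = rng.normal(size=(3, d))             # user embeddings
E_i = rng.normal(size=(4, d))             # item embeddings
W1 = rng.normal(scale=0.1, size=(d, d))   # transform for the plain message
W2 = rng.normal(scale=0.1, size=(d, d))   # transform for the interaction term

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def propagate_users(R, E_u, E_i):
    """One NGCF-style embedding-propagation step on the user side (sketch)."""
    deg_u = R.sum(1, keepdims=True)                  # |N(u)|
    deg_i = R.sum(0, keepdims=True)                  # |N(i)|
    norm = 1.0 / np.sqrt(deg_u * deg_i + 1e-12)      # symmetric normalization
    out = np.zeros_like(E_u)
    for u in range(R.shape[0]):
        for i in np.nonzero(R[u])[0]:
            # Message from item i: a transformed embedding plus an
            # element-wise interaction term encoding the user-item affinity.
            msg = E_i[i] @ W1 + (E_i[i] * E_u[u]) @ W2
            out[u] += norm[u, i] * msg
    return leaky_relu(out + E_u @ W1)                # self-connection, then activate

E_u1 = propagate_users(R, E_u, E_i)
print(E_u1.shape)  # (3, 8)
```

Stacking several such steps lets a user's embedding absorb signals from multi-hop (high-order) neighbors in the bipartite graph.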
KGAT: Knowledge Graph Attention Network for Recommendation
To provide more accurate, diverse, and explainable recommendation, it is
essential to go beyond modeling user-item interactions and take side
information into account. Traditional methods like factorization machine (FM)
cast it as a supervised learning problem, treating each interaction as an
independent instance with side information encoded. Because they overlook the
relations among instances or items (e.g., the director of a movie is also an
actor of another movie), these methods are insufficient to distill the
collaborative signal from the collective behaviors of users. In this work, we
investigate the utility of knowledge graph (KG), which breaks down the
independent interaction assumption by linking items with their attributes. We
argue that in such a hybrid structure of KG and user-item graph, high-order
relations --- which connect two items with one or multiple linked attributes
--- are an essential factor for successful recommendation. We propose a new
method named Knowledge Graph Attention Network (KGAT) which explicitly models
the high-order connectivities in KG in an end-to-end fashion. It recursively
propagates the embeddings from a node's neighbors (which can be users, items,
or attributes) to refine the node's embedding, and employs an attention
mechanism to discriminate the importance of the neighbors. Our KGAT is
conceptually advantageous over existing KG-based recommendation methods, which
either exploit high-order relations by extracting paths or model them
implicitly with regularization. Empirical results on three public benchmarks show
that KGAT significantly outperforms state-of-the-art methods like Neural FM and
RippleNet. Further studies verify the efficacy of embedding propagation for
high-order relation modeling and the interpretability benefits brought by the
attention mechanism.
Comment: KDD 2019 research track
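The recursive attentive propagation over KG triples can be illustrated on a toy graph. The sketch below is a simplified stand-in for the paper's TransR-based attention, not the released implementation; the triples, scoring function, and aggregator are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy KG: (head, relation, tail) triples over 5 nodes (users/items/attributes).
triples = [(0, 0, 3), (0, 1, 4), (1, 0, 3), (2, 1, 4)]
n_ent, n_rel, d = 5, 2, 6
E = rng.normal(size=(n_ent, d))                  # entity embeddings
W_r = rng.normal(scale=0.3, size=(n_rel, d, d))  # relation-specific projections
r_emb = rng.normal(size=(n_rel, d))              # relation embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kgat_step(E):
    """One attentive propagation step: each node aggregates its neighbors,
    weighted by a TransR-style attention score (simplified sketch)."""
    out = E.copy()
    for h in range(n_ent):
        nbrs = [(r, t) for (hh, r, t) in triples if hh == h]
        if not nbrs:
            continue
        scores = np.array([(E[t] @ W_r[r]) @ np.tanh(E[h] @ W_r[r] + r_emb[r])
                           for r, t in nbrs])
        att = softmax(scores)                    # importance of each neighbor
        agg = sum(a * E[t] for a, (r, t) in zip(att, nbrs))
        out[h] = np.tanh(E[h] + agg)             # refine the node's embedding
    return out

E1 = kgat_step(E)
print(E1.shape)  # (5, 6)
```

Applying `kgat_step` repeatedly propagates information along multi-hop relation paths, which is how high-order connectivities enter the embeddings.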
Tensor Spectral Clustering for Partitioning Higher-order Network Structures
Spectral graph theory-based methods represent an important class of tools for
studying the structure of networks. Spectral methods are based on a first-order
Markov chain derived from a random walk on the graph and thus they cannot take
advantage of important higher-order network substructures such as triangles,
cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering
(TSC) algorithm that allows for modeling higher-order network structures in a
graph partitioning framework. Our TSC algorithm allows the user to specify
which higher-order network structures (cycles, feed-forward loops, etc.) should
be preserved by the network clustering. Higher-order network structures of
interest are represented using a tensor, which we then partition by developing
a multilinear spectral method. Our framework can be applied to discovering
layered flows in networks as well as graph anomaly detection, which we
illustrate on synthetic networks. In directed networks, a higher-order
structure of particular interest is the directed 3-cycle, which captures
feedback loops in networks. We demonstrate that our TSC algorithm produces
large partitions that cut fewer directed 3-cycles than standard spectral
clustering algorithms.
Comment: SDM 2015
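The benefit of preserving directed 3-cycles can be seen on a toy graph. The sketch below builds a motif-weighted adjacency (counting directed 3-cycles through each edge) and inspects its spectrum; it is a numpy illustration of motif-aware spectral partitioning, not the tensor-based TSC algorithm itself, and the example graph is invented.

```python
import numpy as np

# Two directed 3-cycles, (0,1,2) and (3,4,5), bridged by the edge 2 -> 3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]:
    A[i, j] = 1.0

# M[i, j] = number of directed 3-cycles through edge i -> j:
# a cycle i -> j -> k -> i exists for each k with A[j, k] = A[k, i] = 1.
M = A * (A @ A).T
W = M + M.T                       # symmetric motif adjacency

# The bridge edge 2 -> 3 lies on no 3-cycle, so cutting it costs no motifs.
deg = W.sum(1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
L = np.eye(6) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized motif Laplacian
vals = np.linalg.eigvalsh(L)
# Two (near-)zero eigenvalues: the motif graph splits into {0,1,2} and {3,4,5}.
```

A standard edge-based spectral cut sees the bridge as one edge among seven; the motif-weighted spectrum exposes that the cut destroys no directed 3-cycles at all.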
Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach
Graph Convolutional Networks (GCNs) are state-of-the-art graph based
representation learning models by iteratively stacking multiple layers of
convolution aggregation operations and non-linear activation operations.
Recently, in Collaborative Filtering (CF) based Recommender Systems (RS), by
treating the user-item interaction behavior as a bipartite graph, some
researchers model higher-layer collaborative signals with GCNs. These GCN based
recommender models show superior performance compared to traditional works.
However, these models suffer from training difficulty with non-linear
activations on large user-item graphs. Moreover, most GCN based models cannot
model deeper layers due to the over-smoothing effect of the graph convolution
operation. In this paper, we revisit GCN based CF models from two
aspects. First, we empirically show that removing non-linearities would enhance
recommendation performance, which is consistent with the theories in simple
graph convolutional networks. Second, we propose a residual network structure
that is specifically designed for CF with user-item interaction modeling, which
alleviates the over-smoothing problem of the graph convolution aggregation
operation with sparse user-item interaction data. The proposed model is linear,
easy to train, scales to large datasets, and yields better efficiency and
effectiveness on two real datasets. We publish the source code at
https://github.com/newlei/LRGCCF.
Comment: The updated version is published in AAAI 2020
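Linear propagation with a layer-wise residual combination reduces to a few matrix products. A minimal sketch under assumed toy shapes; the released LR-GCCF code differs in details such as the exact residual form and the training loss.

```python
import numpy as np

R = np.array([[1, 0, 1],
              [0, 1, 1]], dtype=float)   # 2 users x 3 items
n_u, n_i = R.shape

# Adjacency of the full user-item bipartite graph, with self-loops.
A = np.zeros((n_u + n_i, n_u + n_i))
A[:n_u, n_u:] = R
A[n_u:, :n_u] = R.T
A_hat = A + np.eye(n_u + n_i)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
S = D_inv_sqrt @ A_hat @ D_inv_sqrt      # normalized propagation matrix

E0 = np.random.default_rng(2).normal(size=(n_u + n_i, 4))
layers = [E0]
for _ in range(3):
    layers.append(S @ layers[-1])        # purely linear: no activation between layers
# Residual-style combination: keep the embeddings from every depth,
# so deep smoothing cannot wash out the shallow-layer signal.
E_final = np.concatenate(layers, axis=1)
print(E_final.shape)  # (5, 16)
```

Because each layer is just a multiplication by `S`, there are no per-layer non-linearities to train through, which is what makes the model cheap to scale.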
Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation
Session-based recommendation (SBR) focuses on next-item prediction at a
certain time point. As user profiles are generally not available in this
scenario, capturing the user intent lying in the item transitions plays a
pivotal role. Recent graph neural networks (GNNs) based SBR methods regard the
item transitions as pairwise relations, which neglect the complex high-order
information among items. Hypergraphs provide a natural way to capture
beyond-pairwise relations, but their potential for SBR has remained unexplored.
In this paper, we fill this gap by modeling session-based data as a hypergraph
and then propose a hypergraph convolutional network to improve SBR. Moreover,
to enhance hypergraph modeling, we devise another graph convolutional network
which is based on the line graph of the hypergraph and then integrate
self-supervised learning into the training of the networks by maximizing mutual
information between the session representations learned via the two networks,
serving as an auxiliary task to improve the recommendation task. Since both
networks are based on the hypergraph and can be seen as two channels of
hypergraph modeling, we name our model DHCN (Dual Channel
Hypergraph Convolutional Networks). Extensive experiments on three benchmark
datasets demonstrate the superiority of our model over the SOTA methods, and
the results validate the effectiveness of hypergraph modeling and
self-supervised task. The implementation of our model is available at
https://github.com/xiaxin1998/DHCN
Comment: 9 pages, 4 figures, accepted by AAAI'21
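A single hypergraph convolution over session data is compact in numpy. Below, sessions are hyperedges in an incidence matrix; this is a weight-free sketch of one channel's propagation step (the line-graph channel and the self-supervised objective are omitted), and the toy sessions are invented for illustration.

```python
import numpy as np

# 4 items, 2 sessions as hyperedges: session 0 = {0, 1, 2}, session 1 = {1, 3}.
H = np.array([[1, 0],
              [1, 1],
              [1, 0],
              [0, 1]], dtype=float)      # incidence matrix (items x sessions)

D_v_inv = np.diag(1.0 / H.sum(1))        # item (vertex) degrees
D_e_inv = np.diag(1.0 / H.sum(0))        # session (hyperedge) degrees
P = D_v_inv @ H @ D_e_inv @ H.T          # propagation: item -> session -> item
X = np.random.default_rng(3).normal(size=(4, 5))   # item embeddings
X1 = P @ X                               # one hypergraph convolution, no weight matrix
print(X1.shape)  # (4, 5)
```

Each row of `P` sums to one, so the step averages an item's embedding with those of items it co-occurs with in shared sessions, which captures beyond-pairwise transitions directly.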