Neural Graph Collaborative Filtering
Learning vector representations (aka. embeddings) of users and items lies at
the core of modern recommender systems. Ranging from early matrix factorization
to recently emerged deep learning based methods, existing efforts typically
obtain a user's (or an item's) embedding by mapping from pre-existing features
that describe the user (or the item), such as ID and attributes. We argue that
an inherent drawback of such methods is that the collaborative signal, which
is latent in user-item interactions, is not encoded in the embedding process.
As such, the resultant embeddings may not be sufficient to capture the
collaborative filtering effect.
In this work, we propose to integrate the user-item interactions -- more
specifically the bipartite graph structure -- into the embedding process. We
develop a new recommendation framework Neural Graph Collaborative Filtering
(NGCF), which exploits the user-item graph structure by propagating embeddings
on it. This leads to the expressive modeling of high-order connectivity in
the user-item graph, effectively injecting the collaborative signal into the
embedding process in an explicit manner. We conduct extensive experiments on
three public benchmarks, demonstrating significant improvements over several
state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further
analysis verifies the importance of embedding propagation for learning better
user and item representations, confirming the rationale and effectiveness of
NGCF. Code is available at
https://github.com/xiangwang1223/neural_graph_collaborative_filtering.
Comment: SIGIR 2019; the latest version of the NGCF paper, which is distinct from the version published in the ACM Digital Library.
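To make the embedding-propagation idea concrete, the following is a minimal sketch (in NumPy, with illustrative sizes and names; not the authors' implementation, which is at the repository above) of one propagation layer over the symmetrically normalized user-item bipartite adjacency:

```python
# Minimal sketch of embedding propagation on a user-item bipartite graph.
# Sizes, the random interaction matrix, and variable names are illustrative only.
import numpy as np

n_users, n_items, dim = 4, 5, 8
rng = np.random.default_rng(0)

# Binary interaction matrix R (users x items) and initial ID embeddings.
R = (rng.random((n_users, n_items)) < 0.4).astype(float)
E = rng.normal(size=(n_users + n_items, dim))        # stacked [user; item] embeddings

# Bipartite adjacency A = [[0, R], [R^T, 0]], normalized as D^{-1/2} A D^{-1/2}.
A = np.block([[np.zeros((n_users, n_users)), R],
              [R.T, np.zeros((n_items, n_items))]])
deg = A.sum(axis=1)
d_inv_sqrt = np.zeros_like(deg)
d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# One propagation layer: every node aggregates its neighbors' embeddings,
# injecting first-order collaborative signal into the representations.
W = rng.normal(scale=0.1, size=(dim, dim))
msg = A_norm @ E @ W                                  # aggregate neighbor messages
E_next = np.where(msg > 0, msg, 0.2 * msg)            # LeakyReLU non-linearity

user_emb, item_emb = E_next[:n_users], E_next[n_users:]
print(user_emb.shape, item_emb.shape)                 # (4, 8) (5, 8)
```

Stacking several such layers lets a user's embedding absorb signal from multi-hop neighbors, which is the high-order connectivity the abstract refers to; NGCF itself adds further interaction terms between a node and its neighbors on top of this basic aggregation.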
A Survey of Imbalanced Learning on Graphs: Problems, Techniques, and Future Directions
Graphs represent interconnected structures prevalent in a myriad of
real-world scenarios. Effective graph analytics, such as graph learning
methods, enables users to gain profound insights from graph data, underpinning
various tasks including node classification and link prediction. However, these
methods often suffer from data imbalance, a common issue in graph data where
certain segments possess abundant data while others are scarce, thereby leading
to biased learning outcomes. This necessitates the emerging field of imbalanced
learning on graphs, which aims to correct these data distribution skews for
more accurate and representative learning outcomes. In this survey, we embark
on a comprehensive review of the literature on imbalanced learning on graphs.
We begin by providing a clear definition of the concept and related
terminology, establishing a strong foundation for readers.
Following this, we propose two comprehensive taxonomies: (1) the problem
taxonomy, which describes the forms of imbalance we consider, the associated
tasks, and potential solutions; (2) the technique taxonomy, which details key
strategies for addressing these imbalances, and aids readers in their method
selection process. Finally, we suggest prospective future directions for both
problems and techniques within the sphere of imbalanced learning on graphs,
fostering further innovation in this critical area.
Comment: The collection of awesome literature on imbalanced learning on graphs: https://github.com/Xtra-Computing/Awesome-Literature-ILoG
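As a concrete illustration of the distribution skew the survey addresses, one common baseline remedy for class-imbalanced node classification is to reweight the training loss by inverse class frequency so that minority-class nodes contribute more. The sketch below is illustrative only and is not drawn from the survey; the labels, logits, and weighting scheme are assumptions:

```python
# Illustrative sketch: inverse-frequency class weighting for imbalanced node classification.
import numpy as np

labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])        # skewed class distribution over nodes
n_classes = labels.max() + 1

counts = np.bincount(labels, minlength=n_classes)
weights = counts.sum() / (n_classes * counts)          # rarer classes get larger weights

# Logits would come from a graph model (e.g., a GNN); random here for illustration.
rng = np.random.default_rng(0)
logits = rng.normal(size=(len(labels), n_classes))
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Weighted cross-entropy: each node's loss is scaled by its class weight.
node_loss = -np.log(probs[np.arange(len(labels)), labels])
balanced_loss = (weights[labels] * node_loss).mean()
print(weights, balanced_loss)
```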
Search Efficient Binary Network Embedding
Traditional network embedding primarily focuses on learning a dense vector
representation for each node, which encodes network structure and/or node
content information, such that off-the-shelf machine learning algorithms can be
easily applied to the vector-format node representations for network analysis.
However, the learned dense vector representations are inefficient for
large-scale similarity search, which requires finding the nearest neighbor
measured by Euclidean distance in a continuous vector space. In this paper, we
propose a search efficient binary network embedding algorithm called BinaryNE
to learn a sparse binary code for each node, by simultaneously modeling node
context relations and node attribute relations through a three-layer neural
network. BinaryNE learns binary node representations efficiently through a
stochastic gradient descent based online learning algorithm. The learned binary
encoding not only reduces memory usage to represent each node, but also allows
fast bit-wise comparisons to support much quicker network node search compared
to Euclidean distance or other distance measures. Our experiments and
comparisons show that BinaryNE not only delivers more than 23 times faster
search speed, but also provides comparable or better search quality than
traditional continuous-vector-based network embedding methods.
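The speed advantage comes from replacing floating-point Euclidean distance with Hamming distance over packed bit codes, which reduces to a bit-wise XOR followed by a popcount. The sketch below is a minimal illustration of that search step, not the BinaryNE code; the code length, database size, and function name are assumptions:

```python
# Illustrative sketch: nearest-neighbor search over binary node codes via XOR + popcount.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_bits = 1000, 128

codes = rng.integers(0, 2, size=(n_nodes, n_bits), dtype=np.uint8)   # binary node codes
packed = np.packbits(codes, axis=1)                                   # 128 bits -> 16 bytes per node

def hamming_search(query_bits, packed_db, top_k=5):
    """Return indices of the top_k nodes closest to the query in Hamming distance."""
    q = np.packbits(query_bits)
    xor = np.bitwise_xor(packed_db, q)                 # bytes whose set bits mark disagreements
    dist = np.unpackbits(xor, axis=1).sum(axis=1)      # popcount per node = Hamming distance
    return np.argsort(dist)[:top_k]

query = rng.integers(0, 2, size=n_bits, dtype=np.uint8)
print(hamming_search(query, packed))
```

Packing 128-bit codes into 16 bytes per node also shows where the memory savings come from relative to storing 128 floating-point dimensions per node.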