452 research outputs found
Connector 0.5: A unified framework for graph representation learning
Graph representation learning models aim to map the graph structure and
its features into low-dimensional vectors in a latent space, which can benefit
various downstream tasks, such as node classification and link prediction. Due
to its powerful graph data modelling capabilities, various graph embedding
models and libraries have been proposed to learn embeddings and help
researchers conduct experiments more easily. In this paper, we introduce a novel
graph representation framework covering various graph embedding models, ranging
from shallow to state-of-the-art models, namely Connector. First, we consider
graph generation by constructing various types of graphs with different
structural relations, including homogeneous, signed, heterogeneous, and
knowledge graphs. Second, we introduce various graph representation learning
models, ranging from shallow to deep graph embedding models. Finally, we plan
to build an efficient open-source framework that can provide deep graph
embedding models to represent structural relations in graphs. The framework is
available at https://github.com/NSLab-CUK/Connector.
Comment: A unified framework for graph representation learning
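Shallow graph embedding models of the kind such frameworks cover are often trained on truncated random walks, whose node sequences are fed to a skip-gram model as if they were sentences. A minimal sketch of the walk-generation step is below; the graph, function names, and parameters are illustrative assumptions, not the Connector API.

```python
import random

# Toy homogeneous graph as adjacency lists (illustrative, not from the paper).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walks(graph, num_walks=10, walk_len=5, seed=42):
    """Generate truncated random walks; a skip-gram model (e.g. word2vec)
    would consume these as 'sentences' to learn shallow node embeddings."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in graph:          # one walk per node per round
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(graph[walk[-1]]))  # step to a neighbor
            walks.append(walk)
    return walks

walks = random_walks(graph)  # 10 rounds x 4 start nodes = 40 walks
```

Deep models in such frameworks replace this corpus-based step with learned message passing, but the input graph construction (homogeneous, signed, heterogeneous, or knowledge graph) stays the same.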
Learning Disentangled Representations in Signed Directed Graphs without Social Assumptions
Signed graphs are complex systems that represent trust relationships or
preferences in various domains. Learning node representations in such graphs is
crucial for many mining tasks. Although real-world signed relationships can be
influenced by multiple latent factors, most existing methods often oversimplify
the modeling of signed relationships by relying on social theories and treating
them as simplistic factors. This limits their expressiveness and their ability
to capture the diverse factors that shape these relationships. In this paper,
we propose DINES, a novel method for learning disentangled node representations
in signed directed graphs without social assumptions. We adopt a disentangled
framework that separates each embedding into distinct factors, allowing for
capturing multiple latent factors. We also explore lightweight graph
convolutions that focus solely on sign and direction, without depending on
social theories. Additionally, we propose a decoder that effectively classifies
an edge's sign by considering correlations between the factors. To further
enhance disentanglement, we jointly train a self-supervised factor
discriminator with our encoder and decoder. Through extensive experiments on
real-world signed directed graphs, we show that DINES effectively learns
disentangled node representations, and significantly outperforms its
competitors in the sign prediction task.
Comment: 26 pages, 11 figures
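The core ideas of the abstract, splitting each node embedding into distinct factors and classifying an edge's sign from correlations between factor pairs, can be sketched with plain arrays. This is a hypothetical illustration, not the authors' DINES implementation; the factor count, dimensions, and decoder weights are assumed and would be learned in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 8                        # assumed: 4 latent factors, 8 dims each
z_u = rng.standard_normal((K, d))  # source-node embedding, one row per factor
z_v = rng.standard_normal((K, d))  # target-node embedding

# Factor-correlation features: dot products between every pair of factors,
# so the decoder can weigh how factor interactions predict the sign.
corr = z_u @ z_v.T                 # shape (K, K)

w = rng.standard_normal(K * K)     # decoder weights (learned in the real model)
score = corr.ravel() @ w
sign = 1 if score > 0 else -1      # predicted edge sign
```

A self-supervised factor discriminator, as described in the abstract, would additionally be trained to tell the K factors apart, pushing the rows of each embedding toward capturing distinct latent causes.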
HHMF: hidden hierarchical matrix factorization for recommender systems
Matrix factorization (MF) is one of the most powerful techniques used in recommender systems. MF models the (user, item) interactions behind historical explicit or implicit ratings. Standard MF does not capture the hierarchical structural correlations, such as publisher and advertiser in advertisement recommender systems, or the taxonomy (e.g., tracks, albums, artists, genres) in music recommender systems. There are a few hierarchical MF approaches, but they require the hierarchical structures to be known beforehand. In this paper, we propose a Hidden Hierarchical Matrix Factorization (HHMF) technique, which learns the hidden hierarchical structure from the user-item rating records. HHMF does not require the prior knowledge of hierarchical structure; hence, as opposed to..
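The standard MF setup this abstract builds on can be sketched in a few lines: factor the observed (user, item) ratings into two low-rank matrices by stochastic gradient descent, treating zeros as missing. This is a generic baseline sketch with assumed hyperparameters, not the HHMF method itself, which additionally learns a hidden hierarchy over users and items.

```python
import numpy as np

def matrix_factorization(ratings, k=2, lr=0.01, reg=0.02, epochs=500, seed=0):
    """Factor a (user x item) rating matrix into U (n x k) and V (m x k) by
    SGD on observed entries only; zero entries are treated as missing."""
    rng = np.random.default_rng(seed)
    n, m = ratings.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    observed = [(i, j, ratings[i, j])
                for i in range(n) for j in range(m) if ratings[i, j] > 0]
    for _ in range(epochs):
        for i, j, r in observed:
            err = r - U[i] @ V[j]               # prediction error on one rating
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 1, 5]], dtype=float)
U, V = matrix_factorization(R)
pred = U @ V.T  # predicted ratings, including the formerly missing entries
```

Hierarchical variants constrain or share these latent factors along a taxonomy (e.g., track factors shrunk toward their album's factors); HHMF's contribution is inferring that taxonomy from the rating records instead of requiring it beforehand.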