Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks
We propose a distantly supervised relation extraction approach for
long-tailed, imbalanced data, which is prevalent in real-world settings. Here,
the challenge is to learn accurate "few-shot" models for classes existing at
the tail of the class distribution, for which little data is available.
Inspired by the rich semantic correlations between classes at the long tail and
those at the head, we take advantage of the knowledge from data-rich classes at
the head of the distribution to boost the performance of the data-poor classes
at the tail. First, we propose to leverage implicit relational knowledge among
class labels from knowledge graph embeddings and learn explicit relational
knowledge using graph convolution networks. Second, we integrate that
relational knowledge into the relation extraction model via a coarse-to-fine
knowledge-aware attention mechanism. We demonstrate our results on a
large-scale benchmark dataset, which show that our approach significantly
outperforms other baselines, especially for long-tail relations.
Comment: To be published in NAACL 2019
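The abstract only names the mechanism, so a minimal sketch may help fix ideas. The snippet below is a hypothetical coarse-to-fine knowledge-aware attention module, not the authors' code: relation-label embeddings (for example, pretrained knowledge graph embeddings refined by a GCN over the relation hierarchy) act as coarse- and fine-grained queries over the sentence encodings of a distantly supervised bag. All class and variable names are illustrative.

```python
# Hypothetical sketch of coarse-to-fine knowledge-aware attention (not the
# authors' code): relation-label embeddings serve as queries that attend over
# the sentences in a distantly supervised bag at two levels of granularity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineAttention(nn.Module):
    def __init__(self, sent_dim: int, rel_dim: int):
        super().__init__()
        # one projection per granularity level (coarse parent relation, fine relation)
        self.proj_coarse = nn.Linear(rel_dim, sent_dim)
        self.proj_fine = nn.Linear(rel_dim, sent_dim)

    def attend(self, query: torch.Tensor, sents: torch.Tensor) -> torch.Tensor:
        # sents: (bag_size, sent_dim), query: (sent_dim,)
        scores = sents @ query                 # (bag_size,)
        alpha = F.softmax(scores, dim=0)       # attention weights over the bag
        return alpha @ sents                   # weighted bag representation

    def forward(self, sents, coarse_rel_emb, fine_rel_emb):
        # coarse_rel_emb / fine_rel_emb: label embeddings for the candidate
        # relation at two levels of the relation hierarchy
        bag_coarse = self.attend(self.proj_coarse(coarse_rel_emb), sents)
        bag_fine = self.attend(self.proj_fine(fine_rel_emb), sents)
        # concatenated bag representation would be fed to the relation classifier
        return torch.cat([bag_coarse, bag_fine], dim=-1)
```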
Redundancy-Free Self-Supervised Relational Learning for Graph Clustering
Graph clustering, which learns the node representations for effective cluster
assignments, is a fundamental yet challenging task in data analysis and has
received considerable attention accompanied by graph neural networks in recent
years. However, most existing methods overlook the inherent relational
information among the non-independent and non-identically distributed nodes in
a graph. Due to the lack of exploration of relational attributes, the semantic
information of the graph-structured data fails to be fully exploited which
leads to poor clustering performance. In this paper, we propose a novel
self-supervised deep graph clustering method named Relational Redundancy-Free
Graph Clustering (R2FGC) to tackle this problem. It extracts the attribute-
and structure-level relational information from both global and local views
based on an autoencoder and a graph autoencoder. To obtain effective
representations of the semantic information, we preserve consistent
relations among augmented nodes while reducing redundant relations to learn
discriminative embeddings. In addition, a simple yet effective
strategy is utilized to alleviate the over-smoothing issue. Extensive
experiments are performed on widely used benchmark datasets to validate the
superiority of our R2FGC over state-of-the-art baselines. Our code is
available at https://github.com/yisiyu95/R2FGC.
Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems
(TNNLS 2024)
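As one way to read the "preserve consistent relations, reduce redundant relations" objective, here is a rough cross-correlation-style loss between two augmented views of the same nodes. This is an assumption on my part rather than the released R2FGC code; the actual method also extracts attribute- and structure-level relations from global and local views.

```python
# Rough illustration (an assumption, not the released R2FGC code) of a
# redundancy-reduction objective between two augmented views of the same nodes:
# corresponding embeddings are pushed to agree (consistency) while correlations
# between different embedding dimensions are suppressed (redundancy).
import torch

def redundancy_free_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 5e-3):
    # z1, z2: (num_nodes, dim) embeddings from two augmentations of the graph
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / z1.size(0)                        # (dim, dim) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()      # keep consistent relations
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # reduce redundancy
    return on_diag + lam * off_diag
```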
Animating the development of Social Networks over time using a dynamic extension of multidimensional scaling
The animation of network visualizations poses technical and theoretical
challenges. Fairly stable patterns are required before the mental map enables a
user to make inferences over time. In order to enhance stability, we developed
an extension of stress minimization that incorporates development over time.
This dynamic layouter is no longer based on linear interpolation between
independent static visualizations; instead, change over time is used as a
parameter in the optimization. Because of our focus on structural change versus
stability, the attention is shifted from the relational graph to the latent eigenvectors of
matrices. The approach is illustrated with animations for the journal citation
environments of Social Networks, the (co-)author networks in the carrying
community of this journal, and the topical development using relations among
its title words. Our results are also compared with animations based on
PajekToSVGAnim and SoNIA.
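The central technical idea, change over time as a parameter in the optimization rather than post-hoc interpolation, can be sketched as a stress function with a temporal stability term. The following is a simplified illustration under that assumption, not the authors' layouter; omega is a hypothetical weight trading off fit to the current distance matrix against node movement between consecutive layouts.

```python
# Schematic sketch (a simplification, not the authors' implementation) of
# stress minimization with change over time as an explicit term: each time
# slice is laid out to match its distance matrix while successive layouts are
# penalized for moving nodes, which stabilizes the animation and the mental map.
import numpy as np

def dynamic_stress(positions, distances, omega=1.0):
    # positions: list of (n, 2) layouts, one per time slice
    # distances: list of (n, n) target distance matrices, one per time slice
    total = 0.0
    for t, (X, D) in enumerate(zip(positions, distances)):
        diff = X[:, None, :] - X[None, :, :]
        d_layout = np.sqrt((diff ** 2).sum(-1))       # pairwise layout distances
        total += ((d_layout - D) ** 2).sum() / 2      # static stress at time t
        if t > 0:                                     # temporal stability term
            total += omega * ((X - positions[t - 1]) ** 2).sum()
    return total
```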
Bot-MGAT: A Transfer Learning Model Based on a Multi-View Graph Attention Network to Detect Social Bots
Twitter, as a popular social network, has been targeted by different bot attacks. Detecting social bots is a challenging task due to their evolving capacity to avoid detection. Extensive research efforts have proposed different techniques and approaches to solve this problem. Due to the scarcity of recently updated labeled data, the performance of detection systems degrades when exposed to a new dataset. Therefore, semi-supervised learning (SSL) techniques, which use both labeled and unlabeled examples, can improve performance. In this paper, we propose a framework based on a multi-view graph attention mechanism using a transfer learning (TL) approach to predict social bots. We call the framework 'Bot-MGAT', which stands for bot multi-view graph attention network. The framework uses both labeled and unlabeled data, and relies on profile features to reduce the overhead of feature engineering. We executed our experiments on a recent benchmark dataset that includes representative samples of social bots with graph structural information and profile features only, and applied cross-validation to avoid uncertainty in the model's performance. Bot-MGAT was evaluated against graph SSL techniques: single-view graph attention networks (GAT), graph convolutional networks (GCN), and relational graph convolutional networks (RGCN), and compared to related work in the field of bot detection. Bot-MGAT with TL outperformed these baselines, with an accuracy of 97.8%, an F1 score of 0.9842, and an MCC score of 0.9481
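To make the multi-view idea concrete, the sketch below assumes two relation views (for example, follower and following edges) that each get their own graph attention layer before fusion; the view choice, layer sizes, and fine-tuning recipe are assumptions for illustration, not the paper's released architecture.

```python
# Hedged sketch (assumed architecture, not the paper's code) of a multi-view GAT
# for bot detection: each relation view has its own attention layer, the view
# embeddings are fused, and the model can be pretrained on a labeled source
# graph and fine-tuned on a sparsely labeled target graph (the TL step).
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class MultiViewGAT(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes=2, heads=4):
        super().__init__()
        self.view_a = GATConv(in_dim, hid_dim, heads=heads, concat=False)  # e.g. follower edges
        self.view_b = GATConv(in_dim, hid_dim, heads=heads, concat=False)  # e.g. following edges
        self.classifier = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, x, edge_index_a, edge_index_b):
        h_a = torch.relu(self.view_a(x, edge_index_a))
        h_b = torch.relu(self.view_b(x, edge_index_b))
        return self.classifier(torch.cat([h_a, h_b], dim=-1))

# Transfer learning: load weights trained on the source dataset, then fine-tune
# on the few labeled target nodes while unlabeled nodes still contribute
# structure through message passing.
```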
Relational Attention: Generalizing Transformers for Graph-Structured Tasks
Transformers flexibly operate over sets of real-valued vectors representing
task-specific entities and their attributes, where each vector might encode one
word-piece token and its position in a sequence, or some piece of information
that carries no position at all. But as set processors, transformers are at a
disadvantage in reasoning over more general graph-structured data where nodes
represent entities and edges represent relations between entities. To address
this shortcoming, we generalize transformer attention to consider and update
edge vectors in each transformer layer. We evaluate this relational transformer
on a diverse array of graph-structured tasks, including the large and
challenging CLRS Algorithmic Reasoning Benchmark. There, it dramatically
outperforms state-of-the-art graph neural networks expressly designed to reason
over graph-structured data. Our analysis demonstrates that these gains are
attributable to relational attention's inherent ability to leverage the greater
expressivity of graphs over sets.
Comment: The Eleventh International Conference on Learning Representations,
ICLR 2023
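A simplified, dense-graph sketch of the mechanism, attention that conditions on per-edge vectors and also updates them in every layer, is given below. It is a paraphrase of the idea rather than the paper's exact formulation; multi-head attention, normalization, and feed-forward blocks are omitted.

```python
# Simplified sketch (not the paper's exact formulation) of relational attention:
# keys and values condition on per-edge vectors, and each layer also produces
# updated edge vectors from the incident node states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalAttentionLayer(nn.Module):
    def __init__(self, d_node, d_edge):
        super().__init__()
        self.q = nn.Linear(d_node, d_node)
        self.k = nn.Linear(d_node + d_edge, d_node)   # keys see sender node and edge
        self.v = nn.Linear(d_node + d_edge, d_node)   # values do too
        self.edge_update = nn.Linear(2 * d_node + d_edge, d_edge)

    def forward(self, nodes, edges):
        # nodes: (n, d_node); edges: (n, n, d_edge), a dense edge-feature tensor
        n = nodes.size(0)
        q = self.q(nodes)                                   # (n, d_node)
        src = nodes.unsqueeze(0).expand(n, n, -1)           # sender node j for pair (i, j)
        kv_in = torch.cat([src, edges], dim=-1)             # (n, n, d_node + d_edge)
        k, v = self.k(kv_in), self.v(kv_in)
        scores = (q.unsqueeze(1) * k).sum(-1) / nodes.size(-1) ** 0.5   # (n, n)
        attn = F.softmax(scores, dim=-1)
        new_nodes = (attn.unsqueeze(-1) * v).sum(1)         # (n, d_node)
        dst = nodes.unsqueeze(1).expand(n, n, -1)           # receiver node i
        new_edges = self.edge_update(torch.cat([dst, src, edges], dim=-1))
        return new_nodes, new_edges
```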