15,414 research outputs found
Few-shot Message-Enhanced Contrastive Learning for Graph Anomaly Detection
Graph anomaly detection plays a crucial role in identifying exceptional
instances in graph data that deviate significantly from the majority. It has
gained substantial attention in various domains of information security,
including network intrusion, financial fraud, and malicious comments.
Existing methods are developed primarily in an unsupervised manner because
labeled data are difficult to obtain. Without guidance from prior knowledge,
however, the anomalies identified in an unsupervised setting may turn out to be
data noise or isolated instances. In real-world scenarios, a limited batch of labeled
anomalies can be captured, making it crucial to investigate the few-shot
problem in graph anomaly detection. Taking advantage of this potential, we
propose a novel few-shot Graph Anomaly Detection model called FMGAD (Few-shot
Message-Enhanced Contrastive-based Graph Anomaly Detector). FMGAD leverages a
self-supervised contrastive learning strategy within and across views to
capture intrinsic and transferable structural representations. Furthermore, we
propose the Deep-GNN message-enhanced reconstruction module, which extensively
exploits the few-shot label information and enables long-range propagation to
disseminate supervision signals to deeper unlabeled nodes. This module in turn
assists in the training of self-supervised contrastive learning. Comprehensive
experimental results on six real-world datasets demonstrate that FMGAD can
achieve better performance than other state-of-the-art methods on both
artificially injected and domain-organic anomalies.
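The cross-view contrastive strategy described above can be illustrated with a minimal InfoNCE-style sketch: embeddings of the same node from two graph views form a positive pair, and all other nodes act as negatives. This is a hypothetical, simplified stand-in for FMGAD's actual objective; the cosine similarity and the temperature `tau` are assumptions, not details from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cross_view_infonce(view1, view2, tau=0.5):
    """Simplified cross-view contrastive loss (hypothetical sketch).

    For each node i, its embedding in view1 is the anchor; the same
    node's embedding in view2 is the positive, and every other node
    in view2 is a negative. Returns the mean InfoNCE loss.
    """
    losses = []
    for i, anchor in enumerate(view1):
        sims = [math.exp(cosine(anchor, z) / tau) for z in view2]
        losses.append(-math.log(sims[i] / sum(sims)))
    return sum(losses) / len(losses)
```

When the two views agree (each node's positive is closer than the negatives), the loss is low; shuffling one view raises it, which is the signal that drives the representation to be view-invariant.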
Prototype Propagation Networks (PPN) for Weakly-supervised Few-shot Learning on Category Graph
A variety of machine learning applications expect to achieve rapid learning
from a limited number of labeled data. However, the success of most current
models is the result of heavy training on big data. Meta-learning addresses
this problem by extracting common knowledge across different tasks that can be
quickly adapted to new tasks. However, most meta-learning methods do not fully
exploit weakly-supervised information, which is usually free or cheap to collect. In
this paper, we show that weakly-labeled data can significantly improve the
performance of meta-learning on few-shot classification. We propose the
Prototype Propagation Network (PPN), trained on few-shot tasks together with
data annotated with coarse labels. Given a category graph of the targeted fine-classes
and some weakly-labeled coarse-classes, PPN learns an attention mechanism which
propagates the prototype of one class to another on the graph, so that the
K-nearest neighbor (KNN) classifier defined on the propagated prototypes
results in high accuracy across different few-shot tasks. The training tasks
are generated by subgraph sampling, and the training objective is obtained by
accumulating the level-wise classification loss on the subgraph. The resulting
graph of prototypes can be continually re-used and updated for new tasks and
classes. We also introduce two practical test/inference settings which differ
according to whether the test task can leverage any weakly-supervised
information as in training. On two benchmarks, PPN significantly outperforms
most recent few-shot learning methods in different settings, even when they are
also allowed to train on weakly-labeled data.
Comment: Accepted to IJCAI 201
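The propagation-plus-KNN idea in the PPN abstract can be sketched as follows: each class prototype is mixed with an attention-weighted average of its neighbors on the category graph, and queries are then classified by their nearest propagated prototype. This is a hypothetical one-step sketch; the edge direction convention, the dot-product attention, and the mixing weight `alpha` are assumptions rather than the paper's exact formulation.

```python
import math

def propagate_prototypes(protos, edges, alpha=0.5):
    """One step of attention-weighted prototype propagation (sketch).

    protos: dict mapping class name -> prototype vector.
    edges:  list of (child, parent) pairs on the category graph;
            a class attends over the classes it points to.
    alpha:  how much of the original prototype to keep (assumption).
    """
    new = {}
    for c, p in protos.items():
        nbrs = [v for u, v in edges if u == c]
        if not nbrs:
            new[c] = list(p)
            continue
        # Softmax over dot-product attention scores to the neighbors.
        scores = [math.exp(sum(a * b for a, b in zip(p, protos[n]))) for n in nbrs]
        z = sum(scores)
        agg = [sum((w / z) * protos[n][d] for w, n in zip(scores, nbrs))
               for d in range(len(p))]
        new[c] = [alpha * a + (1 - alpha) * b for a, b in zip(p, agg)]
    return new

def nearest_prototype(x, protos):
    # 1-NN classification by squared Euclidean distance to prototypes.
    return min(protos, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, protos[c])))
```

A fine class with a single coarse parent simply interpolates toward that parent's prototype, which is how weak coarse-label supervision leaks into the fine-class prototypes.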
Graph Few-shot Learning via Knowledge Transfer
Towards the challenging problem of semi-supervised node classification, there
have been extensive studies. As a frontier, Graph Neural Networks (GNNs) have
aroused great interest recently, which update the representation of each node
by aggregating information of its neighbors. However, most GNNs have shallow
layers with a limited receptive field and may not achieve satisfactory
performance especially when the number of labeled nodes is quite small. To
address this challenge, we innovatively propose a graph few-shot learning (GFL)
algorithm that incorporates prior knowledge learned from auxiliary graphs to
improve classification accuracy on the target graph. Specifically, a
transferable metric space characterized by a node embedding and a
graph-specific prototype embedding function is shared between auxiliary graphs
and the target, facilitating the transfer of structural knowledge. Extensive
experiments and ablation studies on four real-world graph datasets demonstrate
the effectiveness of our proposed model.
Comment: Full paper (with Appendix) of AAAI 202
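The transferable metric space in the GFL abstract amounts to a prototype-based classifier: class prototypes are built from the few labeled (support) nodes in a shared embedding space, and unlabeled nodes are assigned to the nearest prototype. The sketch below is a minimal illustration under that assumption; the mean-pooling prototype and squared-distance rule are generic choices, not GFL's specific graph-structured prototype embedding function.

```python
def class_prototypes(support):
    """Mean-pool support embeddings per class (minimal sketch).

    support: list of (embedding, label) pairs for the few labeled nodes.
    Returns a dict mapping label -> prototype vector.
    """
    sums, counts = {}, {}
    for x, y in support:
        if y not in sums:
            sums[y] = [0.0] * len(x)
            counts[y] = 0
        sums[y] = [s + a for s, a in zip(sums[y], x)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def classify(query, protos):
    # Assign the query embedding to the nearest class prototype.
    return min(protos, key=lambda y: sum((a - b) ** 2 for a, b in zip(query, protos[y])))
```

Because only the embedding function is shared across graphs, prototypes can be recomputed cheaply on each new target graph from its handful of labeled nodes.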