Class-Balanced and Reinforced Active Learning on Graphs
Graph neural networks (GNNs) have demonstrated significant success in various
applications, such as node classification, link prediction, and graph
classification. Active learning for GNNs aims to query valuable samples from
the unlabeled data for annotation so as to maximize the GNNs' performance at a
lower labeling cost. However, most existing algorithms for reinforced active
learning in GNNs may lead to a highly imbalanced class distribution, especially
in scenarios with highly skewed classes. GNNs trained with class-imbalanced
labeled data are susceptible to bias toward the majority classes, and the lower
performance on minority classes may drag down overall performance. To tackle this
issue, we propose a novel class-balanced and reinforced active learning
framework for GNNs, namely, GCBR. It learns an optimal policy to acquire
class-balanced and informative nodes for annotation, maximizing the performance
of GNNs trained with selected labeled nodes. GCBR designs class-balance-aware
states, as well as a reward function that achieves a trade-off between model
performance and class balance. The reinforcement learning algorithm Advantage
Actor-Critic (A2C) is employed to learn an optimal policy stably and
efficiently. We further upgrade GCBR to GCBR++ by introducing a punishment
mechanism to obtain a more class-balanced labeled set. Extensive experiments on
multiple datasets demonstrate the effectiveness of the proposed approaches,
achieving superior performance over state-of-the-art baselines.
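As a rough illustration of the performance/class-balance trade-off described above, the sketch below combines a validation score with an entropy-based balance term. The weighting `alpha` and the entropy formulation are assumptions made for this sketch, not GCBR's exact reward.

```python
import numpy as np

def class_balance_reward(val_accuracy, selected_labels, num_classes, alpha=0.5):
    """Illustrative reward trading off model performance against class balance.

    The trade-off weight `alpha` and the entropy-based balance term are
    assumptions for this sketch, not the exact formulation used by GCBR.
    """
    counts = np.bincount(selected_labels, minlength=num_classes).astype(float)
    probs = counts / counts.sum()
    # Normalized entropy of the labeled-set class distribution:
    # 1.0 means perfectly balanced, 0.0 means all labels in a single class.
    balance = -np.sum(probs * np.log(probs + 1e-12)) / np.log(num_classes)
    return alpha * val_accuracy + (1.0 - alpha) * balance

# Example: 10 labeled nodes skewed toward class 0 out of 3 classes.
labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 2])
print(class_balance_reward(val_accuracy=0.72, selected_labels=labels, num_classes=3))
```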
NHERF1 regulates the progression of colorectal cancer through the interplay with VEGFR2 pathway
The oncogenic role of ectopic expression of Na+/H+ exchanger regulatory factor 1 (NHERF1) was recently suggested in colorectal cancer, where it was implicated in the tumor hypoxia microenvironment. Here we showed that high-level expression of NHERF1 was found in colorectal cancer tissues and that the expression of NHERF1 was positively correlated with VEGFR2 expression. The prognostic value of VEGFR2 expression in colorectal cancer relied on the expression of NHERF1. The up-regulation of NHERF1 induced by exposure to hypoxia in colon cancer cells depended on the activation of VEGFR2 signaling. NHERF1 in turn inhibited the activation of VEGFR2 signaling, which could be regulated by the interaction between NHERF1 and VEGFR2, resulting in reduced migration and invasion of colon cancer cells. These results suggest a dynamic interplay between NHERF1 and VEGFR2 signaling in colorectal cancer, which could explain the contribution of NHERF1 to the regulation of tumor cell responses to the hypoxia microenvironment.
Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks
Graphs have become an important modeling tool for web applications, and Graph
Neural Networks (GNNs) have achieved great success in graph representation
learning. However, the performance of traditional GNNs heavily relies on a
large amount of supervision. Recently, "pre-train, fine-tune" has become the
paradigm to address the issues of label dependency and poor generalization.
However, the pre-training strategies vary for graphs with homophily and
heterophily, and the objectives for various downstream tasks also differ. This
leads to a gap between pretexts and downstream tasks, resulting in "negative
transfer" and poor performance. Inspired by prompt learning in Natural
Language Processing (NLP), many studies have turned to prompting to bridge the
gap and fully leverage the pre-trained model. However, existing methods for
graph prompting are tailored to homophily and neglect the inherent heterophily
of graphs. Meanwhile, most of them rely on randomly initialized prompts, which
negatively impacts stability. Therefore, we propose Self-Prompt, a
prompting framework for graphs based on the model and data itself. We first
introduce asymmetric graph contrastive learning for pretext to address
heterophily and align the objectives of pretext and downstream tasks. Then we
reuse the component from the pre-training phase as the self-adapter and
introduce self-prompts based on the graph itself for task adaptation. Finally, we conduct
extensive experiments on 11 benchmark datasets to demonstrate its superiority.
We provide our code at https://github.com/gongchenghua/Self-Pro.
Comment: Accepted at ECML-PKDD 202
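A minimal sketch of the self-prompt idea of deriving prompts from the model and data themselves rather than from random initialization. The prototype-based initialization below (class means of pre-trained node embeddings) is an assumption made for illustration, not Self-Pro's exact procedure.

```python
import torch

def self_prompts_from_prototypes(embeddings, labels, num_classes):
    """Initialize class prompts from prototypes of pre-trained node embeddings.

    Prototype-based initialization is an assumption for this sketch; it only
    illustrates deriving prompts from the data itself instead of using random
    initialization, not the paper's exact adaptation step.
    """
    dim = embeddings.size(1)
    prompts = torch.zeros(num_classes, dim)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            # Prototype = mean embedding of the labeled nodes in class c.
            prompts[c] = embeddings[mask].mean(dim=0)
    # Returned as a learnable parameter to be tuned during task adaptation.
    return torch.nn.Parameter(prompts)

# Toy example: pre-trained embeddings for 6 labeled nodes and 2 classes.
emb = torch.randn(6, 16)
lab = torch.tensor([0, 0, 0, 1, 1, 1])
print(self_prompts_from_prototypes(emb, lab, num_classes=2).shape)  # torch.Size([2, 16])
```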
Are LLMs Effective Backbones for Fine-tuning? An Experimental Investigation of Supervised LLMs on Chinese Short Text Matching
The recent success of Large Language Models (LLMs) has garnered significant
attention in both academia and industry. Prior research on LLMs has primarily
focused on enhancing or leveraging their generalization capabilities in zero-
and few-shot settings. However, there has been limited investigation into
effectively fine-tuning LLMs for a specific natural language understanding task
in supervised settings. In this study, we conduct an experimental analysis by
fine-tuning LLMs for the task of Chinese short text matching. We explore
various factors that influence performance when fine-tuning LLMs, including
task modeling methods, prompt formats, and output formats.
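To make the notions of prompt format and output format concrete, the sketch below builds one supervised fine-tuning example for Chinese short text matching. The instruction wording and the verbalized labels ("相同"/"不相同", i.e. "same"/"different") are illustrative assumptions; the study compares several such formats rather than prescribing this one.

```python
def build_matching_example(text_a, text_b, label=None):
    """Build one supervised fine-tuning example for Chinese short text matching.

    The instruction wording and the label verbalization are assumptions made
    for this sketch, not the specific formats evaluated in the paper.
    """
    # Prompt format: an instruction followed by the two sentences to compare.
    prompt = (
        "判断下面两句话的意思是否相同。\n"  # "Decide whether the two sentences mean the same."
        f"句子一：{text_a}\n"              # "Sentence 1: ..."
        f"句子二：{text_b}\n"              # "Sentence 2: ..."
        "回答："                           # "Answer:"
    )
    # Output format: a short verbalized label the model is trained to generate.
    target = None if label is None else ("相同" if label == 1 else "不相同")
    return {"prompt": prompt, "target": target}

example = build_matching_example("今天天气怎么样", "今天的天气如何", label=1)
print(example["prompt"])
print(example["target"])  # 相同
```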