131 research outputs found
Umbilical hypersurfaces of Minkowski spaces
In this paper, using the Gauss equation of the induced Chern connection for Finsler submanifolds, we prove that if M is an umbilical hypersurface of a Minkowski space, then either M is a Riemannian space form or a locally Minkowski space.
The special hypersurfaces of Minkowski space
Let x: (M^n, F) → (V^{n+1}, F̄) be a simply connected hypersurface in a Minkowski space (V^{n+1}, F̄). In this paper, using the Gauss formula of the Chern connection on Finsler submanifolds, we prove that if x(p) is normal to T_p(M) for all p ∈ M, then M with the induced metric is isometric to the standard Euclidean sphere.
The Devil is in the Data: Learning Fair Graph Neural Networks via Partial Knowledge Distillation
Graph neural networks (GNNs) are being increasingly used in many high-stakes
tasks, and as a result, their fairness has recently attracted growing attention.
GNNs have been shown to be unfair, as they tend to make discriminatory decisions
toward certain demographic groups defined by sensitive attributes such as
gender and race. While recent works have been devoted to improving their
fairness, they often require access to demographic information. This greatly
limits their applicability in real-world scenarios due to legal restrictions.
To address this problem, we present a demographic-agnostic method
to learn fair GNNs via knowledge distillation, namely FairGKD. Our work is
motivated by the empirical observation that training GNNs on partial data
(i.e., only node attributes or topology data) can improve their fairness,
albeit at the cost of utility. To make a balanced trade-off between fairness
and utility performance, we employ a set of fairness experts (i.e., GNNs
trained on different partial data) to construct the synthetic teacher, which
distills fairer and more informative knowledge to guide the learning of the GNN
student. Experiments on several benchmark datasets demonstrate that FairGKD,
which does not require access to demographic information, significantly
improves the fairness of GNNs while maintaining their utility. Comment: Accepted by WSDM 202
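As a rough illustration of the partial-data idea described in this abstract, here is a hedged PyTorch Geometric sketch: two "expert" GNNs, one seeing only node attributes (self-loops instead of topology) and one seeing only topology (constant features), provide soft targets that are distilled into a student alongside the usual supervised loss. The class and function names, the averaging of the experts, and the loss weighting are illustrative assumptions, not FairGKD's actual implementation.

```python
# Hypothetical sketch only; names and choices below are illustrative,
# not FairGKD's actual implementation.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    """Two-layer GCN used for both the partial-data experts and the student."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def distill_step(student, attr_expert, topo_expert, x, edge_index, y,
                 train_mask, optimizer, temperature=2.0, alpha=0.5):
    """One step: supervised cross-entropy plus KL distillation from the
    averaged soft predictions of the two partial-data experts."""
    student.train()
    optimizer.zero_grad()

    with torch.no_grad():
        num_nodes = x.size(0)
        # Attribute-only expert: topology replaced by self-loops.
        self_loops = torch.arange(num_nodes).repeat(2, 1)
        # Topology-only expert: attributes replaced by a constant feature.
        const_feat = torch.ones(num_nodes, x.size(1))
        teacher_logits = (attr_expert(x, self_loops) +
                          topo_expert(const_feat, edge_index)) / 2

    logits = student(x, edge_index)
    ce = F.cross_entropy(logits[train_mask], y[train_mask])
    kd = F.kl_div(F.log_softmax(logits / temperature, dim=-1),
                  F.softmax(teacher_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    loss = alpha * ce + (1 - alpha) * kd
    loss.backward()
    optimizer.step()
    return loss.item()
```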
Scaling Up, Scaling Deep: Blockwise Graph Contrastive Learning
Oversmoothing is a common phenomenon in graph neural networks (GNNs), in
which an increase in the network depth leads to a deterioration in their
performance. Graph contrastive learning (GCL) is emerging as a promising way of
leveraging vast unlabeled graph data. As a marriage between GNNs and
contrastive learning, it remains unclear whether GCL inherits the same
oversmoothing defect from GNNs. This work first undertakes a fundamental analysis
of GCL from the perspective of oversmoothing. We demonstrate empirically that
increasing network depth in GCL also leads to oversmoothing in its deep
representations and, surprisingly, in its shallow ones. We refer to this
phenomenon in GCL as 'long-range starvation', wherein lower layers in deep
networks suffer from degradation due to a lack of sufficient guidance from
supervision (e.g., loss computation). Based on our findings, we present BlockGCL,
a remarkably simple yet effective blockwise training framework to prevent GCL
from the notorious oversmoothing problem. Without bells and whistles, BlockGCL consistently
improves robustness and stability for well-established GCL methods with
increasing numbers of layers on real-world graph benchmarks. We believe our
work will provide insights for future improvements of scalable and deep GCL
frameworks. Comment: Preprint; code is available at
https://github.com/EdisonLeeeee/BlockGC
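The blockwise idea can be illustrated with a short, hypothetical sketch (assuming a recent PyTorch Geometric that provides `dropout_edge`): each block of GCN layers computes its own contrastive loss on two augmented views and detaches its output before feeding the next block, so every block, shallow or deep, receives direct guidance from a loss. The edge-dropout augmentation, the InfoNCE loss, and the block sizes are illustrative choices, not the exact BlockGCL recipe.

```python
# Illustrative sketch of blockwise contrastive training; not the exact
# BlockGCL recipe.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import dropout_edge


class GCNBlock(torch.nn.Module):
    """A block of GCN layers trained with its own contrastive loss."""
    def __init__(self, in_dim, out_dim, depth=2):
        super().__init__()
        dims = [in_dim] + [out_dim] * depth
        self.convs = torch.nn.ModuleList(
            GCNConv(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:]))

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        return x


def info_nce(z1, z2, tau=0.5):
    """Simple InfoNCE between matching rows of two views."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def blockwise_step(blocks, x, edge_index, optimizer, drop_p=0.2):
    """One blockwise update: per-block losses, stop-gradient between blocks."""
    optimizer.zero_grad()
    edge1, _ = dropout_edge(edge_index, p=drop_p)  # augmented view 1
    edge2, _ = dropout_edge(edge_index, p=drop_p)  # augmented view 2
    h1, h2, total = x, x, 0.0
    for block in blocks:
        h1, h2 = block(h1, edge1), block(h2, edge2)
        total = total + info_nce(h1, h2)
        h1, h2 = h1.detach(), h2.detach()          # block-local gradients only
    total.backward()
    optimizer.step()
    return float(total)
```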
SAILOR: Structural Augmentation Based Tail Node Representation Learning
Graph Neural Networks (GNNs) have recently achieved state-of-the-art performance
in representation learning on graphs. However, the effectiveness of GNNs, which
capitalize on the key operation of message propagation, depends heavily on the
quality of the topology structure. Most graphs in real-world scenarios follow a
long-tailed distribution over node degrees; that is, the vast majority of nodes
are tail nodes with only a few connected edges. GNNs produce inferior
representations for tail nodes since such nodes lack structural information. To
promote the expressiveness of GNNs on tail nodes, we explore how the deficiency
of structural information degrades their performance and propose a general
Structural Augmentation based taIL nOde Representation learning framework,
dubbed SAILOR, which can jointly learn to augment the graph
structure and extract more informative representations for tail nodes.
Extensive experiments on public benchmark datasets demonstrate that SAILOR can
significantly improve the tail node representations and outperform the
state-of-the-art baselines. Comment: Accepted by CIKM 2023; code is available at
https://github.com/Jie-Re/SAILO
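To make the notion of structural augmentation for tail nodes concrete, the following hedged sketch adds a few feature-similarity edges to every low-degree node before message passing. The degree threshold, the similarity heuristic, and the number of added edges are purely illustrative assumptions; SAILOR learns its augmentation jointly with the representations rather than applying a fixed rule like this.

```python
# Hypothetical, simplified illustration of tail-node augmentation; not
# SAILOR's learned augmentation.
import torch
import torch.nn.functional as F
from torch_geometric.utils import degree


def augment_tail_nodes(x, edge_index, num_nodes, deg_threshold=2, k=2):
    """Add k feature-similarity edges for every node whose degree falls
    below deg_threshold, and return the augmented edge_index."""
    deg = degree(edge_index[0], num_nodes=num_nodes)
    tail = (deg < deg_threshold).nonzero(as_tuple=False).view(-1)
    if tail.numel() == 0:
        return edge_index

    # Cosine similarity between node features; exclude self-matches.
    z = F.normalize(x, dim=-1)
    sim = z @ z.t()
    sim.fill_diagonal_(float("-inf"))

    targets = sim[tail].topk(k, dim=-1).indices    # k most similar nodes
    src = tail.repeat_interleave(k)
    dst = targets.reshape(-1)
    new_edges = torch.stack([src, dst])

    # Keep the graph undirected by adding both directions.
    return torch.cat([edge_index, new_edges, new_edges.flip(0)], dim=1)
```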
Spectral Adversarial Training for Robust Graph Neural Network
Recent studies demonstrate that Graph Neural Networks (GNNs) are vulnerable
to slight but adversarially designed perturbations, known as adversarial
examples. To address this issue, robust training methods against adversarial
examples have received considerable attention in the literature.
\emph{Adversarial Training (AT)} is a successful approach to learning a robust
model using adversarially perturbed training samples. Existing AT methods on
GNNs typically construct adversarial perturbations in terms of graph structures
or node features. However, on graph data they are less effective and fraught
with challenges due to the discreteness of the graph structure and the
relationships between connected examples. In this work, we seek to address these challenges
and propose Spectral Adversarial Training (SAT), a simple yet effective
adversarial training approach for GNNs. SAT first adopts a low-rank
approximation of the graph structure based on spectral decomposition, and then
constructs adversarial perturbations in the spectral domain rather than
directly manipulating the original graph structure. To investigate its
effectiveness, we employ SAT on three widely used GNNs. Experimental results on
four public graph datasets demonstrate that SAT significantly improves the
robustness of GNNs against adversarial attacks without sacrificing
classification accuracy and training efficiency. Comment: Accepted by TKDE. Code available at
https://github.com/EdisonLeeeee/SA
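A minimal, hypothetical sketch of perturbing in the spectral domain rather than editing the graph directly: take a rank-k eigendecomposition of the normalized adjacency, nudge the eigenvalues along the sign of the loss gradient, rebuild a dense propagation matrix, and train on it. The dense two-layer GCN, the single-step perturbation, and the epsilon/rank values are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch only; not the paper's exact configuration.
import torch
import torch.nn.functional as F


class DenseGCN(torch.nn.Module):
    """Two-layer GCN operating on a dense propagation matrix a_hat."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(h)


def spectral_adversarial_step(model, x, a_norm, y, train_mask,
                              optimizer, rank=50, eps=0.1):
    """One adversarial training step carried out in the spectral domain."""
    # Rank-k approximation of the symmetric normalized adjacency a_norm.
    eigval, eigvec = torch.linalg.eigh(a_norm)
    eigval, eigvec = eigval[-rank:], eigvec[:, -rank:]  # top-rank components

    # Inner step: perturb the eigenvalues to maximize the training loss.
    lam = eigval.clone().requires_grad_(True)
    a_pert = eigvec @ torch.diag(lam) @ eigvec.t()
    loss_adv = F.cross_entropy(model(x, a_pert)[train_mask], y[train_mask])
    grad, = torch.autograd.grad(loss_adv, lam)
    lam_adv = (eigval + eps * grad.sign()).detach()

    # Outer step: update the model on the spectrally perturbed graph.
    optimizer.zero_grad()
    a_adv = eigvec @ torch.diag(lam_adv) @ eigvec.t()
    loss = F.cross_entropy(model(x, a_adv)[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()
    return loss.item()
```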
- …