Given a resource-rich source graph and a resource-scarce target graph, how
can we effectively transfer knowledge across graphs and ensure good
generalization performance? In many high-impact domains (e.g., brain networks
and molecular graphs), collecting and annotating data is prohibitively
expensive and time-consuming, which makes domain adaptation an attractive
option to alleviate the label scarcity issue. In light of this, the
state-of-the-art methods focus on deriving domain-invariant graph
representations that minimize the domain discrepancy. However, it has recently
been shown that a small domain discrepancy loss may not always guarantee good
generalization performance, especially in the presence of disparate graph
structures and label distribution shifts. In this paper, we present TRANSNET, a
generic learning framework for augmenting knowledge transfer across graphs. In
particular, we introduce a novel notion, the trinity signal, which naturally
formulates various graph signals at different granularities (e.g., node
attributes, edges, and subgraphs). With that, we further propose a domain
unification module together with a trinity-signal mixup scheme to jointly
minimize the domain discrepancy and augment knowledge transfer across
graphs. Finally, comprehensive empirical results show that TRANSNET outperforms
all existing approaches on seven benchmark datasets by a significant margin.
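Although the trinity-signal mixup itself is defined in the body of the paper, the mixup idea it builds on can be illustrated with a minimal sketch: standard mixup interpolation (Zhang et al., 2018) applied to node-attribute signals drawn from a source and a target graph. All function and variable names below (e.g., mixup_node_signals, alpha) are illustrative assumptions, not the paper's API.

```python
# A minimal sketch of mixup-style augmentation on node-attribute signals,
# assuming standard mixup; the paper's trinity-signal mixup generalizes the
# same idea to edge- and subgraph-level signals as well.
import numpy as np

def mixup_node_signals(x_src, y_src, x_tgt, y_tgt, alpha=0.2, rng=None):
    """Interpolate node features and one-hot labels across two graphs.

    x_src, x_tgt: (n, d) node-attribute matrices sampled from the source
                  and target graphs (n paired/sampled nodes each).
    y_src, y_tgt: (n, c) one-hot label matrices.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # mixing weight ~ Beta(alpha, alpha)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt    # convex combination of features
    y_mix = lam * y_src + (1.0 - lam) * y_tgt    # matching combination of labels
    return x_mix, y_mix, lam

# Example: mix 4 source nodes with 4 target nodes (8-dim features, 3 classes).
rng = np.random.default_rng(0)
x_s, x_t = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
y_s = np.eye(3)[[0, 1, 2, 0]]
y_t = np.eye(3)[[2, 2, 1, 0]]
x_mix, y_mix, lam = mixup_node_signals(x_s, y_s, x_t, y_t, alpha=0.2, rng=rng)
```

The interpolated pairs (x_mix, y_mix) serve as additional training examples that lie between the source and target domains, which is one way such a scheme can augment cross-graph knowledge transfer.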