Temporal graph neural networks have shown promising results in learning
inductive representations by automatically extracting temporal patterns.
However, previous works often rely on complex memory modules or inefficient
random walk methods to construct temporal representations. In addition, the
existing dynamic graph encoders are non-trivial to adapt to self-supervised
paradigms, which prevents them from utilizing unlabeled data. To address these
limitations, we present an efficient yet effective attention-based encoder that
leverages temporal edge encodings and window-based subgraph sampling to
generate task-agnostic embeddings. Moreover, we propose a joint-embedding
architecture using non-contrastive SSL to learn rich temporal embeddings
without labels. Experimental results on 7 benchmark datasets indicate that on
average, our model outperforms SoTA baselines on the future link prediction
task by 4.23% in the transductive setting and 3.30% in the inductive setting,
while requiring 5-10x less training/inference time. Additionally, we
empirically validate the significance of SSL pre-training using two probing
techniques commonly used in the language and vision modalities. Lastly, different aspects of
the proposed framework are investigated through experimental analysis and
ablation studies.

Comment: Proceedings of the 19th International Workshop on Mining and Learning
with Graphs (MLG