HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers
Many graph representation learning (GRL) problems are dynamic, with millions
of edges added or removed per second. A fundamental workload in this setting is
dynamic link prediction: using a history of graph updates to predict whether a
given pair of vertices will become connected. Recent schemes for link
prediction in such dynamic settings employ Transformers, modeling individual
graph updates as single tokens. In this work, we propose HOT: a model that
enhances this line of work by harnessing higher-order (HO) graph structures;
specifically, k-hop neighbors and more general subgraphs containing a given
pair of vertices. Harnessing such HO structures by encoding them into the
attention matrix of the underlying Transformer results in higher accuracy of
link prediction outcomes, but at the expense of increased memory pressure. To
alleviate this, we resort to a recent class of schemes that impose hierarchy on
the attention matrix, significantly reducing memory footprint. The final design
offers a sweet spot between high accuracy and low memory utilization. HOT
outperforms other dynamic GRL schemes, for example achieving 9%, 7%, and 15%
higher accuracy than DyGFormer, TGN, and GraphMixer, respectively, on the
MOOC dataset. Our design can be seamlessly extended to other dynamic GRL
workloads.
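
To illustrate the core mechanism the abstract describes (encoding HO structure into the attention matrix), the PyTorch sketch below adds a learned bias, indexed by the hop distance between the vertex pairs of two update tokens, to the attention logits. This is a minimal sketch of the general idea, not HOT's actual implementation; the names ho_biased_attention, hop_dist, and hop_bias are hypothetical.

    import torch
    import torch.nn.functional as F

    def ho_biased_attention(q, k, v, hop_dist, hop_bias):
        # q, k, v: (T, d) features of T graph-update tokens.
        # hop_dist: (T, T) integer hop distance between the vertex pairs
        #           of two updates, capped at K (so K+1 buckets).
        # hop_bias: (K+1,) learned scalar bias per hop-distance bucket.
        d = q.shape[-1]
        logits = (q @ k.T) / d ** 0.5         # standard scaled dot-product
        logits = logits + hop_bias[hop_dist]  # inject HO structural signal
        return F.softmax(logits, dim=-1) @ v

    # Toy usage: 6 update tokens, 16-dim features, hops capped at K = 3.
    T, d, K = 6, 16, 3
    q, k, v = (torch.randn(T, d) for _ in range(3))
    hop_dist = torch.randint(0, K + 1, (T, T))
    hop_bias = torch.nn.Parameter(torch.zeros(K + 1))
    print(ho_biased_attention(q, k, v, hop_dist, hop_bias).shape)  # (6, 16)

The (T, T) bias term is also where the memory pressure mentioned above comes from; the hierarchical attention schemes the abstract refers to shrink this footprint by not materializing the full matrix.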
Practice of streaming processing of dynamic graphs: concepts, models, and systems
http://arxiv.org/abs/1912.12740 (published version)