Graph Neural Networks (GNNs) have emerged as a powerful representation
learning framework for graph-structured data. A key limitation of conventional GNNs is that they represent each node with a single feature vector, potentially overlooking fine-grained information carried by individual node features. Here,
we propose an Attention-based Message-Passing layer for GNNs (AMPNet) that
encodes individual features per node and models feature-level interactions
through cross-node attention during message-passing steps. We demonstrate the
abilities of AMPNet through extensive benchmarking on real-world biological
systems such as fMRI brain activity recordings and spatial genomic data,
improving over existing baselines by 20% on fMRI signal reconstruction, with a further 8% improvement when positional embeddings are added. Finally, we
validate the ability of AMPNet to uncover meaningful feature-level interactions
through case studies on biological systems. We anticipate that our architecture
will be highly applicable to graph-structured data where node entities
encompass rich feature-level information.
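
To make the mechanism described above concrete, the following is a minimal sketch of a feature-level, cross-node attention message-passing step, written only from the description in this abstract; the class and argument names (FeatureLevelAttention, embed_dim, num_heads) and the mean aggregation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): each scalar node feature is
# embedded as a token, and a target node's feature tokens attend to a
# neighboring source node's feature tokens during message passing.
import torch
import torch.nn as nn


class FeatureLevelAttention(nn.Module):
    """Hypothetical feature-level cross-node attention message-passing layer."""

    def __init__(self, num_features: int, embed_dim: int = 32, num_heads: int = 4):
        super().__init__()
        # One learnable embedding per feature, combined with the feature's value.
        self.feature_embed = nn.Embedding(num_features, embed_dim)
        self.value_proj = nn.Linear(1, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, num_features] -> tokens: [num_nodes, num_features, embed_dim]
        idx = torch.arange(x.size(1), device=x.device)
        return self.feature_embed(idx) + self.value_proj(x.unsqueeze(-1))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # edge_index: [2, num_edges] with rows (source, target) node indices.
        tokens = self.encode(x)
        src, dst = edge_index
        # Cross-node attention: target feature tokens query source feature tokens.
        messages, _ = self.attn(tokens[dst], tokens[src], tokens[src])
        # Mean-aggregate messages per target node (a simplifying assumption).
        out = torch.zeros_like(tokens)
        out.index_add_(0, dst, messages)
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        return out / deg.view(-1, 1, 1)


# Tiny usage example on a 3-node cycle with 5 features per node.
if __name__ == "__main__":
    x = torch.randn(3, 5)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
    layer = FeatureLevelAttention(num_features=5)
    print(layer(x, edge_index).shape)  # torch.Size([3, 5, 32])
```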