Due to its geometric properties, hyperbolic space can support high-fidelity
embeddings of tree- and graph-structured data. As a result, various hyperbolic
networks have been developed which outperform Euclidean networks on many tasks:
e.g. hyperbolic graph convolutional networks (GCN) can outperform vanilla GCN
on some graph learning tasks. However, most existing hyperbolic networks are
complicated, computationally expensive, and numerically unstable, and these
shortcomings prevent them from scaling to large graphs. As more and more
hyperbolic networks are proposed, it becomes less and less clear which
components are actually essential for good performance. In this paper, we propose
HyLa, a simple and minimal approach to using hyperbolic space in networks: HyLa
maps once from a hyperbolic-space embedding to Euclidean space via the
eigenfunctions of the Laplacian operator in hyperbolic space. We evaluate
HyLa on graph learning tasks including node classification and text
classification, where HyLa can be combined with any graph neural network.
When used with a linear model, HyLa shows significant improvements over
hyperbolic networks and other baselines.
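To make the core idea concrete, the following is a minimal sketch of such a feature map, assuming the Poincaré-ball model and the classical (Helgason-style) form of hyperbolic Laplacian eigenfunctions, exp(((d-1)/2 + iλ)·⟨z, b⟩), where ⟨z, b⟩ = log((1 − |z|²)/|z − b|²) is the Poisson-kernel exponent for an ideal boundary point b. The specific sampling choices (Gaussian λ, uniform phases) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def hyla_features(Z, num_features=4, seed=0):
    """Map Poincare-ball embeddings Z (m x d, ||z|| < 1) to Euclidean
    features via (real parts of) hyperbolic Laplacian eigenfunctions.

    Assumed eigenfunction form: exp(((d-1)/2 + i*lam) * <z, b>), with
    <z, b> = log((1 - |z|^2) / |z - b|^2) for a boundary point |b| = 1.
    Random lam, b, theta give random features, analogous to random
    Fourier features in Euclidean space (illustrative sketch only).
    """
    rng = np.random.default_rng(seed)
    m, d = Z.shape
    # Random ideal points on the boundary sphere |b| = 1.
    B = rng.normal(size=(num_features, d))
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    lam = rng.normal(size=num_features)                   # eigenvalue parameters
    theta = rng.uniform(0, 2 * np.pi, size=num_features)  # random phases
    # Poisson-kernel exponent <z, b> for every (embedding, boundary) pair.
    sq = np.sum((Z[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    inner = np.log((1 - np.sum(Z ** 2, axis=1, keepdims=True)) / sq)
    # Real part of the eigenfunction, with growth factor exp(((d-1)/2) <z,b>).
    return np.exp((d - 1) / 2 * inner) * np.cos(lam * inner + theta)
```

The resulting m × num_features matrix is ordinary Euclidean data, so it can be fed directly into a linear model or any graph neural network, which is what makes the approach minimal: hyperbolic geometry is used exactly once, in the feature map.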