Many anatomical structures can be described by surface or volume meshes.
Machine learning is a promising tool to extract information from these 3D
models. However, high-fidelity meshes often contain hundreds of thousands of
vertices, which creates unique challenges in building deep neural network
architectures. Furthermore, patient-specific meshes may not be canonically
aligned, which limits the generalisation of machine learning algorithms. We
propose LaB-GATr, a transformer neural network with geometric tokenisation that
can effectively learn with large-scale (bio-)medical surface and volume meshes
through sequence compression and interpolation. Our method extends the recently
proposed geometric algebra transformer (GATr) and thus respects all Euclidean
symmetries, i.e. rotation, translation and reflection, thereby mitigating
the problem of canonical alignment between patients. LaB-GATr achieves
state-of-the-art results on three tasks in cardiovascular hemodynamics
modelling and neurodevelopmental phenotype prediction, featuring meshes of up
to 200,000 vertices. Our results demonstrate that LaB-GATr is a powerful
architecture for learning with high-fidelity meshes, which has the potential to
enable interesting downstream applications. Our implementation is publicly
available.
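To make the idea of sequence compression and interpolation concrete, the following is a minimal sketch of geometric tokenisation in plain PyTorch: vertices are pooled onto a much smaller set of token centres before the transformer and interpolated back to full resolution afterwards. The sampling, pooling, and inverse-distance interpolation schemes here are illustrative assumptions, not the authors' GATr-based implementation, which operates on geometric-algebra features.

```python
# Sketch: compress a large vertex sequence into few tokens, then interpolate
# back. Hypothetical stand-in for LaB-GATr's learned tokenisation.
import torch


def farthest_point_sample(pos: torch.Tensor, num_tokens: int) -> torch.Tensor:
    """Pick `num_tokens` well-spread vertex indices from positions (N, 3)."""
    n = pos.shape[0]
    idx = torch.zeros(num_tokens, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    idx[0] = torch.randint(n, (1,))
    for i in range(1, num_tokens):
        # Track each vertex's distance to the nearest centre chosen so far.
        dist = torch.minimum(dist, (pos - pos[idx[i - 1]]).norm(dim=-1))
        idx[i] = dist.argmax()
    return idx


def compress(feat, pos, centre_idx):
    """Mean-pool vertex features (N, C) onto their nearest token centre."""
    assign = torch.cdist(pos, pos[centre_idx]).argmin(dim=-1)  # (N,)
    pooled = torch.zeros(centre_idx.shape[0], feat.shape[-1]).index_reduce_(
        0, assign, feat, reduce="mean", include_self=False)
    return pooled  # (num_tokens, C): transformer input


def interpolate(tokens, pos, centre_idx, k=3):
    """Inverse-distance-weighted upsampling from tokens back to all vertices."""
    d, nn = torch.cdist(pos, pos[centre_idx]).topk(k, dim=-1, largest=False)
    w = 1.0 / (d + 1e-8)
    w = w / w.sum(dim=-1, keepdim=True)               # (N, k)
    return (w.unsqueeze(-1) * tokens[nn]).sum(dim=1)  # (N, C)


# Example: 50k vertices compressed to 512 tokens and back.
pos = torch.rand(50_000, 3)
feat = torch.rand(50_000, 16)
centres = farthest_point_sample(pos, 512)
tokens = compress(feat, pos, centres)    # (512, 16) for the transformer
out = interpolate(tokens, pos, centres)  # (50000, 16) vertex-wise output
```

Because the token centres are a subset of the vertex positions and all operations depend only on pairwise distances, this pooling/unpooling scheme is itself invariant to rotation, translation and reflection, consistent with the equivariance the full model inherits from GATr.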