Hypergraphs are a powerful abstraction for representing higher-order
interactions between entities of interest. To exploit these relationships in
making downstream predictions, a variety of hypergraph neural network
architectures have recently been proposed, in large part building upon
precursors from the more traditional graph neural network (GNN) literature.
In contrast, in this paper we begin by presenting an expressive family
of parameterized, hypergraph-regularized energy functions. We then demonstrate
how minimizers of these energies effectively serve as node embeddings that,
when paired with a parameterized classifier, can be trained end-to-end via a
supervised bilevel optimization process. Subsequently, we draw parallels
between the implicit architecture of the predictive models that emerge from the
proposed bilevel hypergraph optimization and existing GNN architectures in common use.
Empirically, we demonstrate state-of-the-art results on various hypergraph node
classification benchmarks. Code is available at
https://github.com/yxzwang/PhenomNN.
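
To make the pipeline described above concrete, the following is a minimal, hypothetical sketch of the bilevel idea: an inner loop that (approximately) minimizes a hypergraph-regularized energy to produce node embeddings, and an outer loop that trains the energy parameters and a classifier against a supervised loss. The specific quadratic energy (a feature-fidelity term plus a clique-expansion smoothness term), and all names and sizes, are illustrative assumptions, not the authors' PhenomNN implementation.

```python
# Hypothetical sketch of the bilevel scheme from the abstract; not the
# authors' PhenomNN code. Assumed: incidence matrix H (nodes x hyperedges),
# node features X, labels y, and a simple quadratic energy.
import torch
import torch.nn.functional as F

def energy(Y, X, H, W):
    # Illustrative hypergraph-regularized energy: a fidelity term tying
    # embeddings Y to (projected) features, plus a smoothness term that
    # encourages nodes sharing a hyperedge to agree.
    A = H @ H.T                        # clique expansion of the hypergraph
    L = torch.diag(A.sum(dim=1)) - A   # Laplacian of the expansion (PSD)
    fidelity = ((Y - X @ W) ** 2).sum()
    smoothness = torch.trace(Y.T @ L @ Y)
    return fidelity + smoothness

def inner_minimize(X, H, W, steps=20, lr=0.1):
    # Inner loop: unrolled gradient descent on the energy over Y. The
    # unrolled steps play the role of the implicit "layers".
    Y = (X @ W).detach().clone().requires_grad_(True)
    for _ in range(steps):
        g, = torch.autograd.grad(energy(Y, X, H, W), Y, create_graph=True)
        Y = Y - lr * g
    return Y

# Outer loop: train the energy parameters W and a linear classifier
# end-to-end on a toy random hypergraph.
n, e, d, c = 8, 4, 16, 3
X = torch.randn(n, d)
H = (torch.rand(n, e) > 0.5).float()   # random incidence matrix
y = torch.randint(0, c, (n,))
W = torch.nn.Parameter(0.1 * torch.randn(d, d))
clf = torch.nn.Linear(d, c)
opt = torch.optim.Adam([W, *clf.parameters()], lr=1e-2)
for _ in range(50):
    Y = inner_minimize(X, H, W)        # embeddings = (approximate) energy minimizers
    loss = F.cross_entropy(clf(Y), y)  # supervised outer objective
    opt.zero_grad()
    loss.backward()                    # differentiates through the unrolled inner loop
    opt.step()
```

Unrolling the inner gradient steps, as done here, is one simple way to differentiate through the inner minimization; viewing those unrolled steps as layers corresponds to the implicit-architecture perspective the abstract alludes to.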