Learning causal structure poses a combinatorial search problem that typically
involves evaluating structures using a score or independence test. The
resulting search is costly, and designing suitable scores or tests that capture
prior knowledge is difficult. In this work, we propose to amortize the process
of causal structure learning. Rather than searching over causal structures
directly, we train a variational inference model to predict the causal
structure from observational or interventional data. Our inference model acquires
domain-specific inductive bias for causal discovery solely from data generated
by a simulator. This allows us to bypass both the search over graphs and the
hand-engineering of suitable score functions. Moreover, the architecture of our
inference model is permutation invariant w.r.t. the data points and permutation
equivariant w.r.t. the variables, facilitating generalization to significantly
larger problem instances than seen during training. On synthetic data and
semi-synthetic gene expression data, our models exhibit robust generalization
capabilities under substantial distribution shift and significantly outperform
existing algorithms, especially in the challenging genomics domain.
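The symmetry properties claimed above can be made concrete with a toy sketch. The function below is *not* the paper's neural inference model; it is a minimal, hypothetical stand-in (using simple pairwise correlation statistics in NumPy) that illustrates what it means for an edge-probability predictor to be permutation invariant w.r.t. data points and permutation equivariant w.r.t. variables.

```python
import numpy as np

def predict_edge_probs(X):
    """Toy edge-probability predictor for illustration only.

    X: (n_samples, d) data matrix.
    Returns a (d, d) matrix of edge probabilities.
    """
    # Pooling statistics over the sample axis makes the output
    # invariant to any permutation of the data points.
    mu = X.mean(axis=0)
    sd = X.std(axis=0) + 1e-8
    Z = (X - mu) / sd

    # Pairwise statistics built symmetrically from per-variable features
    # are equivariant under a simultaneous permutation of rows/columns.
    C = Z.T @ Z / X.shape[0]

    # Suppress self-loops and squash to (0, 1).
    logits = np.abs(C) - 1e9 * np.eye(C.shape[0])
    return 1.0 / (1.0 + np.exp(-logits))
```

Permuting the rows of `X` leaves the output unchanged (invariance), while permuting its columns permutes the rows and columns of the output identically (equivariance); this is the structural property that lets such a model be applied to variable sets larger than those seen during training.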