In adversarial training, a set of models learn together by pursuing competing
goals, usually defined on single data instances. However, in relational
learning and other non-i.i.d. domains, goals can also be defined over sets of
instances. For example, a link predictor for the is-a relation needs to be
consistent with the transitivity property: if is-a(x_1, x_2) and is-a(x_2, x_3)
hold, then is-a(x_1, x_3) needs to hold as well.
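Written as a function-free Horn clause, with illustrative variable names (the
abstract itself does not fix a notation), this assumption reads:

  \text{is-a}(X_1, X_3) \leftarrow \text{is-a}(X_1, X_2) \land \text{is-a}(X_2, X_3)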
Here we use such assumptions to derive an inconsistency loss, which measures
the degree to which the model violates them on an adversarially generated set
of examples. The training objective is defined as a minimax problem: an
adversary finds the most offending adversarial examples by maximising the
inconsistency loss, while the model is trained by jointly minimising a
supervised loss and the inconsistency loss on those adversarial examples.
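Schematically, with assumed notation (\theta the model parameters, \mathcal{S}
the adversarial example set, \mathcal{L}_{\mathrm{sup}} and
\mathcal{L}_{\mathrm{inc}} the supervised and inconsistency losses, and
\lambda a weighting hyperparameter; none of these symbols are fixed by the
abstract), the objective is:

  \min_{\theta} \; \mathcal{L}_{\mathrm{sup}}(\theta) + \lambda \max_{\mathcal{S}} \mathcal{L}_{\mathrm{inc}}(\theta; \mathcal{S})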
This yields the first method that can use function-free Horn clauses (as in
Datalog) to regularise any neural link predictor, with complexity independent
of the domain size. We show that for several link prediction models, the
optimisation problem faced by the adversary has efficient closed-form
solutions.
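As a minimal sketch of the training scheme, the following assumes a
DistMult-style scoring function and approximates the adversary by gradient
ascent on free adversarial entity embeddings; the scoring function, loss
shape, optimiser settings, and constraint set are all illustrative
assumptions, and the paper's closed-form solutions are model-specific rather
than reproduced here.

  import torch

  dim = 50
  r = torch.nn.Parameter(torch.randn(dim))  # embedding of the is-a relation

  def score(x, y, rel):
      # DistMult-style trilinear score <x, rel, y> (an assumed model choice).
      return (x * rel * y).sum(-1)

  def inconsistency_loss(x1, x2, x3, rel):
      # Penalise transitivity violations: the clause body's score should
      # not exceed the head's score.
      body = torch.min(score(x1, x2, rel), score(x2, x3, rel))
      head = score(x1, x3, rel)
      return torch.relu(body - head)

  # Adversary: maximise the inconsistency loss over a small set of free
  # adversarial entity embeddings via gradient ascent (an approximation;
  # the paper derives closed-form maximisers for several models).
  adv = [torch.randn(dim, requires_grad=True) for _ in range(3)]
  adv_opt = torch.optim.SGD(adv, lr=0.1)
  for _ in range(10):
      adv_opt.zero_grad()
      (-inconsistency_loss(*adv, r)).backward()
      adv_opt.step()
      with torch.no_grad():
          for a in adv:  # keep embeddings on the unit sphere (an assumption)
              a.div_(a.norm())

  # Model update: minimise the inconsistency loss on the adversarial
  # examples; a supervised loss on observed triples would be added here.
  model_opt = torch.optim.SGD([r], lr=0.01)
  model_opt.zero_grad()
  inconsistency_loss(*(a.detach() for a in adv), r).backward()
  model_opt.step()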
Experiments on link prediction benchmarks indicate that, given suitable prior
knowledge, our method can significantly improve neural link predictors on all
relevant metrics.

Comment: Proceedings of the 33rd Conference on Uncertainty in Artificial
Intelligence (UAI), 2017