Classical distillation methods transfer representations from a "teacher"
neural network to a "student" network by matching their output activations.
Recent methods also match the Jacobians, i.e., the gradients of the output
activations with respect to the input. However, this involves making some
ad hoc decisions, in particular, the choice of the loss function.
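As a rough illustration (the notation here is ours, not fixed by the paper), a Jacobian-matching objective augments the usual output-matching term with a penalty on the discrepancy between the student and teacher input gradients,
\[
\mathcal{L}(x) \;=\; \ell\big(f_s(x),\, f_t(x)\big) \;+\; \lambda\, \big\| \nabla_x f_s(x) - \nabla_x f_t(x) \big\|_F^2,
\]
where $f_s$ and $f_t$ denote the student and teacher, and the choice of $\ell$ and of the norm on the Jacobian term is exactly the ad hoc decision referred to above.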
In this paper, we first establish an equivalence between Jacobian matching
and distillation with input noise, from which we derive appropriate loss
functions for Jacobian matching. We then rely on this analysis to apply
Jacobian matching to transfer learning by establishing the equivalence of a
recent transfer learning procedure to distillation.
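The intuition behind the noise equivalence admits a short first-order sketch (again in our own notation, with $\xi$ a zero-mean, unit-covariance perturbation and $\sigma$ the noise scale): expanding both networks around $x$ gives
\[
\mathbb{E}_{\xi}\, \big\| f_s(x+\sigma\xi) - f_t(x+\sigma\xi) \big\|_2^2 \;\approx\; \big\| f_s(x) - f_t(x) \big\|_2^2 \;+\; \sigma^2\, \big\| \nabla_x f_s(x) - \nabla_x f_t(x) \big\|_F^2,
\]
since the cross term vanishes under $\mathbb{E}[\xi] = 0$. In expectation, distillation on noise-perturbed inputs therefore adds a Jacobian-matching penalty to the standard distillation loss.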
Finally, we show experimentally on standard image datasets that Jacobian-based
penalties improve distillation, robustness to noisy inputs, and transfer
learning.