Many gradient-based meta-learning methods assume a set of parameters that do
not participate in the inner optimization, which can be regarded as
hyperparameters. Although such hyperparameters can be optimized using
existing gradient-based hyperparameter optimization (HO) methods, these
methods suffer from the following issues: unrolled differentiation methods do
not scale well to high-dimensional hyperparameters or long horizons,
Implicit Function Theorem (IFT) based methods are restrictive for online
optimization, and short-horizon approximations suffer from short-horizon
bias. In this work, we propose a novel HO method that overcomes these
limitations by approximating the second-order term with knowledge
distillation. Specifically, we parameterize a single Jacobian-vector product
(JVP) for each HO step and minimize its distance to the true second-order
term. Our method allows online optimization and is scalable with respect to
both the hyperparameter dimension and the horizon length. We demonstrate the
effectiveness of our method on two different meta-learning methods and three
benchmark datasets.
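
To make the distillation idea concrete, the following is a minimal sketch,
not the paper's implementation: it assumes JAX, a one-step inner horizon (so
the true hypergradient, whose only nonzero part here is the second-order
indirect term, is cheap to compute as a teacher), and illustrative names
(inner_loss, val_loss, surrogate, phi) that are not taken from the paper.

    # Hedged sketch of distilling the second-order term of a hypergradient.
    import jax
    import jax.numpy as jnp

    def inner_loss(theta, lam, x, y):
        # Toy inner (training) objective; the hyperparameter `lam` sets a
        # per-dimension L2 penalty strength.
        pred = x @ theta
        return jnp.mean((pred - y) ** 2) + jnp.sum(jax.nn.softplus(lam) * theta ** 2)

    def val_loss(theta, x, y):
        # Outer (validation) objective; note it does not depend on `lam`
        # directly, so the hypergradient equals the indirect second-order term.
        return jnp.mean((x @ theta - y) ** 2)

    def one_step(theta, lam, x, y, lr=0.1):
        # One inner SGD step; theta' depends on lam through the gradient.
        return theta - lr * jax.grad(inner_loss)(theta, lam, x, y)

    def true_hypergrad(theta, lam, xt, yt, xv, yv):
        # Exact hypergradient for a one-step horizon, obtained by
        # differentiating through the unrolled update (teacher signal).
        def outer(l):
            return val_loss(one_step(theta, l, xt, yt), xv, yv)
        return jax.grad(outer)(lam)

    def surrogate(phi, v):
        # Parameterized stand-in for the second-order term: a learned linear
        # map applied to the outer gradient v (purely illustrative).
        return phi @ v

    def distill_loss(phi, theta, lam, xt, yt, xv, yv):
        # Knowledge-distillation objective: match the surrogate's output to
        # the true second-order part of the hypergradient.
        theta_new = one_step(theta, lam, xt, yt)
        v = jax.grad(val_loss)(theta_new, xv, yv)            # outer gradient w.r.t. theta'
        target = true_hypergrad(theta, lam, xt, yt, xv, yv)  # teacher second-order term
        return jnp.sum((surrogate(phi, v) - target) ** 2)

In this sketch one would update `phi` with jax.grad(distill_loss) at each HO
step and then use surrogate(phi, v) in place of the expensive second-order
computation when updating the hyperparameters; how the surrogate is actually
parameterized and trained in the paper is not specified by the abstract.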