Most program induction approaches require predefined, often hand-engineered,
background knowledge (BK). To overcome this limitation, we explore methods to
automatically acquire BK through multi-task learning. In this approach, a
learner adds learned programs to its BK so that they can be reused to help
learn other programs. To improve learning performance, we explore the idea of
forgetting, where a learner can additionally remove programs from its BK. We
consider forgetting in an inductive logic programming (ILP) setting. We show
that forgetting can significantly reduce both the size of the hypothesis space
and the sample complexity of an ILP learner. We introduce Forgetgol, a
multi-task ILP learner which supports forgetting. We experimentally compare
Forgetgol against approaches that either remember or forget everything. Our
experimental results show that Forgetgol outperforms the alternative approaches
when learning from over 10,000 tasks.
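
Purely as an illustration of the remember/forget loop described above (not the Forgetgol implementation itself), a minimal sketch might look as follows; the names `learn_program`, `forget_policy`, and `tasks` are hypothetical placeholders for the ILP learner, the forgetting strategy, and the task stream.

def multitask_learn(tasks, learn_program, forget_policy, initial_bk=()):
    """Learn one program per task, growing and pruning the BK as we go."""
    bk = list(initial_bk)        # background knowledge: reusable learned programs
    solutions = {}
    for task in tasks:
        program = learn_program(task, bk)  # induce a program using the current BK
        if program is not None:
            solutions[task] = program
            bk.append(program)             # remember: add to BK for reuse on later tasks
        bk = forget_policy(bk)             # forget: drop programs judged unhelpful
    return solutions, bk

Under this sketch, remembering everything corresponds to `forget_policy = lambda bk: bk`, and forgetting everything to `forget_policy = lambda bk: []`; Forgetgol sits between these extremes by selectively removing programs from its BK.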