Forgetting to learn logic programs
Most program induction approaches require predefined, often hand-engineered,
background knowledge (BK). To overcome this limitation, we explore methods to
automatically acquire BK through multi-task learning. In this approach, a
learner adds learned programs to its BK so that they can be reused to help
learn other programs. To improve learning performance, we explore the idea of
forgetting, where a learner can additionally remove programs from its BK. We
consider forgetting in an inductive logic programming (ILP) setting. We show
that forgetting can significantly reduce both the size of the hypothesis space
and the sample complexity of an ILP learner. We introduce Forgetgol, a
multi-task ILP learner which supports forgetting. We experimentally compare
Forgetgol against approaches that either remember or forget everything. Our
experimental results show that Forgetgol outperforms the alternative approaches
when learning from over 10,000 tasks.
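As a rough illustration of the remember-and-forget loop described in the abstract, here is a minimal Python sketch. The helpers learn_task and forget_policy are hypothetical stand-ins, not Forgetgol's actual interface; only the overall loop structure is taken from the abstract.

    def multitask_learn(tasks, learn_task, forget_policy):
        """Learn tasks in sequence, growing and pruning background knowledge (BK)."""
        bk = set()  # BK: programs learned on earlier tasks
        for task in tasks:
            program = learn_task(task, bk)  # the learner may reuse BK programs
            if program is not None:
                bk.add(program)             # remember: add the learned program to BK
            bk = forget_policy(bk)          # forget: remove programs to shrink
                                            # the hypothesis space
        return bk

    # The baselines compared against in the abstract correspond to trivial policies:
    def remember_everything(bk):
        return bk

    def forget_everything(bk):
        return set()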
Clustering-based relational unsupervised representation learning with an explicit distributed representation
Latent features learned by deep learning approaches have proven to be a
powerful tool for machine learning. They serve as a data abstraction that
makes learning easier by capturing regularities in data explicitly. Their
benefits motivated their adaptation to the relational learning context. In our
previous work, we introduced an approach that learns relational latent
features by means of clustering instances and their relations. The major
drawback of latent representations is that they are often black-box and
difficult to interpret. This work addresses these issues and shows that (1)
latent features created by clustering are interpretable and capture
interesting properties of data; (2) they identify local regions of instances
that match the label well, which partially explains their benefit; and (3)
although the number of latent features generated by this approach is large,
many of them are often highly redundant and can be removed without hurting
performance much.
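The construction can be illustrated in a simplified, propositional form: cluster the instances, then expose cluster membership as explicit binary features. The sketch below uses scikit-learn's KMeans on toy vectors; the paper itself clusters relational instances and their relations, which this simplification omits.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))  # toy instance descriptions (placeholder data)

    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
    # One explicit, interpretable latent feature per cluster:
    # "this instance belongs to cluster k".
    latent = np.eye(kmeans.n_clusters, dtype=int)[kmeans.labels_]  # shape (100, 8)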
Inductive logic programming at 30: a new introduction
Inductive logic programming (ILP) is a form of machine learning. The goal of
ILP is to induce a hypothesis (a set of logical rules) that generalises
training examples. As ILP turns 30, we provide a new introduction to the field.
We introduce the necessary logical notation and the main learning settings;
describe the building blocks of an ILP system; compare several systems on
several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol);
highlight key application areas; and, finally, summarise current limitations
and directions for future research.
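To make the stated goal concrete, here is a toy example, written in Python rather than a logic-programming system: a single hypothesised rule, grandparent(X,Y) :- parent(X,Z), parent(Z,Y), that covers illustrative positive examples and none of the negatives. The facts and examples are invented for this sketch, not drawn from the paper.

    parent = {("ann", "bob"), ("bob", "carl"), ("bob", "dana")}  # background facts

    def grandparent(x, y):
        # Hypothesis: grandparent(X,Y) :- parent(X,Z), parent(Z,Y).
        return any((x, z) in parent and (z, y) in parent for (_, z) in parent)

    positives = {("ann", "carl"), ("ann", "dana")}
    negatives = {("bob", "ann"), ("carl", "dana")}
    assert all(grandparent(x, y) for x, y in positives)      # covers all positives
    assert not any(grandparent(x, y) for x, y in negatives)  # covers no negatives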