Knowledge Refactoring for Inductive Program Synthesis
Humans constantly restructure knowledge to use it more efficiently. Our goal
is to give a machine learning system similar abilities so that it can learn
more efficiently. We introduce the knowledge refactoring problem,
where the goal is to restructure a learner's knowledge base to reduce its size
and to minimise redundancy in it. We focus on inductive logic programming,
where the knowledge base is a logic program. We introduce Knorf, a system which
solves the refactoring problem using constraint optimisation. We evaluate our
approach on two program induction domains: real-world string transformations
and building Lego structures. Our experiments show that learning from
refactored knowledge can improve predictive accuracies fourfold and reduce
learning times by half.
Comment: 7 pages, 6 figures
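The core idea can be illustrated with a toy sketch. This is our own simplified encoding, not Knorf's constraint-optimisation formulation (which the abstract does not detail): rules are modelled as a head plus a list of body literals, and a body fragment shared by several rules is extracted into an invented predicate, shrinking the knowledge base while preserving its meaning.

```python
# Toy illustration of knowledge refactoring (a hypothetical encoding, not Knorf):
# extract the most frequently shared 2-literal body fragment into a new
# invented predicate so the total number of literals in the KB decreases.

from collections import Counter

def refactor(rules, min_uses=2):
    """Extract the most common 2-literal body fragment into an invented predicate."""
    pairs = Counter()
    for _, body in rules:
        for i in range(len(body) - 1):
            pairs[(body[i], body[i + 1])] += 1
    frag, uses = pairs.most_common(1)[0]
    if uses < min_uses:
        return rules  # nothing worth extracting
    new_pred = "inv1"  # hypothetical name for the invented predicate
    refactored = [(new_pred, list(frag))]  # definition of the invented predicate
    for head, body in rules:
        new_body, i = [], 0
        while i < len(body):
            if tuple(body[i:i + 2]) == frag:
                new_body.append(new_pred)
                i += 2
            else:
                new_body.append(body[i])
                i += 1
        refactored.append((head, new_body))
    return refactored

# Three string-transformation-style rules sharing the fragment uppercase, skip:
kb = [
    ("f", ["uppercase", "skip", "copy"]),
    ("g", ["uppercase", "skip", "drop"]),
    ("h", ["uppercase", "skip"]),
]
smaller = refactor(kb)
# total body literals drop from 8 to 7; the shared fragment now lives in inv1
```

The point of the sketch is only the size argument: each reuse of the invented predicate saves literals, which is what makes refactored knowledge cheaper to learn from.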
Forgetting to learn logic programs
Most program induction approaches require predefined, often hand-engineered,
background knowledge (BK). To overcome this limitation, we explore methods to
automatically acquire BK through multi-task learning. In this approach, a
learner adds learned programs to its BK so that they can be reused to help
learn other programs. To improve learning performance, we explore the idea of
forgetting, where a learner can additionally remove programs from its BK. We
consider forgetting in an inductive logic programming (ILP) setting. We show
that forgetting can significantly reduce both the size of the hypothesis space
and the sample complexity of an ILP learner. We introduce Forgetgol, a
multi-task ILP learner which supports forgetting. We experimentally compare
Forgetgol against approaches that either remember or forget everything. Our
experimental results show that Forgetgol outperforms the alternative approaches
when learning from over 10,000 tasks.
Comment: AAAI2
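Why forgetting helps can be seen with a back-of-the-envelope bound. The numbers below are our own simplification, not Forgetgol's analysis: if the learner may draw clause bodies from p background predicates, the number of distinct bodies of size at most b is a sum of binomial coefficients, and the space of programs with c clauses grows roughly as that count raised to the power c, so halving the BK shrinks the bound dramatically.

```python
# Back-of-the-envelope hypothesis-space bound (our own simplification, not
# Forgetgol's exact result): with p background predicates, count clause bodies
# of size <= b, then raise to the number of clauses c in a program.

from math import comb

def num_bodies(p, b):
    # distinct bodies: choose 1..b predicates out of p (ignoring argument order)
    return sum(comb(p, k) for k in range(1, b + 1))

def hyp_space_bound(p, b=3, c=2):
    return num_bodies(p, b) ** c

before = hyp_space_bound(20)  # learner remembers all 20 learned predicates
after = hyp_space_bound(10)   # learner forgets half of them
# before / after is roughly 60x: forgetting half the BK cuts the bound
# by well over an order of magnitude, which is the intuition behind the
# sample-complexity reduction the abstract reports
```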
Inductive logic programming at 30: a new introduction
Inductive logic programming (ILP) is a form of machine learning. The goal of
ILP is to induce a hypothesis (a set of logical rules) that generalises
training examples. As ILP turns 30, we provide a new introduction to the field.
We introduce the necessary logical notation and the main learning settings;
describe the building blocks of an ILP system; compare several systems on
several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol);
highlight key application areas; and, finally, summarise current limitations
and directions for future research.
Comment: Paper under review
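The basic ILP setting described here, inducing a hypothesis that covers the positive examples and none of the negative ones, can be shown with a deliberately tiny generate-and-test sketch. This is a toy of our own, not how systems such as Aleph or Metagol actually search:

```python
# Toy generate-and-test ILP sketch (illustrative only, not Aleph/TILDE/ASPAL/
# Metagol): enumerate candidate hypotheses (conjunctions of attribute tests)
# and return one consistent with the examples.

from itertools import combinations

def covers(hypothesis, example):
    return all(example.get(attr) == val for attr, val in hypothesis)

def induce(pos, neg, attrs):
    # candidate literals: every attribute-value pair seen in the examples
    literals = [(a, v) for a in attrs for v in {e[a] for e in pos + neg}]
    # search smaller hypotheses first, mimicking a preference for generality
    for size in (1, 2):
        for hyp in combinations(literals, size):
            if all(covers(hyp, e) for e in pos) and not any(covers(hyp, e) for e in neg):
                return hyp
    return None

pos = [{"shape": "square", "colour": "red"}, {"shape": "square", "colour": "blue"}]
neg = [{"shape": "circle", "colour": "red"}]
rule = induce(pos, neg, ["shape", "colour"])
# the induced rule tests only the shape, generalising over colour
```

Real ILP systems replace this brute-force enumeration with logical generality orderings and pruning, but the specification, cover all positives and no negatives, is the same.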
Efficient instance and hypothesis space revision in Meta-Interpretive Learning
Inductive Logic Programming (ILP) is a form of Machine Learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by high expressivity, strong generalisation ability, and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample and learning complexity are respectively polynomial and exponential in the number of clauses. My thesis is that improvements in sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate 1) methods that revise the instance space, 2) methods that revise the hypothesis space, and 3) methods that revise both the instance and the hypothesis spaces, to achieve more efficient MIL.
First, we introduce a method for building training sets with active learning in Bayesian MIL. Instances are selected by maximising entropy. We demonstrate that this method reduces sample complexity and supports efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate that this method is complete and reduces both learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequences of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution to solve more complex tasks. We demonstrate that MIGO significantly outperforms both classical and deep reinforcement learning. The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications, including robotics, modelling of agent strategies, and game playing.
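The entropy-based instance selection mentioned above can be sketched as follows. This is a hypothetical sketch under the assumption that the learner exposes a predictive distribution over labels for each unlabelled instance; the Bayesian MIL posterior itself is not modelled here.

```python
# Sketch of entropy-maximising instance selection for active learning
# (illustrative; assumes predictive label probabilities are available from
# the learner rather than modelling the Bayesian MIL posterior).

from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete label distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def select_instance(candidates):
    """Pick the instance whose predicted label distribution is most uncertain."""
    return max(candidates, key=lambda c: entropy(c["label_probs"]))

pool = [
    {"id": "a", "label_probs": [0.9, 0.1]},  # learner is already confident
    {"id": "b", "label_probs": [0.5, 0.5]},  # maximally uncertain: 1 bit
    {"id": "c", "label_probs": [0.7, 0.3]},
]
chosen = select_instance(pool)
# the maximally uncertain instance "b" is queried next
```

Querying the most uncertain instance is what drives the sample-complexity reduction: each labelled example resolves as much of the learner's uncertainty as possible.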