48 research outputs found

    Forgetting to learn logic programs

    Most program induction approaches require predefined, often hand-engineered, background knowledge (BK). To overcome this limitation, we explore methods to automatically acquire BK through multi-task learning. In this approach, a learner adds learned programs to its BK so that they can be reused to help learn other programs. To improve learning performance, we explore the idea of forgetting, where a learner can additionally remove programs from its BK. We consider forgetting in an inductive logic programming (ILP) setting. We show that forgetting can significantly reduce both the size of the hypothesis space and the sample complexity of an ILP learner. We introduce Forgetgol, a multi-task ILP learner which supports forgetting. We experimentally compare Forgetgol against approaches that either remember or forget everything. Our experimental results show that Forgetgol outperforms the alternative approaches when learning from over 10,000 tasks.
    Comment: AAAI2
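
    To make the effect concrete, here is a minimal sketch of why forgetting helps (the counting model, the BK sizes, and the rule template are illustrative assumptions, not Forgetgol's actual analysis): if a hypothesis is a rule with at most m body literals drawn from p BK predicates, the hypothesis space grows roughly as p**m, so dropping programs that are never reused shrinks the space, and with it the number of examples needed.

        def hypothesis_space(p, m):
            """Rule bodies with m literals, each one of p BK predicates."""
            return p ** m

        # Illustrative BK sizes: remembering every program learned on earlier
        # tasks versus forgetting the ones that were never reused.
        bk_remember, bk_forget = 50, 12

        for m in (2, 3, 4):
            print(m, hypothesis_space(bk_remember, m), hypothesis_space(bk_forget, m))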

    Inductive logic programming at 30

    Inductive logic programming (ILP) is a form of logic-based machine learning. The goal of ILP is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we survey recent work in the field. In this survey, we focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs that generalise from few examples, (iii) new approaches for predicate invention, and (iv) the use of different technologies, notably answer set programming and neural networks. We conclude by discussing some of the current limitations of ILP and directions for future research.
    Comment: Extension of IJCAI20 survey paper. arXiv admin note: substantial text overlap with arXiv:2002.11002, arXiv:2008.0791

    Inductive logic programming at 30: a new introduction

    Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems on several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and, finally, summarise current limitations and directions for future research.
    Comment: Paper under review
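
    To make the learning setting concrete, here is a minimal sketch of the ILP loop (the family facts, the examples, and the two-literal rule template are illustrative assumptions, not one of the systems above): the learner searches a space of candidate rules for one that, together with the background knowledge, covers every positive example and no negative one.

        from itertools import product

        # Background knowledge: parent/2 facts.
        parent = {("ann", "bob"), ("bob", "carl"), ("ann", "dia"), ("dia", "eve")}

        # Training examples for the target predicate grandparent/2.
        pos = {("ann", "carl"), ("ann", "eve")}
        neg = {("ann", "bob"), ("bob", "ann")}

        def covers(rule, x, y):
            """Does grandparent(X,Y) :- p1(X,Z), p2(Z,Y) derive (x, y)?"""
            p1, p2 = rule
            return any((x, z) in p1 and (z, y) in p2 for z in {b for _, b in p1})

        # Hypothesis space: every way of chaining two BK relations.
        for rule in product([parent], repeat=2):
            if all(covers(rule, x, y) for x, y in pos) and \
               not any(covers(rule, x, y) for x, y in neg):
                print("grandparent(X,Y) :- parent(X,Z), parent(Z,Y)")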

    Efficient instance and hypothesis space revision in Meta-Interpretive Learning

    Inductive Logic Programming (ILP) is a form of machine learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by high expressivity, generalisation ability and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample complexity is polynomial, and the learning complexity exponential, in the number of clauses. My thesis is that improvements over the sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate (1) methods that revise the instance space, (2) methods that revise the hypothesis space, and (3) methods that revise both the instance and the hypothesis spaces to achieve more efficient MIL.

    First, we introduce a method for building training sets with active learning in Bayesian MIL, where instances are selected to maximise entropy. We demonstrate that this method can reduce the sample complexity and support efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate that this method is complete and can reduce the learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequences of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution when solving more complex tasks. We demonstrate that MIGO significantly outperforms both classical and deep reinforcement learning.

    The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications, including robotics, modelling of agent strategies, and game playing.
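
    As an illustration of the first contribution, here is a minimal sketch of entropy-based instance selection (the hypotheses, their posterior weights, and the instance pool are illustrative assumptions, not the thesis's Bayesian MIL setting): the learner queries the instance whose label the current posterior is most uncertain about, since that label prunes the most hypothesis mass.

        import math

        # Posterior over three candidate hypotheses (weights sum to 1).
        hypotheses = [
            (lambda x: x % 2 == 0, 0.40),  # "even numbers"
            (lambda x: x % 3 == 0, 0.35),  # "multiples of three"
            (lambda x: x > 4, 0.25),       # "greater than four"
        ]

        def label_entropy(x):
            """Entropy of the predicted label of instance x under the posterior."""
            p_true = sum(w for h, w in hypotheses if h(x))
            return -sum(p * math.log2(p) for p in (p_true, 1 - p_true) if p > 0)

        # Query the instance the posterior is most uncertain about: its label
        # eliminates the most hypothesis mass, whichever way it comes out.
        pool = range(1, 10)
        query = max(pool, key=label_entropy)
        print(query, round(label_entropy(query), 3))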

    Generalisation Through Negation and Predicate Invention

    The ability to generalise from a small number of examples is a fundamental challenge in machine learning. To tackle this challenge, we introduce an inductive logic programming (ILP) approach that combines negation and predicate invention. Combining these two features allows an ILP system to generalise better by learning rules with universally quantified body-only variables. We implement our idea in NOPI, which can learn normal logic programs with predicate invention, including Datalog programs with stratified negation. Our experimental results on multiple domains show that our approach can improve predictive accuracies and learning times.
    Comment: Under peer review
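
    A minimal sketch of the underlying idea (the predicates are illustrative assumptions, not NOPI's output): body variables of a definite clause are existentially quantified, so a single rule cannot state that every element of a collection has a property; inventing a helper predicate and negating it flips the quantifier.

        # Two rules, the second using an invented predicate, the first
        # negating it under stratified negation:
        #
        #   allgood(A) :- not hasbad(A).
        #   hasbad(A)  :- member(B, A), bad(B).

        def bad(b):
            return b < 0  # illustrative property

        def hasbad(a):
            # invented predicate: there exists B in A with bad(B)
            return any(bad(b) for b in a)

        def allgood(a):
            # negating "exists a bad element" yields "all elements are good"
            return not hasbad(a)

        print(allgood([1, 2, 3]), allgood([1, -2, 3]))  # True False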

    A Combined Analytical and Search-Based Approach to the Inductive Synthesis of Functional Programs

    This thesis is concerned with the inductive synthesis of recursive declarative programs, and in particular with the analytical inductive synthesis of functional programs. Program synthesis addresses the problem of (semi-)automatically generating computer programs from specifications. In inductive program synthesis, recursive programs are constructed by generalizing over incomplete specifications such as finite sets of input/output examples (I/O examples). Classical methods for the induction of functional programs are analytical: a recursive function definition is derived by detecting and generalizing recurrent patterns between the given I/O examples. Most recent methods, on the other hand, are based on generate-and-test: they repeatedly generate programs, independently of the provided I/O examples, until a program is found that computes the examples correctly. Analytical methods are much faster than generate-and-test methods, because they do not rely on search in a program space; in exchange, the schemas to which generatable programs must conform are much more restricted.

    This thesis first provides a comprehensive overview of current approaches to and methods for inductive program synthesis. We then present a new algorithm for the inductive synthesis of functional programs that generalizes the analytical approach and combines it with search in a program space. This lifts most of the strong restrictions of analytical methods; at the same time, applying analytical techniques allows large parts of the problem space to be pruned, so that solutions can often be found faster than with generate-and-test methods. By means of several experiments with an implementation of the described algorithm, we demonstrate its capabilities.
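
    A minimal sketch of the analytical step (the example function and the single recursion schema are illustrative assumptions): rather than searching a program space, the learner looks for a recurrent pattern that relates each I/O example to the example for a structurally smaller input.

        # I/O examples of an unknown list function (here: length).
        examples = {(): 0, (9,): 1, (4, 9): 2, (7, 4, 9): 3}

        # Candidate step functions for the schema f(x:xs) = step(x, f(xs)).
        steps = {
            "1 + f(xs)": lambda x, rec: 1 + rec,
            "x + f(xs)": lambda x, rec: x + rec,
        }

        for name, step in steps.items():
            # The recurrent pattern must hold between every example and the
            # example for its tail; no search over full programs is needed.
            if all(out == step(inp[0], examples[inp[1:]])
                   for inp, out in examples.items() if inp):
                print(f"f([])   = {examples[()]}")
                print(f"f(x:xs) = {name}")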

    Hybrid Genetic Relational Search for Inductive Learning

    An important characteristic of all natural systems is the ability to acquire knowledge through experience and to adapt to new situations. Learning is the single unifying theme of all natural systems. One of the basic ways of gaining knowledge is through examples of some concept. For instance, we may learn how to distinguish a dog from other creatures after we have seen a number of creatures and someone (a teacher, or supervisor) has told us which of them are dogs and which are not. This way of learning is called supervised learning.

    Inductive Concept Learning (ICL) constitutes a central topic in machine learning. The problem can be formulated as follows: given a description language used to express possible hypotheses, background knowledge, a set of positive examples, and a set of negative examples, find a hypothesis which covers all positive examples and none of the negative ones. This is a supervised way of learning, since a supervisor has already classified the examples of the concept into positive and negative ones. The learned concept can then be used to classify previously unseen examples. In general, deriving general conclusions from specific observations is called induction; in ICL, concepts are induced from the observation of a limited set of training examples. The process can be seen as a search process: starting from an initial hypothesis, the space of possible hypotheses is searched for one that fits the given set of examples.

    A representation language has to be chosen in order to represent concepts, examples, and the background knowledge. This is an important choice, because it may limit the kinds of concepts we can learn. With a representation language of low expressive power we may not be able to represent some problem domains, because they are too complex for the language adopted. On the other hand, a very expressive language may let us represent every problem domain, but may also give us too much freedom: concepts can be built in so many different ways that finding the right one becomes impossible. We are interested in learning concepts expressed in a fragment of first-order logic (FOL). This subject is known as Inductive Logic Programming (ILP), where the knowledge to be learned is expressed as Horn clauses, which are used in programming languages based on logic programming, like Prolog. Learning systems that use a representation based on first-order logic have been successfully applied to relevant real-life problems, e.g., learning a specific property related to carcinogenicity.

    Learning first-order hypotheses is a hard task, due to the huge search space one has to deal with. The approach used by the majority of ILP systems tries to overcome this problem by using specific search strategies, such as top-down search and the inverse resolution mechanism. However, the greedy selection strategies adopted to reduce the computational effort often render techniques based on this approach incapable of escaping from local optima. An alternative approach is offered by genetic algorithms (GAs). GAs have proved to be successful in solving comparatively hard optimization problems, as well as problems like ICL. GAs are a good approach when the problems to solve are characterized by a high number of variables, by interaction among variables, by mixed types of variables (e.g., numerical and nominal), and by a search space with many local optima. Moreover, it is easy to hybridize GAs with other techniques that are known to be good for solving some classes of problems. Other appealing features of GAs are their intrinsic parallelism and their use of exploration operators, which give them the possibility of escaping from local optima. However, this latter characteristic of GAs is also responsible for their rather poor performance on learning tasks which are easy to tackle by algorithms that use specific search strategies.

    These observations suggest that the two approaches described above, i.e., standard ILP strategies and GAs, are applicable to partly complementary classes of learning problems. More importantly, they indicate that a system incorporating features from both approaches could profit from their different benefits. This motivates the aim of this thesis: to develop a GA-based system for ILP that incorporates the search strategies used in successful ILP systems. Our approach is inspired by memetic algorithms, a population-based search method for combinatorial optimization problems. In evolutionary computation, memetic algorithms are GAs in which individuals can be refined during their lifetime.
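
    A minimal sketch of the memetic scheme (the bitstring encoding and the toy fitness function are illustrative assumptions, not the thesis's hypothesis representation): a standard GA loop in which every offspring is additionally refined by greedy local search before it competes for a place in the population.

        import random

        random.seed(0)
        N, POP, GENS = 20, 12, 30
        fitness = sum  # toy objective: maximize the number of 1s

        def crossover(a, b):
            cut = random.randrange(1, N)
            return a[:cut] + b[cut:]

        def mutate(x):
            i = random.randrange(N)
            return x[:i] + [1 - x[i]] + x[i + 1:]

        def refine(x):
            """Local search: greedily accept single-bit flips that improve fitness."""
            for i in range(N):
                y = x[:i] + [1 - x[i]] + x[i + 1:]
                if fitness(y) > fitness(x):
                    x = y
            return x

        pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
        for _ in range(GENS):
            a, b = random.sample(pop, 2)
            child = refine(mutate(crossover(a, b)))  # the memetic "lifetime" step
            pop.sort(key=fitness)
            if fitness(child) > fitness(pop[0]):
                pop[0] = child                       # replace the current worst

        print(max(map(fitness, pop)))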