
    Logical Reduction of Metarules

    Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and involves a trade-off between efficiency and expressivity: the hypothesis space grows with the number of metarules, so we wish to use fewer of them, but using too few loses expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation-reduced sets of metarules outperform subsumption- and entailment-reduced sets, both in terms of predictive accuracies and learning times.
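
    To make the metarule idea concrete, the sketch below writes four widely used metarules in a Metagol-style Prolog encoding. The names (identity, inverse, precon, chain) follow common MIL usage, but the exact representation varies between systems, so treat this as illustrative rather than as any particular system's format.

        % Second-order clause templates: P, Q, R are predicate variables,
        % A, B, C are first-order variables. Illustrative encoding only.
        metarule(identity, [P,Q],   [P,A,B], [[Q,A,B]]).         % P(A,B) :- Q(A,B)
        metarule(inverse,  [P,Q],   [P,A,B], [[Q,B,A]]).         % P(A,B) :- Q(B,A)
        metarule(precon,   [P,Q,R], [P,A,B], [[Q,A],[R,A,B]]).   % P(A,B) :- Q(A), R(A,B)
        metarule(chain,    [P,Q,R], [P,A,B], [[Q,A,C],[R,C,B]]). % P(A,B) :- Q(A,C), R(C,B)

    Derivation reduction, in these terms, asks whether a metarule is SLD-derivable from the others: resolving chain with itself, for example, yields the longer template P(A,B) :- Q(A,C), R(C,D), S(D,B), which therefore need not be kept as a separate metarule.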

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
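
    As a rough sketch of this loop (not Popper's actual implementation, which generates hypotheses with answer set programming), the Prolog skeleton below shows how the three stages fit together; generate/2, test/4, and constrain/3 are hypothetical placeholders for the stages described above.

        % Hedged sketch of the learning-from-failures loop; the three
        % stage predicates are hypothetical placeholders, not Popper's API.
        learn(Pos, Neg, H) :-
            learn_loop(Pos, Neg, [], H).

        learn_loop(Pos, Neg, Cons, H) :-
            generate(Cons, H0),                 % a hypothesis satisfying all constraints
            test(H0, Pos, Neg, Outcome),        % ok, too_general or too_specific
            (   Outcome == ok
            ->  H = H0                          % entails all Pos, no Neg: done
            ;   constrain(H0, Outcome, NewCon), % e.g. too_general: prune generalisations
                learn_loop(Pos, Neg, [NewCon|Cons], H)
            ).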

    Constructive approaches to Program Induction

    Search is a key technique in artificial intelligence, machine learning, and Program Induction. No matter how efficient a search procedure is, some spaces are too large to search effectively, and the space of programs is among them. In this dissertation we show that in the context of logic-program induction (Inductive Logic Programming, or ILP) it is not necessary to search for a correct program: if one exists, there also exists a unique object that is the most general correct program, and it can be constructed directly, without search, in polynomial time and from a polynomial number of examples. The existence of this unique object, which we term the Top Program because of its maximal generality, does not so much solve the problem of searching a large program space as completely sidestep it, improving the efficiency of the learning task by orders of magnitude commensurate with the complexity of a program-space search. The existence of a unique Top Program, and the ability to construct it with finite resources, relies on imposing, on the hypothesis language from which programs are constructed, a strong inductive bias relevant to the learning task. In common practice in machine learning, Program Induction, and ILP, such inductive bias is selected or created manually by the human user of a learning system, using intuition or knowledge of the problem domain, and takes the form of various kinds of program templates. In this dissertation we show that by abandoning the reliance on such extra-logical devices as program templates, and instead defining inductive bias exclusively as First- and Higher-Order Logic formulae, it is possible to learn inductive bias itself from examples, automatically and efficiently, by Higher-Order Top Program construction.

    In Chapter 4 we describe the Top Program in the context of the Meta-Interpretive Learning (MIL) approach to ILP and describe an algorithm for its construction, the Top Program Construction algorithm (TPC). We prove the efficiency and accuracy of TPC and describe its implementation in a new MIL system called Louise. We support the theoretical results with experiments comparing Louise to the state-of-the-art, search-based MIL system Metagol, and find that Louise improves on Metagol's efficiency and accuracy.

    In Chapter 5 we re-frame MIL as specialisation of metarules, the second-order clauses used as inductive bias in MIL, and prove that problem-specific metarules can be derived by specialisation of maximally general metarules, by MIL itself. We describe a sub-system of Louise, called TOIL, that learns new metarules by MIL, and demonstrate empirically that the metarules learned by TOIL match those selected manually, while maintaining the accuracy and efficiency of learning.
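
    The construction itself can be pictured as a filter over metarule instances. The toy sketch below is not Louise's TPC implementation: instance_of/2 and covers/2 are hypothetical helpers that ground a metarule's predicate variables against the background theory and test whether a clause entails an example, respectively.

        % Toy sketch: keep every metarule instance that covers at least
        % one positive example and no negative example; the union of the
        % kept clauses is the (maximally general) Top Program.
        top_program(Metarules, Pos, Neg, Top) :-
            findall(Clause,
                    ( member(M, Metarules),
                      instance_of(M, Clause),                  % hypothetical helper
                      member(Ep, Pos), covers(Clause, Ep),     % covers some positive
                      \+ (member(En, Neg), covers(Clause, En)) % and no negative
                    ),
                    Top).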

    Inductive logic programming at 30: a new introduction

    Full text link
    Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems on several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and, finally, summarise current limitations and directions for future research.
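
    For readers new to the setting, a minimal worked example, with invented toy facts: given background knowledge about parent/2 and labelled examples of grandparent/2, an ILP system searches for a rule that covers the positive examples and none of the negative ones.

        % Background knowledge (invented for illustration).
        parent(ann, bob).
        parent(bob, carol).
        parent(bob, dan).

        % Positive examples: grandparent(ann, carol), grandparent(ann, dan).
        % Negative example:  grandparent(bob, ann).

        % A hypothesis an ILP system could induce: it entails both
        % positive examples and not the negative one.
        grandparent(A, B) :- parent(A, C), parent(C, B).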