6 research outputs found

    Automatic Inductive Programming Tutorial

    A four-hour tutorial, accepted and delivered at the International Conference in Machine Learning (ICML 2006). Computers that can program themselves are an old dream of Artificial Intelligence, but only recently has there been notable progress. Within Machine Learning, a computer program is the most powerful structure that can be learned, pushing the final goal well beyond neural networks or decision trees. There are currently many separate areas, working independently, related to automatic programming, both deductive and inductive. The first goal of this tutorial is to give attendees a comprehensive view of the main areas related to the automatic induction of programs, a view which is not currently available to the community. ML researchers who do not know about Automatic Programming, or researchers who work in just one of these areas, would benefit from this tutorial. The expressivity of most Machine Learning languages (attribute-value) is essentially equivalent to propositional logic, excluding work on ILP. The second goal of the tutorial is to show how we can go beyond these techniques by extending the expressive power of the representation language. This can be done by adding elements programmers typically use, such as variables, subroutines, loops, and recursion, so that more complex problems can be addressed. The tutorial will start with a short overview of the different areas related to Automatic Programming. Most of the tutorial will focus on evolutionary and search-based techniques for generating programs.
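
    As a toy illustration of the evolutionary, search-based program generation the tutorial focuses on, the Python sketch below evolves arithmetic expression trees against input-output examples. Every name, operator set, and parameter here is illustrative, not taken from the tutorial itself.

        # Minimal sketch of evolutionary program induction: evolve arithmetic
        # expressions (programs) to fit input-output examples.
        import random

        OPS = {'+': lambda a, b: a + b,
               '-': lambda a, b: a - b,
               '*': lambda a, b: a * b}

        def random_expr(depth=3):
            """Grow a random expression tree over x and small constants."""
            if depth == 0 or random.random() < 0.3:
                return random.choice(['x', random.randint(0, 3)])
            op = random.choice(list(OPS))
            return (op, random_expr(depth - 1), random_expr(depth - 1))

        def evaluate(expr, x):
            if expr == 'x':
                return x
            if isinstance(expr, int):
                return expr
            op, left, right = expr
            return OPS[op](evaluate(left, x), evaluate(right, x))

        def fitness(expr, examples):
            """Sum of absolute errors on the examples (lower is better)."""
            return sum(abs(evaluate(expr, x) - y) for x, y in examples)

        def mutate(expr):
            """Replace a random subtree with a freshly grown one."""
            if isinstance(expr, tuple) and random.random() < 0.7:
                op, left, right = expr
                if random.random() < 0.5:
                    return (op, mutate(left), right)
                return (op, left, mutate(right))
            return random_expr(depth=2)

        def evolve(examples, pop_size=200, generations=50):
            population = [random_expr() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=lambda e: fitness(e, examples))
                if fitness(population[0], examples) == 0:
                    break  # a program consistent with all examples was found
                survivors = population[:pop_size // 4]
                population = survivors + [mutate(random.choice(survivors))
                                          for _ in range(pop_size - len(survivors))]
            return population[0]

        # Target concept: f(x) = x * x + 1, specified only through examples.
        examples = [(x, x * x + 1) for x in range(-5, 6)]
        print(evolve(examples))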

    Inductive logic program synthesis with dialogs

    DIALOGS (Dialogue-based Inductive and Abductive LOGic program Synthesizer) is a schema-guided synthesizer of recursive logic programs; it takes the initiative and queries a (possibly computationally naive) specifier for evidence in her/his conceptual language. The specifier must know the answers to such simple queries, because otherwise s/he wouldn't even feel the need for the synthesized program. DIALOGS can be used by any learner (including itself) that detects, or merely conjectures, the necessity of invention of a new predicate. Due to its foundation on a powerful codification of a “recursion-theory” (by means of the template and constraints of a divide-and-conquer schema), DIALOGS needs very little evidence and is very fast. © Springer-Verlag Berlin Heidelberg 1997
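
    As a rough analogy for schema-guided synthesis, the Python sketch below encodes a divide-and-conquer template as a higher-order function whose placeholders (minimal-case test, decomposition, composition) play the role of the schema's open parameters; pinning them down is what a synthesizer's queries to the specifier amount to. DIALOGS itself instantiates a logic-program template under constraints, so this is an illustration, not its implementation.

        # A divide-and-conquer schema: the recursion skeleton is fixed, and
        # synthesis reduces to filling in the placeholder parameters.
        def divide_and_conquer(is_minimal, solve_minimal, decompose, compose):
            """Return a recursive program instantiating the schema."""
            def program(x):
                if is_minimal(x):
                    return solve_minimal(x)
                head, tail = decompose(x)
                return compose(head, program(tail))
            return program

        # Two instantiations of the same template yield concrete programs.
        length = divide_and_conquer(
            is_minimal=lambda xs: xs == [],
            solve_minimal=lambda xs: 0,
            decompose=lambda xs: (xs[0], xs[1:]),
            compose=lambda head, rest: 1 + rest)

        reverse = divide_and_conquer(
            is_minimal=lambda xs: xs == [],
            solve_minimal=lambda xs: [],
            decompose=lambda xs: (xs[0], xs[1:]),
            compose=lambda head, rest: rest + [head])

        print(length([3, 1, 2]))   # -> 3
        print(reverse([3, 1, 2]))  # -> [2, 1, 3]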

    FFNSL: feed-forward neural-symbolic learner

    Logic-based machine learning aims to learn general, interpretable knowledge in a data-efficient manner. However, labelled data must be specified in a structured logical form. To address this limitation, we propose a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FFNSL), that integrates a logic-based machine learning system capable of learning from noisy examples with neural networks, in order to learn interpretable knowledge from labelled unstructured data. We demonstrate the generality of FFNSL on four neural-symbolic classification problems, where different pre-trained neural network models and logic-based machine learning systems are integrated to learn interpretable knowledge from sequences of images. We evaluate the robustness of our framework using images subject to distributional shifts, for which the pre-trained neural networks may predict incorrectly and with high confidence. We analyse the impact that these shifts have on the accuracy of the learned knowledge and on run-time performance, comparing FFNSL to tree-based and pure neural approaches. Our experimental results show that FFNSL outperforms the baselines by learning more accurate and interpretable knowledge with fewer examples.
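
    The feed-forward composition the abstract describes can be sketched as below: a pre-trained network maps raw inputs to symbols, and its (possibly wrong) predictions become confidence-weighted examples for a noise-tolerant symbolic learner. Both components are stubs standing in for, e.g., a CNN and an ILP system; all names are hypothetical, not FFNSL's API.

        # Sketch of a feed-forward neural-symbolic pipeline with stub components.
        def pretrained_network(image):
            """Stand-in for a pre-trained classifier: (label, confidence)."""
            # A real system would run a neural network here.
            return image['true_digit'], 0.9

        def to_weighted_example(label, confidence, context):
            """Turn a neural prediction into a symbolic example, weighted by
            confidence so a noise-tolerant learner can discount mistakes."""
            return {'atom': f'digit({context},{label})',
                    'weight': confidence}

        def symbolic_learner(examples):
            """Stand-in for a logic-based learner over noisy examples."""
            total = sum(e['weight'] for e in examples)
            return f'hypothesis learned from {len(examples)} examples (mass {total:.1f})'

        images = [{'true_digit': d} for d in (3, 1, 4)]
        examples = [to_weighted_example(*pretrained_network(img), context=i)
                    for i, img in enumerate(images)]
        print(symbolic_learner(examples))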

    Efficient instance and hypothesis space revision in Meta-Interpretive Learning

    Inductive Logic Programming (ILP) is a form of Machine Learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by high expressivity, generalisation ability and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample and learning complexity are, respectively, polynomial and exponential in the number of clauses. My thesis is that improvements over the sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate 1) methods that revise the instance space, 2) methods that revise the hypothesis space and 3) methods that revise both the instance and the hypothesis spaces, to achieve more efficient MIL. First, we introduce a method for building training sets with active learning in Bayesian MIL, in which instances are selected by maximising entropy. We demonstrate that this method can reduce the sample complexity and supports efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate that this method is complete and can reduce the learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequence of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution for solving more complex tasks. We demonstrate that MIGO significantly outperforms both classical and deep reinforcement learning. The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications including robotics, modelling of agent strategies and game playing.
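
    A minimal sketch of the entropy-driven instance selection mentioned above, over a toy posterior of three weighted hypotheses; the thesis applies this idea to Bayesian distributions over logic programs, so the encoding below is purely illustrative.

        # Active learning sketch: query the instance whose label the current
        # posterior over hypotheses is most uncertain about.
        import math

        def entropy(p):
            """Binary entropy of the probability that an instance is positive."""
            if p in (0.0, 1.0):
                return 0.0
            return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

        # A toy posterior: hypotheses are predicates on integers, with weights.
        hypotheses = [(lambda x: x % 2 == 0, 0.5),   # "even numbers"
                      (lambda x: x > 3, 0.3),        # "greater than three"
                      (lambda x: True, 0.2)]         # "everything"

        def prob_true(x):
            """Posterior probability that x is a positive instance."""
            return sum(w for h, w in hypotheses if h(x))

        # Labelling the highest-entropy instance is maximally informative and
        # shrinks the hypothesis space fastest, reducing sample complexity.
        candidates = range(10)
        query = max(candidates, key=lambda x: entropy(prob_true(x)))
        print(query, entropy(prob_true(query)))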

    Constructive approaches to Program Induction

    Search is a key technique in artificial intelligence, machine learning and Program Induction. No matter how efficient a search procedure, there exist spaces that are too large to search effectively, and they include the search space of programs. In this dissertation we show that in the context of logic-program induction (Inductive Logic Programming, or ILP) it is not necessary to search for a correct program, because if one exists, there also exists a unique object that is the most general correct program, and that can be constructed directly, without a search, in polynomial time and from a polynomial number of examples. The existence of this unique object, which we term the Top Program because of its maximal generality, does not so much solve the problem of searching a large program space as completely sidestep it, thus improving the efficiency of the learning task by orders of magnitude commensurate with the complexity of a program space search. The existence of a unique Top Program, and the ability to construct it given finite resources, rely on imposing, on the language of hypotheses from which programs are constructed, a strong inductive bias with relevance to the learning task. In common practice in machine learning, Program Induction and ILP, such relevant inductive bias is selected, or created, manually by the human user of a learning system, with intuition or knowledge of the problem domain, and in the form of various kinds of program templates. In this dissertation we show that by abandoning the reliance on such extra-logical devices as program templates, and instead defining inductive bias exclusively as First- and Higher-Order Logic formulae, it is possible to learn inductive bias itself from examples, automatically and efficiently, by Higher-Order Top Program construction. In Chapter 4 we describe the Top Program in the context of the Meta-Interpretive Learning approach to ILP (MIL) and describe an algorithm for its construction, the Top Program Construction algorithm (TPC). We prove the efficiency and accuracy of TPC and describe its implementation in a new MIL system called Louise. We support the theoretical results with experiments comparing Louise to the state-of-the-art, search-based MIL system Metagol, and find that Louise improves on Metagol’s efficiency and accuracy. In Chapter 5 we re-frame MIL as specialisation of metarules, the Second-Order clauses used as inductive bias in MIL, and prove that problem-specific metarules can be derived by specialisation of maximally general metarules, by MIL. We describe a sub-system of Louise, called TOIL, that learns new metarules by MIL, and demonstrate empirically that the metarules learned by TOIL match those selected manually, while maintaining the accuracy and efficiency of learning.
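
    The Python sketch below conveys the Top Program intuition under toy assumptions: candidate clauses are tested independently, and every clause that covers at least one positive and no negative example is kept, with no search over clause combinations. Louise constructs logic-program clauses from metarules, so this encoding is an analogy, not the TPC algorithm itself.

        # Toy Top Program construction: keep each correct clause independently;
        # their union is the maximally general correct program.
        positives = {(1, 2), (2, 3), (1, 3)}          # examples of the target
        negatives = {(2, 1), (3, 3)}
        parent = {(1, 2), (2, 3)}                      # background knowledge

        # Candidate clauses, e.g. instantiations of metarules over the background.
        candidates = {
            'p(X,Y) :- parent(X,Y)': lambda x, y: (x, y) in parent,
            'p(X,Y) :- parent(X,Z), parent(Z,Y)':
                lambda x, y: any((x, z) in parent and (z, y) in parent
                                 for z in (1, 2, 3)),
            'p(X,Y) :- parent(Y,X)': lambda x, y: (y, x) in parent,  # bad clause
        }

        # No combinatorial search: each clause is accepted or rejected alone.
        top_program = [name for name, covers in candidates.items()
                       if any(covers(*e) for e in positives)
                       and not any(covers(*e) for e in negatives)]
        print(top_program)  # the union of all accepted clauses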

    Inductive learning of answer set programs

    The goal of Inductive Logic Programming (ILP) is to find a hypothesis that explains a set of examples in the context of some pre-existing background knowledge. Until recently, most research on ILP targeted learning definite logic programs. This thesis constitutes the first comprehensive work on learning answer set programs, introducing new learning frameworks, theoretical results on the complexity and generality of these frameworks, algorithms for learning ASP programs, and an extensive evaluation of these algorithms. Although there is previous work on learning ASP programs, existing learning frameworks are either brave -- where examples should be explained by at least one answer set -- or cautious -- where examples should be explained by all answer sets. There are cases where brave induction is too weak and cautious induction is too strong. Our proposed frameworks combine brave and cautious learning and can learn ASP programs containing choice rules and constraints. Many applications of ASP use weak constraints to express a preference ordering over the answer sets of a program. Learning weak constraints corresponds to preference learning, which we achieve by introducing ordering examples. We then explore the generality of our frameworks, investigating what it means for a framework to be general enough to distinguish one hypothesis from another. We show that our frameworks are more general than both brave and cautious induction. We also present a new family of algorithms, called ILASP (Inductive Learning of Answer Set Programs), which we prove to be sound and complete. This work concerns learning from both non-noisy and noisy examples. In the latter case, ILASP returns a hypothesis that maximises the coverage of examples while minimising the length of the hypothesis. In our evaluation, we show that ILASP scales to tasks with large numbers of examples, finding accurate hypotheses even in the presence of high proportions of noisy examples.
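
    A small sketch of the difference between brave and cautious coverage, assuming the answer sets of a candidate hypothesis are simply given; ILASP would compute them with an ASP solver, and the atoms here are hypothetical.

        # Brave vs cautious example coverage over a hypothesis's answer sets.
        # Suppose the hypothesis (with the background) has two answer sets,
        # e.g. because it contains a choice rule.
        answer_sets = [{'p', 'q'}, {'p', 'r'}]

        def brave_covers(example, answer_sets):
            """Brave induction: the example holds in at least one answer set."""
            return any(example in a for a in answer_sets)

        def cautious_covers(example, answer_sets):
            """Cautious induction: the example holds in every answer set."""
            return all(example in a for a in answer_sets)

        print(brave_covers('q', answer_sets), cautious_covers('q', answer_sets))
        # -> True False: 'q' is bravely but not cautiously covered, the gap
        #    the combined learning frameworks above are designed to close.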