
    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.
    Comment: See http://www.jair.org/ for any accompanying file

    Logical Reduction of Metarules

    Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation reduced sets of metarules outperform subsumption and entailment reduced sets, both in terms of predictive accuracies and learning times.
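    To make the abstract's central notion concrete: a metarule is a clause template in which the predicate symbols themselves are variables, and the hypothesis space is generated by substituting concrete predicate symbols for them. The sketch below is only an illustration, not any system's implementation; the `grandparent` target and the `mother`/`father` background predicates are hypothetical examples.

    ```python
    from itertools import product

    # The chain metarule, a second-order Horn clause:  P(A,B) :- Q(A,C), R(C,B).
    # P, Q, R are second-order variables ranging over predicate symbols.

    def instantiate_chain(head_pred, body_preds):
        """Substitute concrete predicate symbols for P, Q, R in the chain metarule."""
        q, r = body_preds
        return f"{head_pred}(A,B) :- {q}(A,C), {r}(C,B)."

    # Hypothetical background predicates for a kinship domain.
    predicates = ["mother", "father"]

    # The part of the hypothesis space induced by this one metarule:
    # every clause obtained by grounding its second-order variables.
    hypotheses = [instantiate_chain("grandparent", pair)
                  for pair in product(predicates, repeat=2)]

    for h in hypotheses:
        print(h)
    ```

    With two background predicates this single metarule already yields four candidate clauses; the abstract's trade-off follows directly, since each additional metarule (or predicate) multiplies the number of instantiations the learner must consider.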

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
    Comment: Accepted for the Machine Learning journal
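    The generate-test-constrain loop described in the abstract can be sketched in a deliberately simplified form. This is not Popper's implementation (which combines answer set programming and Prolog, and prunes whole generalisations or specialisations of a failed hypothesis); in this toy version a hypothesis is just a numeric threshold, `entails` is a user-supplied test, and a failure only prunes the failed hypothesis itself.

    ```python
    # Toy sketch of the learning-from-failures loop: generate, test, constrain.
    # Not Popper's actual algorithm; a minimal illustration of the three stages.

    def learn(candidates, entails, pos, neg):
        """Return the first hypothesis entailing all of pos and none of neg.

        candidates  -- iterable of hypotheses (the generate stage)
        entails(h, e) -- True iff hypothesis h entails example e (the test stage)
        """
        pruned = set()  # "constraints" learned from failures (the constrain stage)
        for h in candidates:
            if h in pruned:
                continue  # ruled out by an earlier failure
            too_general = any(entails(h, e) for e in neg)      # entails a negative
            too_specific = not all(entails(h, e) for e in pos)  # misses a positive
            if not too_general and not too_specific:
                return h  # consistent hypothesis found
            # Popper would derive syntactic constraints pruning generalisations
            # (if too general) or specialisations (if too specific); here we
            # simply prune the failed hypothesis itself.
            pruned.add(h)
        return None  # search space exhausted

    # Toy domain: a hypothesis is a threshold t that "entails" x iff x >= t.
    result = learn(range(10), lambda t, x: x >= t, pos=[5, 7], neg=[2])
    print(result)  # → 3
    ```

    Thresholds 0 to 2 are rejected as too general (each entails the negative example 2), so 3 is the first consistent hypothesis. The real gain in Popper comes from the constrain stage pruning many untested hypotheses at once, which this sketch does not capture.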

    Inductive logic programming at 30: a new introduction

    Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems on several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and, finally, summarise current limitations and directions for future research.
    Comment: Paper under review

    Artificial intelligence: Machine learning

    When we began our study of machine learning, we had research by different authors available, without knowing whether any relationship existed among them. After reading and analysing each of the papers, we discovered aspects that could be compared, which showed us the possibility of organising this information around those aspects. The classification chosen to organise the papers is one based on the underlying learning strategy, because it takes into account more aspects of the information provided by a source, which is all that someone wanting to apply learning initially has. For each learning strategy, we describe the methods that implement it, trying to cover as many of them as possible in order to allow a good comparison, at the cost of exhaustiveness in each description. From this analysis, we considered it useful not only to organise the scattered information but also to develop a series of comparisons describing the differences and similarities between the various approaches to machine learning, which we regard as a necessary contribution. The result of this analysis is a survey of machine learning, whose purpose is to assist anyone who needs to apply this kind of learning in a particular domain. Faced with a real problem, the survey tries to help discover the most appropriate method for solving it: by comparing strategies, each of which emphasises particular characteristics of the problems it solves, we try to find the one best suited to the problem at hand. Once a strategy has been selected, we proceed in the same way, comparing the methods that implement that strategy in order to select a particular one. This procedure will not always yield a single alternative.
    To test the usefulness of the survey for solving a machine learning problem, we considered it necessary to apply it to a real domain, concluding with a prototype that demonstrates its practical feasibility. Thesis digitised in SEDICI thanks to the collaboration of the Library of the Facultad de Informática, Facultad de Ciencias Exactas.