
    Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs

    This paper presents a method for inducing logic programs from examples that learns a new class of concepts called first-order decision lists, defined as ordered lists of clauses each ending in a cut. The method, called FOIDL, is based on FOIL (Quinlan, 1990) but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as learning the past tense of English verbs, a task widely studied in the context of the symbolic/connectionist debate. FOIDL is able to learn concise, accurate programs for this problem from significantly fewer examples than previous methods (both connectionist and symbolic). Comment: See http://www.jair.org/ for any accompanying file
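    FOIDL itself induces Prolog clauses, but the shape of its target concept class can be sketched in plain Python: an ordered list of rules tried top to bottom, where the first matching rule fires, mirroring the cut at the end of each clause. The rules below are hand-picked for illustration and are not FOIDL output.

    ```python
    # A hand-written sketch of the kind of ordered, first-match-wins rule
    # list FOIDL induces for the English past tense. Specific exceptions
    # come first, the general default rule last.

    RULES = [
        (lambda v: v == "go",       lambda v: "went"),          # exception
        (lambda v: v == "eat",      lambda v: "ate"),           # exception
        (lambda v: v.endswith("e"), lambda v: v + "d"),         # bake -> baked
        (lambda v: v.endswith("y") and v[-2] not in "aeiou",
                                    lambda v: v[:-1] + "ied"),  # try -> tried
        (lambda v: True,            lambda v: v + "ed"),        # default rule
    ]

    def past_tense(verb):
        for test, transform in RULES:
            if test(verb):
                return transform(verb)  # the "cut": commit to the first match
    ```

    Ordering is what makes exceptions cheap to express: an irregular verb needs one early rule rather than a guard condition on every later rule.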

    Prediction-hardness of acyclic conjunctive queries

    A conjunctive query problem is the problem of determining whether or not a tuple belongs to the answer of a conjunctive query over a database. In this paper, a tuple, a conjunctive query, and a database in relational database theory are regarded, in inductive logic programming terminology, as a ground atom, a nonrecursive function-free definite clause, and a finite set of ground atoms, respectively. An acyclic conjunctive query problem is a conjunctive query problem satisfying an acyclicity condition. For this problem, we present hardness results for predicting acyclic conjunctive queries from an instance with a j-database, i.e., a database whose predicate symbols are at most j-ary. We also deal with two kinds of instances: a simple instance, which is a set of ground atoms, and an extended instance, which is a set of pairs of a ground atom and a description. We show that, from both simple and extended instances, acyclic conjunctive queries are not polynomial-time predictable with j-databases (j ⩾ 3) under cryptographic assumptions, and that predicting acyclic conjunctive queries with 2-databases is as hard as predicting DNF formulas. Hence, acyclic conjunctive queries are a natural example for which the equivalence between subsumption-efficiency and efficient pac-learnability collapses, for both simple and extended instances
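    The membership question the abstract studies can be made concrete with a toy sketch in the paper's own terms: the database is a finite set of ground atoms, the query a function-free definite clause, and we ask whether a given ground atom is in the answer. Predicate and constant names are illustrative assumptions, and the brute-force enumeration below is for clarity, not efficiency.

    ```python
    from itertools import product

    # Database: a finite set of ground atoms, written as tuples.
    DB = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

    # Query  q(X) :- parent(X, Y), parent(Y, Z).  (body literals only;
    # uppercase strings stand for variables, lowercase for constants)
    BODY = [("parent", "X", "Y"), ("parent", "Y", "Z")]

    def in_answer(x, body=BODY, db=DB):
        consts = {c for atom in db for c in atom[1:]}
        variables = sorted({t for _, *args in body for t in args if t.isupper()})
        for vals in product(consts, repeat=len(variables)):
            theta = dict(zip(variables, vals))
            theta["X"] = x  # bind the head variable to the candidate tuple
            # the tuple is in the answer iff some substitution makes
            # every body literal a ground atom of the database
            if all((p, *[theta.get(a, a) for a in args]) in db
                   for p, *args in body):
                return True
        return False
    ```

    Here in_answer("ann") succeeds via Y = bob, Z = cal, while in_answer("bob") fails because no atom extends the chain past cal.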

    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "base-case" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs. Comment: See http://www.jair.org/ for any accompanying file
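    The two-clause program shape studied here, one non-recursive clause plus one linear recursive clause, can be illustrated with a reachability program rendered in Python over a small edge database. This toy ignores the paper's determinacy restriction and is purely a sketch of the clause structure, not of the learning algorithm.

    ```python
    # Edge relation, given as ground facts.
    EDGES = {("a", "b"), ("b", "c"), ("c", "d")}

    def path(x, z, edges=EDGES):
        # non-recursive clause:      path(X, Z) :- edge(X, Z).
        if (x, z) in edges:
            return True
        # linear recursive clause:   path(X, Z) :- edge(X, Y), path(Y, Z).
        # "linear" because the body contains exactly one recursive call
        return any(path(y, z, edges) for (w, y) in edges if w == x)
    ```

    The "base-case" oracle the paper assumes corresponds to knowing, for such a program, which instances the non-recursive clause alone should cover.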

    A framework for incremental learning of logic programs

    In this paper, a framework for incremental learning is proposed, in which the predicates already learned are used as background knowledge when learning new predicates. Programs learned in this way have a nice modular structure with conceptually separate components; this modularity gives the advantages of portability, reliability, and efficient compilation and execution.

    Starting with a simple idea of Miyano et al. [21,22] for identifying classes of programs satisfying the condition that all terms occurring in SLD-derivations starting from a query are no bigger than the terms in the initial query, we identify a reasonably large class of polynomial-time learnable logic programs. These programs can be learned from a given sequence of examples and a logic program defining the already known predicates. Our class properly contains the class of innermost simple programs of [32] and the class of hereditary programs of [21,22]. Standard programs for gcd, multiplication, quick-sort, reverse, and merge are a few examples of programs that can be handled by our results but not by the earlier results of [21,22,32]
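    The gcd program named above is a handy illustration of the term-size condition. Assuming the usual Prolog rendering gcd(X, 0, X) plus one recursive clause, its Python analogue makes the condition visible: every argument passed to the recursive call is no bigger than the terms of the initial query.

    ```python
    def gcd(x, y):
        # base clause:      gcd(X, 0, X).
        if y == 0:
            return x
        # recursive clause: gcd(X, Y, Z) :- Y > 0, R is X mod Y, gcd(Y, R, Z).
        # since x % y < y, the arguments of every recursive call stay
        # bounded by the terms of the initial query
        return gcd(y, x % y)
    ```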

    Pac-learning Recursive Logic Programs: Negative Results

    In a companion paper it was shown that the class of constant-depth determinate k-ary recursive clauses is efficiently learnable. In this paper we present negative results showing that any natural generalization of this class is hard to learn in Valiant's model of pac-learnability. In particular, we show that the following program classes are cryptographically hard to learn: programs with an unbounded number of constant-depth linear recursive clauses; programs with one constant-depth determinate clause containing an unbounded number of recursive calls; and programs with one linear recursive clause of constant locality. These results immediately imply the non-learnability of any more general class of programs. We also show that learning a constant-depth determinate program with either two linear recursive clauses or one linear recursive clause and one non-recursive clause is as hard as learning boolean DNF. Together with positive results from the companion paper, these negative results establish a boundary of efficient learnability for recursive function-free clauses. Comment: See http://www.jair.org/ for any accompanying file

    Maschinelles Lernen

    This report gives an overview of machine learning. The report concentrates on methods rather than on the large number of systems. The logic-based approaches are described in some detail. The main paradigms are indicated and used for presenting practical techniques in a unified way. The paper is written in German

    Datengesteuertes Lernen von syntaktischen Einschränkungen des Hypothesenraumes für modellbasiertes Lernen

    Learning methods for predicate-logic formalisms are well suited as tools supporting the construction and maintenance of complex domain theories, since they can both incorporate background knowledge into the learning process and handle relational dependencies between the objects of the theory. However, the expressiveness gained over classical, propositional-logic-based methods also makes the learning task more complex. The inductive learning procedure RDT of the MOBAL workbench uses model knowledge in the form of rule models to restrict the search space. These syntactic constraints on the learning target allow the user to control the learning task precisely, but if the formula schemata corresponding to the learning target are missing, the target cannot be reached. This thesis therefore presents a heuristic approach to the automatic acquisition of rule models, based on the computation of most specific generalizations. To take background knowledge into account, the parts of this knowledge relevant to the learning target are combined with the examples. The computation of most specific generalizations of rule models serves to generalize the rule models step by step. A new extension of theta-subsumption to rule models and a notion of redundancy for such formula schemata are further contributions of this work. The paper is written in German
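    The most specific generalization (least general generalization) operation that this approach builds on can be sketched for plain first-order terms; the thesis lifts it to rule models. The nested-tuple term representation and the variable naming scheme below are illustrative assumptions.

    ```python
    def lgg(a, b, table=None):
        # Plotkin-style least general generalization of two first-order
        # terms, represented as nested tuples (functor, arg, ...) or as
        # constant strings.
        if table is None:
            table = {}
        if a == b:
            return a
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and a[0] == b[0] and len(a) == len(b)):
            return (a[0],) + tuple(lgg(x, y, table) for x, y in zip(a[1:], b[1:]))
        # differing subterms map to a shared variable; the table ensures
        # the same pair of subterms always gets the same variable
        return table.setdefault((a, b), f"V{len(table)}")
    ```

    For example, generalizing parent(ann, bob) against parent(tom, bob) keeps the shared constant bob and replaces the differing first arguments by one variable.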