
    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.
    Comment: See http://www.jair.org/ for any accompanying file
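    As a rough illustration of the kind of clause these results cover (a hand-written sketch, not an example from the paper), the following function-free program has one non-recursive "basecase" clause and one linear recursive clause; if edge/2 is assumed functional in its first argument, the only new body variable Z is uniquely determined, so the recursive clause is determinate with constant depth:

        % Hypothetical two-clause program in Prolog syntax.
        reaches(X, Y) :- edge(X, Y).                 % non-recursive ("basecase") clause
        reaches(X, Y) :- edge(X, Z), reaches(Z, Y).  % single linear recursive clause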

    Learning Weak Constraints in Answer Set Programming

    This paper contributes to the area of inductive logic programming by presenting a new learning framework that allows the learning of weak constraints in Answer Set Programming (ASP). The framework, called Learning from Ordered Answer Sets, generalises our previous work on learning ASP programs without weak constraints by considering a new notion of examples as ordered pairs of partial answer sets that exemplify which answer sets of a learned hypothesis (together with a given background knowledge) are preferred to others. In this new learning task, inductive solutions are searched for within a hypothesis space of normal rules, choice rules, and hard and weak constraints. We propose a new algorithm, ILASP2, which is sound and complete with respect to our new learning framework. We investigate its applicability to learning preferences in an interview scheduling problem and also demonstrate that, when restricted to the task of learning ASP programs without weak constraints, ILASP2 can be much more efficient than our previously proposed system.
    Comment: To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 201
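    To make the hypothesis space concrete, a minimal hand-written ASP fragment with a choice rule, a hard constraint, and a weak constraint might look as follows (illustrative only, in standard ASP-Core-2 syntax; the scheduling predicates are made up rather than taken from the paper):

        % Choice rule: each interview is assigned exactly one slot.
        1 { assign(I, S) : slot(S) } 1 :- interview(I).
        % Hard constraint: no two interviews share a slot.
        :- assign(I1, S), assign(I2, S), I1 != I2.
        % Weak constraint: prefer answer sets that avoid disliked slots
        % (each violation costs 1 at priority level 1).
        :~ assign(I, S), dislikes(I, S). [1@1, I, S]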

    On the Implementation of the Probabilistic Logic Programming Language ProbLog

    The past few years have seen a surge of interest in the field of probabilistic logic learning and statistical relational learning. In this endeavor, many probabilistic logics have been developed. ProbLog is a recent probabilistic extension of Prolog motivated by the mining of large biological networks. In ProbLog, facts can be labeled with probabilities. These facts are treated as mutually independent random variables that indicate whether they belong to a randomly sampled program. Different kinds of queries can be posed to ProbLog programs. We introduce algorithms that allow the efficient execution of these queries, discuss their implementation on top of the YAP-Prolog system, and evaluate their performance in the context of large networks of biological entities.
    Comment: 28 pages; To appear in Theory and Practice of Logic Programming (TPLP)
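    A minimal ProbLog program in the style described above might look as follows (an illustrative sketch, not an excerpt from the paper); the probability of the query is the probability that a randomly sampled program contains both probabilistic edge facts:

        % Probabilistic facts: mutually independent random variables.
        0.8::edge(a, b).
        0.6::edge(b, c).
        % Ordinary background clauses.
        path(X, Y) :- edge(X, Y).
        path(X, Y) :- edge(X, Z), path(Z, Y).
        % Success probability of the query: 0.8 * 0.6 = 0.48.
        query(path(a, c)).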

    A framework for incremental learning of logic programs

    In this paper, a framework for incremental learning is proposed. In this framework, the predicates already learned are used as background knowledge in learning new predicates. The programs learned in this way have a nice modular structure with conceptually separate components. This modularity gives the advantages of portability, reliability, and efficient compilation and execution. Starting with a simple idea of Miyano et al. [21,22] for identifying classes of programs which satisfy the condition that all the terms occurring in SLD-derivations starting with a query are no bigger than the terms in the initial query, we identify a reasonably big class of polynomial-time learnable logic programs. These programs can be learned from a given sequence of examples and a logic program defining the already known predicates. Our class properly contains the class of innermost simple programs of [32] and the class of hereditary programs of [21,22]. Standard programs for gcd, multiplication, quick-sort, reverse and merge are a few examples of programs that can be handled by our results but not by the earlier results of [21,22,32].
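    As a small illustration of the incremental setting (a sketch in Prolog syntax, not taken from the paper), a predicate learned in an earlier stage, such as append/3, can serve as background knowledge when reverse/2 is learned later:

        % Stage 1: append/3 is learned from examples.
        append([], L, L).
        append([H|T], L, [H|R]) :- append(T, L, R).
        % Stage 2: reverse/2 is learned using append/3 as background knowledge.
        reverse([], []).
        reverse([H|T], R) :- reverse(T, RT), append(RT, [H], R).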

    Differentiable Logic Machines

    The integration of reasoning, learning, and decision-making is key to building more general AI systems. As a step in this direction, we propose a novel neural-logic architecture that can solve both inductive logic programming (ILP) and deep reinforcement learning (RL) problems. Our architecture defines a restricted but expressive continuous space of first-order logic programs by assigning weights to predicates instead of rules. It is therefore fully differentiable and can be trained efficiently with gradient descent. In addition, for the deep RL setting with actor-critic algorithms, we propose a novel, efficient critic architecture. Compared to state-of-the-art methods on both ILP and RL problems, our approach achieves excellent performance while providing a fully interpretable solution and scaling much better, especially during the testing phase.

    Answer Set Programming with External Sources

    Answer Set Programming (ASP) is a well-known problem-solving approach based on nonmonotonic logic programs and efficient solvers. To enable access to external information, HEX-programs extend ASP programs with external atoms, which allow for bidirectional communication between the logic program and external sources of computation (e.g., description logic reasoners and Web resources). Current solvers evaluate HEX-programs by a translation to ASP itself, in which the values of external atoms are guessed and verified after the ordinary answer set computation. This elegant approach does not scale with the number of external accesses in general, in particular in the presence of nondeterminism (which is instrumental for ASP). Hence, there is a need for genuine algorithms that handle external atoms as first-class citizens; this is the main focus of this PhD project. In the first phase of the project, state-of-the-art conflict-driven algorithms were integrated into the prototype system dlvhex and extended to external sources. In particular, the evaluation of external sources may trigger a learning procedure, such that the reasoner gets additional information about the internals of external sources. Moreover, problems on the second level of the polynomial hierarchy were addressed by integrating a minimality check based on unfounded sets. First experimental results already show clear improvements.
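    For illustration, a HEX-program accesses an external source through an external atom of the form &name[inputs](outputs); the fragment below is a hypothetical example, not taken from the project:

        % Reachability over a graph held in an external source.
        % &edge is a hypothetical external atom that, given a node X,
        % returns its successors Y from outside the logic program.
        reached(X) :- start(X).
        reached(Y) :- reached(X), &edge[X](Y).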