
    Learning Weak Constraints in Answer Set Programming

    This paper contributes to the area of inductive logic programming by presenting a new learning framework that allows the learning of weak constraints in Answer Set Programming (ASP). The framework, called Learning from Ordered Answer Sets, generalises our previous work on learning ASP programs without weak constraints by considering a new notion of examples as ordered pairs of partial answer sets, which exemplify which answer sets of a learned hypothesis (together with a given background knowledge) are preferred to others. In this new learning task, inductive solutions are searched for within a hypothesis space of normal rules, choice rules, and hard and weak constraints. We propose a new algorithm, ILASP2, which is sound and complete with respect to our new learning framework. We investigate its applicability to learning preferences in an interview scheduling problem and also demonstrate that, when restricted to the task of learning ASP programs without weak constraints, ILASP2 can be much more efficient than our previously proposed system.
    Comment: To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 201
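
    As a concrete illustration of the kind of hypothesis discussed above, the sketch below shows a small ASP program in clingo/ASP-Core-2 syntax mixing a choice rule, a hard constraint, and a weak constraint for an interview-scheduling domain. The predicates interview/1, slot/1, afternoon/1, and assign/2 are invented here for illustration and are not taken from the paper; the weak constraint (the ":~" rule) is the kind of statement the learning framework targets.

        % Each interview is assigned exactly one slot (choice rule).
        1 { assign(I,S) : slot(S) } 1 :- interview(I).

        % Hard constraint: no two interviews may share a slot.
        :- assign(I1,S), assign(I2,S), I1 != I2.

        % Weak constraint: prefer answer sets that avoid afternoon slots,
        % paying a penalty of 1 (at priority level 1) per violation.
        :~ assign(I,S), afternoon(S). [1@1, I, S]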

    Inductive logic programming at 30

    Inductive logic programming (ILP) is a form of logic-based machine learning. The goal of ILP is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we survey recent work in the field. In this survey, we focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs that generalise from few examples, (iii) new approaches for predicate invention, and (iv) the use of different technologies, notably answer set programming and neural networks. We conclude by discussing some of the current limitations of ILP and directions for future research.
    Comment: Extension of IJCAI20 survey paper. arXiv admin note: substantial text overlap with arXiv:2002.11002, arXiv:2008.0791
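
    To make the ILP setting referred to above concrete, here is a textbook-style sketch written as a logic program, not tied to any particular system in the survey: given background facts about parent/2 and positive examples of grandparent/2, a learner is expected to induce the final rule, which generalises the examples. The predicate names and facts are illustrative assumptions.

        % Background knowledge.
        parent(ann, bob).   parent(bob, carl).
        parent(ann, beth).  parent(beth, dana).

        % Positive examples to be covered:
        %   grandparent(ann, carl).   grandparent(ann, dana).

        % A hypothesis an ILP learner could induce from the above:
        grandparent(X, Y) :- parent(X, Z), parent(Z, Y).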

    The Learning-Knowledge-Reasoning Paradigm for Natural Language Understanding and Question Answering

    Given a text, several questions can be asked. For some of these questions, the answer can be looked up directly from the text. However, for several other questions, one might need to use additional knowledge and sophisticated reasoning to find the answer. Developing AI agents that can answer these kinds of questions and can also justify their answers is the focus of this research. Towards this goal, we use the language of Answer Set Programming as the knowledge representation and reasoning language for the agent. The question that then arises is how to obtain this additional knowledge. In this work we show that, using existing Natural Language Processing parsers and a scalable Inductive Logic Programming algorithm, it is possible to learn this additional knowledge (consisting mostly of commonsense knowledge) from question-answering datasets, which can then be used for inference.
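
    The sketch below suggests, in ASP syntax, how a learned commonsense rule can support question answering of the kind described above; the predicates (bird/1, penguin/1, flies/1, abnormal/1) and the toy sentence are illustrative assumptions, not taken from the paper.

        % Fact extracted by an NLP parser from "Tweety is a bird."
        bird(tweety).

        % A learned commonsense default: birds normally fly,
        % unless they are known to be abnormal (negation as failure).
        flies(X) :- bird(X), not abnormal(X).
        abnormal(X) :- penguin(X).

        % The answer set contains flies(tweety), so the agent answers
        % "Can Tweety fly?" affirmatively and can justify it via the rule.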

    The complexity and generality of learning answer set programs

    Traditionally, most of the work in the field of Inductive Logic Programming (ILP) has addressed the problem of learning Prolog programs. On the other hand, Answer Set Programming (ASP) is increasingly being used as a powerful language for knowledge representation and reasoning, and is also gaining increasing attention in industry. Consequently, research activity in ILP has widened to the area of Answer Set Programming, with several new learning frameworks being proposed that extend ILP to the learning of answer set programs. In this paper, we investigate the theoretical properties of these existing frameworks for learning programs under the answer set semantics. Specifically, we present a detailed analysis of the computational complexity of each framework with respect to two decision problems: deciding whether a hypothesis is a solution of a learning task, and deciding whether a learning task has any solutions. We introduce a new notion of the generality of a learning framework, which enables us to define one framework as more general than another in terms of its ability to distinguish a correct ASP hypothesis from a set of incorrect ASP programs. Based on this notion, we formally prove a generality relation over the set of existing frameworks for learning programs under the answer set semantics. In particular, we show that our recently proposed framework, Context-dependent Learning from Ordered Answer Sets, is more general than brave induction, induction of stable models, and cautious induction, and that it has the same complexity as cautious induction, which has the highest complexity of these frameworks.
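
    A minimal sketch of the brave/cautious distinction mentioned above, using a two-rule program that is standard in the ASP literature rather than taken from the paper:

        % This program has two answer sets, {p} and {q}.
        p :- not q.
        q :- not p.

        % Brave induction only requires an example to hold in SOME answer
        % set of the background plus hypothesis, so the atom p could be
        % covered as a positive example here (it holds in {p}).
        % Cautious induction requires it to hold in ALL answer sets,
        % so p would not be covered (it fails in {q}).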