
    Inductive logic programming using bounded hypothesis space

    Inductive Logic Programming (ILP) systems solve an inductive learning task by deriving a hypothesis that explains the given examples. Applying ILP systems to real applications poses many challenges: they require a large search space, noise is present in the learning task, and in domains such as software engineering hypotheses are required to satisfy domain-specific syntactic constraints. ILP systems use language biases to define the hypothesis space, and learning can be seen as a search within the defined hypothesis space. Past systems apply search heuristics to traverse a large hypothesis space. This is unsuitable for systems implemented using Answer Set Programming (ASP), whose scalability is constrained because the ASP solver must ground the hypothesis space before solving the learning task, leaving them unable to solve large learning tasks. This work explores how to learn using bounded hypothesis spaces and iterative refinement. Hypotheses that explain all examples are learnt by refining smaller partial hypotheses. This improves the scalability of ASP-based systems, as the learning task is split into multiple smaller, manageable refinement tasks. The thesis presents how syntactic integrity constraints on the hypothesis space can be used to strengthen hypothesis selection criteria, removing hypotheses with undesirable structure. The notion of constraint-driven bias is introduced, where hypotheses are required to be acceptable with respect to the given meta-level integrity constraints. Building upon the ILP system ASPAL, the system RASPAL, which learns through iterative hypothesis refinement, is implemented. RASPAL's algorithm is proven, under certain assumptions, to be complete and consistent. Both systems have been applied to a case study in learning users' behaviours from data collected from their mobile usage. This demonstrates their capability for learning with noise, and the difference in their efficiency. Constraint-driven bias has been implemented for both systems and applied to a task in specification revision and to learning stratified programs.
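    The iterative refinement idea in this abstract can be pictured with a small, self-contained sketch. The Python below is a toy illustration only: the representation of a rule as the set of examples it entails, and the names covers, refine, and learn, are assumptions made for readability; in RASPAL the coverage checks are delegated to an ASP solver and refinement operates over a bounded, partially specified hypothesis space rather than a fixed rule pool.

```python
# Minimal, runnable sketch of iterative hypothesis refinement over a bounded
# hypothesis space. Toy representation: a "rule" is just the frozenset of
# examples it entails. These names and structures are illustrative assumptions,
# not RASPAL's actual interface.

def covers(hypothesis, example):
    """A hypothesis (set of rules) covers an example if any rule entails it."""
    return any(example in rule for rule in hypothesis)

def refine(partial, candidate_rules, bound):
    """Bounded refinement: extend a partial hypothesis by one candidate rule,
    never exceeding the size bound, so each step stays a small search task."""
    if len(partial) >= bound:
        return
    for rule in candidate_rules:
        if rule not in partial:
            yield partial | frozenset([rule])

def learn(positives, negatives, candidate_rules, bound):
    """Refine partial hypotheses until one entails every positive example and
    no negative example, instead of searching the whole space at once."""
    frontier = [frozenset()]                      # start from the empty hypothesis
    while frontier:
        partial = frontier.pop(0)
        if all(covers(partial, e) for e in positives) and \
           not any(covers(partial, e) for e in negatives):
            return partial                        # complete and consistent hypothesis
        frontier.extend(refine(partial, candidate_rules, bound))
    return None                                   # nothing found within the bound

# Toy usage: three candidate rules, the third one wrongly covers a negative example.
rules = [frozenset({"p1", "p2"}), frozenset({"p3"}), frozenset({"p3", "n1"})]
print(learn({"p1", "p2", "p3"}, {"n1"}, rules, bound=2))
```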

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times. (Accepted for the Machine Learning journal.)
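    The generate, test, and constrain loop can likewise be sketched in a few lines. The Python below is a hedged illustration, not Popper's implementation: candidate programs are modelled as the sets of examples they entail, and the helper names entails and learn_from_failures are invented for this sketch; Popper itself generates hypotheses with answer set programming and tests them with Prolog.

```python
# Minimal, runnable sketch of learning from failures. A candidate "program" is
# modelled as the set of examples it entails, so generalisation corresponds to
# a superset and specialisation to a subset. All names here are assumptions
# made for this sketch, not Popper's API.

def entails(program, example):
    """Toy stand-in for entailment checking."""
    return example in program

def learn_from_failures(candidates, positives, negatives):
    """Generate a hypothesis, test it, and learn pruning constraints from each
    failure until a hypothesis entails all positives and no negatives."""
    pruned = set()                                          # indices ruled out by constraints
    for i, program in enumerate(candidates):                # generate stage
        if i in pruned:
            continue
        too_specific = not all(entails(program, e) for e in positives)   # test stage
        too_general = any(entails(program, e) for e in negatives)
        if not too_specific and not too_general:
            return program                                  # solution found
        # Constrain stage: prune candidates implicated by this failure.
        for j, other in enumerate(candidates):
            if too_general and program <= other:            # prune generalisations
                pruned.add(j)
            if too_specific and other <= program:           # prune specialisations
                pruned.add(j)
    return None                                             # hypothesis space exhausted

# Toy usage: the second and third candidates entail the negative example n1.
candidates = [{"p1"}, {"p1", "n1"}, {"p1", "p2", "n1"}, {"p1", "p2"}]
print(learn_from_failures(candidates, positives={"p1", "p2"}, negatives={"n1"}))
```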