3 research outputs found

    Computing abduction by using TMS with top-down expectation

    Abstract: We present a method to compute abduction in logic programming. We translate an abductive framework into a normal logic program with integrity constraints and show the correspondence between generalized stable models and stable models for the translation of the abductive framework. Abductive explanations for an observation can then be found from the stable models of the translated program by adding a special kind of integrity constraint for the observation. We then give a bottom-up procedure to compute stable models for a normal logic program with integrity constraints. The proposed procedure avoids unnecessary construction of stable models at early stages by checking integrity constraints during the construction and by deriving facts from them. Although a bottom-up procedure in general has the disadvantage of constructing stable models unrelated to the observation when computing abductive explanations, our procedure avoids this by expecting which rule should be used to satisfy the integrity constraints and starting the bottom-up computation from that expectation. This expectation is not only a technique for narrowing rule selection but an indispensable part of our stable model construction, because it is performed for dynamically generated constraints as well as for the constraint encoding the observation.
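As a minimal illustration of the semantics this abstract relies on (not the paper's bottom-up procedure), the sketch below brute-forces stable models via the Gelfond-Lifschitz reduct, filters them through integrity constraints, and computes abductive explanations as those subsets of abducibles whose addition as facts yields a generalized stable model entailing the observation. The toy "wet grass" program and all names are invented for this example.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return map(frozenset, chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

def least_model(definite):
    """Least model of a negation-free program by fixpoint iteration."""
    m = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite:
            if head not in m and pos <= m:
                m.add(head)
                changed = True
    return frozenset(m)

def stable_models(rules, constraints, atoms):
    """Stable models of a normal program (rules are (head, pos, neg))
    that also satisfy every integrity constraint ':- pos, not neg'."""
    models = []
    for cand in subsets(atoms):
        # Gelfond-Lifschitz reduct w.r.t. the candidate interpretation.
        reduct = [(h, pos) for h, pos, neg in rules if not (neg & cand)]
        if least_model(reduct) != cand:
            continue
        if any(pos <= cand and not (neg & cand) for pos, neg in constraints):
            continue  # candidate violates an integrity constraint
        models.append(cand)
    return models

def explanations(rules, constraints, abducibles, atoms, observation):
    """Generalized stable models: add each subset of abducibles as
    facts and keep those yielding a model entailing the observation."""
    out = []
    for delta in subsets(abducibles):
        extended = rules + [(a, frozenset(), frozenset()) for a in delta]
        if any(observation in m
               for m in stable_models(extended, constraints, atoms)):
            out.append(delta)
    return out

# Toy abductive framework: why is the grass wet?
rules = [("wet", frozenset({"rained"}), frozenset()),
         ("wet", frozenset({"sprinkler"}), frozenset())]
constraints = [(frozenset({"rained", "sprinkler"}), frozenset())]  # not both
atoms = {"wet", "rained", "sprinkler"}
exps = explanations(rules, constraints, {"rained", "sprinkler"}, atoms, "wet")
print(sorted(sorted(d) for d in exps))  # → [['rained'], ['sprinkler']]
```

The integrity constraint ruling out `rained` and `sprinkler` together is what prunes the joint explanation, mirroring (in miniature) the abstract's point that constraint checking can cut the model construction space.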

    Inductive logic programming using bounded hypothesis space

    Inductive Logic Programming (ILP) systems solve an inductive learning task by deriving a hypothesis that explains the given examples. Applying ILP systems to real applications poses many challenges: the search space is large, the learning task may be noisy, and in domains such as software engineering hypotheses must satisfy domain-specific syntactic constraints. ILP systems use language biases to define the hypothesis space, and learning can be seen as a search within that space. Past systems apply search heuristics to traverse a large hypothesis space. This is unsuitable for systems implemented in Answer Set Programming (ASP), where scalability is limited because the hypothesis space must be grounded by the ASP solver before the learning task can be solved, leaving such systems unable to handle large learning tasks. This work explores how to learn using bounded hypothesis spaces and iterative refinement. Hypotheses that explain all examples are learnt by refining smaller partial hypotheses. This improves the scalability of ASP-based systems because the learning task is split into multiple smaller, manageable refinement tasks. The thesis shows how syntactic integrity constraints on the hypothesis space can strengthen the hypothesis selection criteria by removing hypotheses with undesirable structure. The notion of constraint-driven bias is introduced, in which hypotheses are required to be acceptable with respect to given meta-level integrity constraints. Building upon the ILP system ASPAL, the system RASPAL, which learns through iterative hypothesis refinement, is implemented. RASPAL's algorithm is proven, under certain assumptions, to be complete and consistent. Both systems have been applied to a case study in learning a user's behaviours from data collected from their mobile usage, demonstrating their capability for learning with noise and the difference in their efficiency. Constraint-driven bias has been implemented for both systems and applied to a task in specification revision and to learning stratified programs. Open Access.
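The ideas in this abstract (a bounded hypothesis space, top-down iterative refinement of partial hypotheses, and a meta-level syntactic constraint that rejects undesirable hypotheses) can be sketched in a propositional toy. This is not RASPAL or ASPAL, which are first-order and ASP-based; the features, examples, and the `acceptable` constraint below are all invented for illustration.

```python
# Examples: each is a dict of boolean features; POS/NEG give the label.
POS = [{"wings": True, "feathers": True, "flies": True},
       {"wings": True, "feathers": True, "flies": False}]
NEG = [{"wings": True, "feathers": False, "flies": True},
       {"wings": False, "feathers": False, "flies": False}]
# Bounded language bias: a hypothesis body is a tuple of these literals.
LITERALS = [("wings", True), ("wings", False),
            ("feathers", True), ("feathers", False),
            ("flies", True), ("flies", False)]

def covers(body, ex):
    return all(ex[f] == v for f, v in body)

def acceptable(body):
    # Meta-level integrity constraint on hypothesis syntax:
    # reject bodies that test the same feature more than once.
    feats = [f for f, _ in body]
    return len(feats) == len(set(feats))

def learn(max_len=3):
    """Top-down refinement within a bounded hypothesis space: grow
    bodies one literal at a time, pruning partial hypotheses that
    lose a positive example or violate the syntactic constraint."""
    frontier = [()]
    for _ in range(max_len):
        nxt = []
        for body in frontier:
            for lit in LITERALS:
                cand = body + (lit,)
                if not acceptable(cand):
                    continue  # constraint-driven bias prunes it
                if not all(covers(cand, e) for e in POS):
                    continue  # refinement lost a positive example
                if not any(covers(cand, e) for e in NEG):
                    return cand  # complete and consistent hypothesis
                nxt.append(cand)  # partial hypothesis: refine further
    return None

print(learn())  # → (('feathers', True),)
```

Because each refinement round only extends surviving partial hypotheses, the search never materialises the full bounded space at once, which is the scalability argument the abstract makes for splitting learning into smaller refinement tasks.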