
    Schema Independent Relational Learning

    Learning novel concepts and relations from relational databases is an important problem with many applications in database systems and machine learning. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. However, the same data set may be represented under different schemas for various reasons, such as efficiency, data quality, and usability. Unfortunately, the output of current relational learning algorithms tends to vary quite substantially over the choice of schema, both in terms of learning accuracy and efficiency. This variation complicates their off-the-shelf application. In this paper, we introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de)composition schema transformations. We study both sample-based learning algorithms, which learn from sets of labeled examples, and query-based algorithms, which learn by asking queries to an oracle. We prove that current relational learning algorithms are generally not schema independent. For query-based learning algorithms, we show that (de)composition transformations influence their query complexity. We propose Castor, a sample-based relational learning algorithm that achieves schema independence by leveraging data dependencies. We support the theoretical results with an empirical study that demonstrates the schema dependence/independence of several algorithms on existing benchmark and real-world datasets under (de)compositions.
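
    A minimal sketch in Python of the kind of transformation at issue, with hypothetical relation names: the same facts stored in one composed relation, or vertically decomposed into two relations that join back losslessly. A learner shown student/3 sees the same data as one shown in_dept/2 and advised_by/2, yet current algorithms may induce different definitions over the two schemas.

        # Sketch: one data set under a composed schema and under its
        # vertical decomposition. Relation names are hypothetical.

        # Composed schema: a single relation student(name, dept, advisor).
        student = {
            ("alice", "cs", "rivest"),
            ("bob",   "ee", "sussman"),
        }

        # Decomposed schema: split into in_dept(name, dept) and
        # advised_by(name, advisor).
        in_dept    = {(n, d) for (n, d, a) in student}
        advised_by = {(n, a) for (n, d, a) in student}

        def recompose(in_dept, advised_by):
            """Natural join on name: recovers the composed relation
            losslessly because name is a key of both projections."""
            return {(n, d, a)
                    for (n, d) in in_dept
                    for (m, a) in advised_by
                    if n == m}

        assert recompose(in_dept, advised_by) == student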

    Constraints on predicate invention

    This chapter describes an inductive learning method that derives logic programs and invents predicates when needed. The basic idea is to form the least common anti-instance (LCA) of selected seed examples. If the LCA is too general, it forms the starting point of a general-to-specific search which is guided by various constraints on argument dependencies and critical terms. A distinguishing feature of the method is its ability to introduce new predicates. Predicate invention involves three steps. First, the need for a new predicate is discovered and the arguments of the new predicate are determined using the same constraints that guide the search. In the second step, instances of the new predicate are abductively inferred. These instances form the input for the last step, where the definition of the new predicate is induced by recursively applying the method again. We also outline how such a system could be more tightly integrated with an abductive learning system.
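
    The LCA step corresponds to Plotkin-style anti-unification: where the seed examples disagree, a variable is introduced, and the same pair of disagreeing subterms always receives the same variable. A minimal sketch in Python, with terms encoded as nested tuples (the encoding is an assumption, not the chapter's representation):

        # Sketch: anti-unification of two first-order terms, computing their
        # least general generalization (the least common anti-instance).
        # Terms are tuples ("functor", arg1, ...); variables are "X0", "X1", ...

        def lgg(s, t, table=None, counter=None):
            table = {} if table is None else table   # (s, t) -> variable
            counter = counter or [0]
            if (isinstance(s, tuple) and isinstance(t, tuple)
                    and s[0] == t[0] and len(s) == len(t)):
                # Same functor and arity: generalize argument-wise.
                return (s[0],) + tuple(lgg(a, b, table, counter)
                                       for a, b in zip(s[1:], t[1:]))
            if s == t:
                return s                              # identical constants
            if (s, t) not in table:                   # disagreement: same pair
                table[(s, t)] = f"X{counter[0]}"      # always gets same variable
                counter[0] += 1
            return table[(s, t)]

        # lgg of f(a, g(a)) and f(b, g(b)) is f(X0, g(X0)).
        print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))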

    Constrained Query Answering

    Traditional answering methods evaluate queries only against positive and definite knowledge expressed by means of facts and deduction rules; they do not make use of negative, disjunctive, or existential information. Negative or indefinite knowledge is, however, often available in knowledge base systems, either as design requirements or as observed properties. Such knowledge can serve to rule out unproductive subexpressions during query answering. In this article, we propose an approach for constraining any conventional query answering procedure with general, possibly negative or indefinite formulas, so as to discard impossible cases and avoid redundant evaluations. This approach imposes no additional conditions on the positive and definite knowledge, nor does it assume any particular semantics for negation: it adopts that of the conventional query answering procedure it constrains. This is achieved by relying on meta-interpretation to specify the constraining process. The soundness, completeness, and termination of the underlying query answering procedure are not compromised. Constrained query answering can be applied to answer queries more efficiently as well as to generate more informative, intensional answers.
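
    A toy sketch in Python of the constraining idea, with hypothetical relation names. The integrity constraint (blacklisted suppliers never supply anything) cannot change the answer set; it only lets the evaluator discard a doomed subgoal before evaluating it:

        # Sketch: constraining evaluation with negative knowledge.
        # Relation names are hypothetical illustrations.

        facts = {("supplier", "s1"), ("supplier", "s2"),
                 ("supplies", "s1", "bolts"),
                 ("blacklisted", "s2")}          # negative knowledge

        def expensive_check(s, part):
            """Stand-in for a costly subgoal evaluation."""
            return ("supplies", s, part) in facts

        def suppliers_of(part, constrained=True):
            for (rel, *args) in facts:
                if rel != "supplier":
                    continue
                s = args[0]
                # Constraint: blacklisted(S) implies not supplies(S, _),
                # so the expensive subgoal is discarded, not evaluated.
                if constrained and ("blacklisted", s) in facts:
                    continue
                if expensive_check(s, part):
                    yield s

        # Same answers either way; the constrained run skips a doomed branch.
        assert set(suppliers_of("bolts")) == set(suppliers_of("bolts", False))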

    Limits of Preprocessing

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a complexity-theoretic assumption, none of the considered problems can be reduced by polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, such as induced width or backdoor size. Our results provide a firm theoretical boundary for the performance of polynomial-time preprocessing algorithms for the considered problems. Comment: this is a slightly longer version of a paper that appeared in the proceedings of AAAI 2011.
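
    For context on the target notion, a polynomial kernel reduces an instance in polynomial time to an equivalent one whose size is bounded by a polynomial in the parameter. A classic positive example, not among the problems covered by this paper's negative results, is Buss's kernel for Vertex Cover, sketched here in Python:

        # Sketch: Buss's kernelization for Vertex Cover, a textbook
        # polynomial kernel (contrast: the paper shows such kernels are
        # unlikely for the AI problems it considers).

        def buss_kernel(edge_list, k):
            """Reduce (G, k) to an equivalent instance with at most k^2
            edges, or decide it outright (False = no size-k cover)."""
            edges = {frozenset(e) for e in edge_list}
            while True:
                deg = {}
                for e in edges:
                    for v in e:
                        deg[v] = deg.get(v, 0) + 1
                high = [v for v, d in deg.items() if d > k]
                if not high:
                    break
                # A vertex of degree > k must be in every size-<=k cover:
                # otherwise all of its > k neighbors would have to be.
                v = high[0]
                edges = {e for e in edges if v not in e}
                k -= 1
                if k < 0:
                    return False
            # Max degree is now <= k: a size-k cover touches <= k^2 edges.
            if len(edges) > k * k:
                return False
            return edges, k              # the kernel: at most k^2 edges

        print(buss_kernel([(1, 2), (1, 3), (1, 4), (2, 3)], 2))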

    A Database Interface for Complex Objects

    We describe a formal design for a logical query language that uses psi-terms as data structures to interact effectively and efficiently with a relational database. The structure of psi-terms provides an adequate representation for so-called complex objects. They generalize the conventional terms used in logic programming: they are typed attributed structures, ordered by a subtype ordering. Unification of psi-terms is an effective means for integrating multiple inheritance and partial information into a deduction process. We define a compact database representation for psi-terms that also represents part of the subtyping relation in the database. We describe a retrieval algorithm based on an abstract interpretation of the psi-term unification process and prove its formal correctness. This algorithm is efficient in that it incrementally retrieves only the additional facts actually needed by a query, and never retrieves the same fact twice.
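
    A toy sketch in Python of the unification step, assuming a hand-coded sort lattice and omitting coreference tags: sorts merge via their greatest lower bound, shared features unify recursively, and unshared features are inherited, which is how multiple inheritance and partial information combine:

        # Sketch: unification of simplified psi-terms (sort, {feature: term}).
        # The sort lattice is an illustrative assumption, not the paper's
        # database encoding; coreference tags are omitted.

        # Greatest lower bound of two sorts; missing pairs are incompatible.
        GLB = {("person", "person"): "person",
               ("person", "student"): "student",
               ("student", "person"): "student",
               ("student", "student"): "student",
               ("int", "int"): "int",
               ("string", "string"): "string"}

        def unify(p, q):
            """Return the unified psi-term, or None on failure."""
            (s1, f1), (s2, f2) = p, q
            sort = GLB.get((s1, s2))
            if sort is None:
                return None                  # incompatible sorts
            feats = dict(f1)
            for name, sub in f2.items():
                if name in feats:            # shared feature: unify recursively
                    merged = unify(feats[name], sub)
                    if merged is None:
                        return None
                    feats[name] = merged
                else:                        # unshared feature: inherit it
                    feats[name] = sub
            return (sort, feats)

        # person with an age merged with student with a name:
        print(unify(("person", {"age": ("int", {})}),
                    ("student", {"name": ("string", {})})))
        # -> ('student', {'age': ('int', {}), 'name': ('string', {})})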

    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.
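
    For orientation, the generic protocol behind learning from equivalence queries, as a hedged skeleton in Python; the oracle and the refinement rule below are toy placeholders, not the paper's algorithm for determinate recursive clauses:

        # Sketch: the generic equivalence-query protocol these learners
        # operate in. Oracle and refinement are toy placeholders.

        def learn(equivalence_oracle, hypothesis, refine):
            """Propose hypotheses until the oracle accepts; each rejection
            comes with a counterexample that drives the next refinement."""
            while True:
                counterexample = equivalence_oracle(hypothesis)
                if counterexample is None:   # hypothesis equals the target
                    return hypothesis
                hypothesis = refine(hypothesis, counterexample)

        # Toy target concept {0, 1, 2}; the learner grows its hypothesis by
        # one positive counterexample per query, so |target| + 1 queries.
        target = {0, 1, 2}
        oracle = lambda h: next(iter(target - h), None)
        print(learn(oracle, set(), lambda h, c: h | {c}))   # -> {0, 1, 2}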

    Inductive Logic Programming in Databases: from Datalog to DL+log

    In this paper we address an issue that has been brought to the attention of the database community with the advent of the Semantic Web: how ontologies (and the semantics they convey) can help solve typical database problems, through a better understanding of the KR aspects of databases. In particular, we investigate this issue from the ILP perspective by considering two database problems, (i) the definition of views and (ii) the definition of constraints, for a database whose schema is also represented by means of an ontology. Both can be reformulated as ILP problems and can benefit from the expressive and deductive power of the KR framework DL+log. We illustrate the application scenarios by means of examples. Keywords: Inductive Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid Knowledge Representation and Reasoning Systems. Note: to appear in Theory and Practice of Logic Programming (TPLP).
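
    A concrete instance of problem (i), with hypothetical relation names: given positive examples of a view grandparent/2 over a database relation parent/2, the ILP task is to induce a clause such as grandparent(X, Z) :- parent(X, Y), parent(Y, Z). A minimal Python coverage check for that candidate clause:

        # Sketch: the "view definition" problem as ILP. Relation names
        # are illustrative; the rule being checked is the Datalog clause
        #     grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

        parent = {("ann", "bob"), ("bob", "carl"), ("ann", "dana")}
        positives = {("ann", "carl")}        # labeled examples of the view

        def covers(x, z):
            """Does the candidate clause derive grandparent(x, z)?"""
            return any((x, y) in parent and (y, z) in parent
                       for y in {b for (_, b) in parent})

        assert all(covers(x, z) for (x, z) in positives)
        print(covers("ann", "carl"), covers("ann", "bob"))   # True False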