
    Inductive Logic Programming in Databases: from Datalog to DL+log

    In this paper we address an issue that has been brought to the attention of the database community with the advent of the Semantic Web, i.e. the issue of how ontologies (and the semantics they convey) can help solve typical database problems through a better understanding of KR aspects related to databases. In particular, we investigate this issue from the ILP perspective by considering two database problems, (i) the definition of views and (ii) the definition of constraints, for a database whose schema is also represented by means of an ontology. Both can be reformulated as ILP problems and can benefit from the expressive and deductive power of the KR framework DL+log. We illustrate the application scenarios by means of examples. Keywords: Inductive Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid Knowledge Representation and Reasoning Systems. Note: To appear in Theory and Practice of Logic Programming (TPLP). Comment: 30 pages, 3 figures, 2 tables
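To make the notion of a "view as an ILP target" concrete, here is a minimal sketch (not from the paper; the relations and the rule are hypothetical) of a Datalog-style view: an intensional relation `grandparent` defined by a rule over an extensional relation `parent`. An ILP system would induce such a rule from examples rather than having it hand-written.

```python
# Hypothetical extensional database: the "parent" relation.
parent = {("ann", "bob"), ("bob", "cara"), ("bob", "dan")}

def grandparent():
    # View definition, Datalog-style:
    #   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    return {(x, z) for (x, y1) in parent for (y2, z) in parent if y1 == y2}

print(sorted(grandparent()))  # [('ann', 'cara'), ('ann', 'dan')]
```

The view is never stored; its tuples are derived on demand by joining the base relation with itself, which is exactly the kind of definition the ILP task would have to recover.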

    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs. Comment: See http://www.jair.org/ for any accompanying file
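As a hypothetical illustration (not taken from the paper) of the kind of two-clause target studied here, consider the classic program with one non-recursive base clause and one linear recursive clause, evaluated bottom-up to a least fixpoint; a learner with a "basecase" oracle would be told which facts the non-recursive clause covers.

```python
# Target program, in logic-programming notation:
#   ancestor(X, Y) :- parent(X, Y).                  (base, non-recursive clause)
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).  (recursive clause)
parent = {("a", "b"), ("b", "c"), ("c", "d")}

def ancestor():
    anc = set(parent)  # facts covered by the base clause
    while True:
        # one bottom-up application of the recursive clause
        new = {(x, z) for (x, y) in parent for (y2, z) in anc if y == y2}
        if new <= anc:          # least fixpoint reached
            return anc
        anc |= new

print(sorted(ancestor()))
```

Each iteration joins `parent` against the facts derived so far, so the loop terminates once no new tuple is produced.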

    Probabilistic Constraint Logic Programming

    This paper addresses two central problems for probabilistic processing models: parameter estimation from incomplete data and efficient retrieval of most probable analyses. These questions have been answered satisfactorily only for probabilistic regular and context-free models. We address these problems for a more expressive probabilistic constraint logic programming model. We present a log-linear probability model for probabilistic constraint logic programming. On top of this model we define an algorithm to estimate the parameters and to select the properties of log-linear models from incomplete data. This algorithm is an extension of the improved iterative scaling algorithm of Della-Pietra, Della-Pietra, and Lafferty (1995). Our algorithm applies to log-linear models in general and is accompanied by suitable approximation methods for application to large data spaces. Furthermore, we present an approach for searching for most probable analyses of the probabilistic constraint logic programming model. This method can be applied to the ambiguity resolution problem in natural language processing applications. Comment: 35 pages, uses sfbart.cl
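A toy sketch of the underlying log-linear model p(x) ∝ exp(Σ_i λ_i f_i(x)) over a small finite space, with parameters fitted so that model feature expectations match the empirical ones. Everything here (the space, features, data, and the plain gradient-ascent update) is a hypothetical stand-in for the improved-iterative-scaling extension the abstract describes.

```python
import math

# Hypothetical finite space of analyses and two binary feature functions.
space = ["a", "b", "c"]
features = [lambda x: 1.0 if x == "a" else 0.0,
            lambda x: 1.0 if x in ("a", "b") else 0.0]
data = ["a", "a", "b", "c"]  # hypothetical observed analyses

lam = [0.0, 0.0]  # log-linear weights

def probs():
    # p(x) proportional to exp(sum_i lam_i * f_i(x)), normalized over the space
    w = [math.exp(sum(l * f(x) for l, f in zip(lam, features))) for x in space]
    z = sum(w)
    return [wi / z for wi in w]

# Empirical feature expectations from the data.
emp = [sum(f(x) for x in data) / len(data) for f in features]

# Gradient ascent on the log-likelihood: the gradient w.r.t. lam_i is
# (empirical expectation of f_i) - (model expectation of f_i).
for _ in range(2000):
    p = probs()
    model = [sum(pi * f(x) for pi, x in zip(p, space)) for f in features]
    lam = [l + 0.5 * (e - m) for l, e, m in zip(lam, emp, model)]

print([round(pi, 3) for pi in probs()])
```

At convergence the fitted distribution reproduces the empirical feature counts, which is the same fixed point that iterative scaling targets.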

    Information Acquisition and Refunds for Returns

    A product exhibits personal fit uncertainty when its consumers have idiosyncratic and uncertain values for it. Often a consumer can learn her long-run value quickly by obtaining the good for a trial period. Money back guarantees of satisfaction are commonly used to lower the cost to consumers of learning their values this way. Increasingly, however, consumers can instead learn about their values before they purchase by, e.g., reading product reviews or consulting experts. We study the effect on a firm’s optimal price and refund of this competing source of information. An efficient outcome would be achieved by setting the refund for a return equal to its salvage value. But a monopoly will, for some parameters, induce consumers to stay uninformed by promising a refund that is greater than the salvage value. This generates an inefficiently large number of returns, which the firm finds worthwhile in order to eliminate the information rents that consumers would obtain by becoming informed. This finding is consistent with the observation that for many products, money back guarantees are generous, as they commonly refund the entire, or almost the entire, purchase price of a product. Keywords: information acquisition, refunds, money back guarantees, personal fit uncertainty

    Approaching the Problem of Time with a Combined Semiclassical-Records-Histories Scheme

    I approach the Problem of Time and other foundations of Quantum Cosmology using a combined histories, timeless and semiclassical approach, along the lines pursued by Halliwell. It involves the timeless probabilities for dynamical trajectories entering regions of configuration space, which are computed within the semiclassical regime. Moreover, the objects that Halliwell uses in this approach commute with the Hamiltonian constraint, H. This approach has not hitherto been considered for models that also possess nontrivial linear constraints, Lin. This paper carries this out for some concrete relational particle models (RPM's). If there is also commutation with Lin - the Kuchar observables condition - the constructed objects are Dirac observables. Moreover, this paper shows that the problem of Kuchar observables is explicitly resolved for 1- and 2-d RPM's. Then, as a first route to Halliwell's approach for nontrivial linear constraints that also constructs Dirac observables, I consider theories for which Kuchar observables are formally known, giving the relational triangle as an example. As a second route, I apply an indirect method that generalizes both group-averaging and Barbour's best matching. For conceptual clarity, my study involves the simpler case of Halliwell's 2003 sharp-edged window function; I leave the softened case of Halliwell 2009, which is improved in other respects, for a subsequent Paper II. Finally, I provide comments on Halliwell's approach and how well it fares as regards the various facets of the Problem of Time and as an implementation of QM propositions. Comment: An improved version of the text, with various further references. 25 pages, 4 figures

    E-Generalization Using Grammars

    We extend the notion of anti-unification to cover equational theories and present a method based on regular tree grammars to compute a finite representation of E-generalization sets. We present a framework to combine Inductive Logic Programming and E-generalization that includes an extension of Plotkin's lgg theorem to the equational case. We demonstrate the potential power of E-generalization by three example applications: computation of suggestions for auxiliary lemmas in equational inductive proofs, computation of construction laws for given term sequences, and learning of screen editor command sequences. Comment: 49 pages, 16 figures; the author address given in the header is meanwhile outdated. Full version of an article in the "Artificial Intelligence Journal", which appeared as a technical report in 2003. An open-source C implementation and some examples are found at the Ancillary file
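For readers unfamiliar with the starting point, here is a minimal sketch (mine, not the paper's) of plain syntactic anti-unification, i.e. Plotkin's least general generalization, on terms represented as nested tuples such as `("f", ("s", "0"), "0")` for f(s(0), 0). The paper's contribution replaces this purely syntactic construction with regular tree grammars to handle generalization modulo an equational theory.

```python
def lgg(t1, t2, table=None):
    # Least general generalization of two first-order terms.
    if table is None:
        table = {}           # maps a pair of differing subterms to one shared variable
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same function symbol and arity: generalize argument-wise.
        return (t1[0],) + tuple(lgg(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    # Differing subterms: reuse the same fresh variable for the same pair,
    # which is what makes the result *least* general.
    key = (t1, t2)
    if key not in table:
        table[key] = f"X{len(table)}"
    return table[key]

# lgg of f(s(0), 0) and f(s(s(0)), s(0)) is f(s(X0), X0):
print(lgg(("f", ("s", "0"), "0"), ("f", ("s", ("s", "0")), ("s", "0"))))
```

Note how the pair (0, s(0)) occurs twice and is mapped to the same variable `X0` both times; dropping that sharing would still give a generalization, just not the least general one.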

    Acquiring Word-Meaning Mappings for Natural Language Interfaces

    This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of phrases paired with meaning representations. WOLFIE is part of an integrated system that learns to transform sentences into representations such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The usefulness of the lexicons learned by WOLFIE is compared to that of lexicons acquired by a similar system, with results favorable to WOLFIE. A second set of experiments demonstrates WOLFIE's ability to scale to larger and more difficult, albeit artificially generated, corpora. In natural language acquisition, it is difficult to gather the annotated data needed for supervised learning; however, unannotated data is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and therefore are potentially very useful in natural language applications. However, most results to date for active learning have only considered standard classification tasks. To reduce annotation effort while maintaining accuracy, we apply active learning to semantic lexicons. We show that active learning can significantly reduce the number of annotated examples required to achieve a given level of performance.
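A generic sketch of the pool-based active learning loop the abstract refers to, using uncertainty sampling as the selection criterion. The names, pool, and confidence scores below are hypothetical, and WOLFIE's actual selection heuristics for lexicon learning differ in detail; this only illustrates the idea of querying the least-confident example.

```python
def select_query(pool, confidence):
    # Uncertainty sampling: pick the unannotated example the current
    # model is least confident about, and ask an annotator for its label.
    return min(pool, key=confidence)

# Hypothetical pool of unannotated sentences with model confidence scores.
pool = ["sent-1", "sent-2", "sent-3"]
conf = {"sent-1": 0.92, "sent-2": 0.41, "sent-3": 0.77}

query = select_query(pool, lambda s: conf[s])
print(query)  # the least-confident sentence is chosen for annotation
```

In a full loop one would annotate the selected example, retrain, recompute confidences over the remaining pool, and repeat until the annotation budget is spent.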