    Learnability of solutions to conjunctive queries

    The problem of learning the solution space of an unknown formula has been studied in multiple embodiments in computational learning theory. In this article, we study a family of such learning problems; this family contains, for each relational structure, the problem of learning the solution space of an unknown conjunctive query evaluated on the structure. A progression of results aimed to classify the learnability of each of the problems in this family, and thus far its culmination was a positive learnability result generalizing all previous ones. This article completes the classification program towards which this progression of results strived, by presenting a negative learnability result that complements the mentioned positive learnability result. In addition, a further negative learnability result is exhibited, which indicates a dichotomy within the problems to which the first negative result applies. In order to obtain our negative results, we make use of universal-algebraic concepts.
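
    As a concrete illustration of the objects being learned (our own minimal sketch, not a construction from the article), the following Python snippet represents a relational structure as a dict of relations, a conjunctive query as a list of atoms over variables, and enumerates the query's solution space by brute force:

        from itertools import product

        def solutions(structure, atoms, variables):
            """Brute-force the solution space of a conjunctive query."""
            # The domain is the set of elements occurring in the structure.
            domain = {v for tuples in structure.values() for t in tuples for v in t}
            for values in product(domain, repeat=len(variables)):
                assignment = dict(zip(variables, values))
                # An assignment is a solution iff every atom holds in the structure.
                if all(tuple(assignment[v] for v in vs) in structure[rel]
                       for rel, vs in atoms):
                    yield assignment

        # Example: a directed graph E; the query asks for paths x -> y -> z.
        G = {"E": {(1, 2), (2, 3), (2, 4)}}
        query = [("E", ("x", "y")), ("E", ("y", "z"))]
        for sol in solutions(G, query, ("x", "y", "z")):
            print(sol)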

    On the non-efficient PAC learnability of conjunctive queries

    This note serves three purposes: (i) we provide a self-contained exposition of the fact that conjunctive queries are not efficiently learnable in the Probably-Approximately-Correct (PAC) model, paying clear attention to the complicating fact that this concept class lacks the polynomial-size fitting property, a property that is tacitly assumed in much of the computational learning theory literature; (ii) we establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs), including acyclic CQs for a wide range of notions of acyclicity; (iii) we show that CQs (and UCQs) are efficiently PAC learnable with membership queries.
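
    As a toy illustration of the access model behind result (iii) (this is our sketch of what a membership query looks like, not the note's algorithm), the learner may ask an oracle whether a chosen example satisfies a hidden conjunctive query:

        # Hidden target (unknown to the learner): the Boolean CQ
        #   exists x, y : E(x, y) and E(y, x),
        # i.e. "the graph contains a directed 2-cycle".
        def membership_oracle(edges):
            return any((y, x) in edges for (x, y) in edges)

        # The learner probes chosen examples and observes the labels.
        print(membership_oracle({(1, 2), (2, 1)}))  # True  (positive)
        print(membership_oracle({(1, 2), (2, 3)}))  # False (negative)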

    Inductive Logic Programming in Databases: from Datalog to DL+log

    In this paper we address an issue that has been brought to the attention of the database community with the advent of the Semantic Web, i.e. the issue of how ontologies (and the semantics they convey) can help solve typical database problems, through a better understanding of the KR aspects related to databases. In particular, we investigate this issue from the ILP perspective by considering two database problems, (i) the definition of views and (ii) the definition of constraints, for a database whose schema is also represented by means of an ontology. Both can be reformulated as ILP problems and can benefit from the expressive and deductive power of the KR framework DL+log. We illustrate the application scenarios by means of examples. Keywords: Inductive Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid Knowledge Representation and Reasoning Systems. Note: to appear in Theory and Practice of Logic Programming (TPLP). Comment: 30 pages, 3 figures, 2 tables.
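
    For problem (i), a typical ILP target is a Datalog view definition such as aunt(X, Y) :- sister(X, Z), parent(Z, Y). The Python sketch below (relation names are illustrative, not taken from the paper) materializes such an induced view by joining the body atoms:

        # Extensional relations (the stored database).
        sister = {("mary", "ann")}
        parent = {("ann", "bob")}

        def aunt():
            """Materialize the view aunt(X, Y) :- sister(X, Z), parent(Z, Y)."""
            return {(x, y) for (x, z) in sister
                           for (z2, y) in parent if z == z2}

        print(aunt())  # {('mary', 'bob')}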

    Using Contextual Representations to Efficiently Learn Context-Free Languages

    We present a polynomial update time algorithm for the inductive inference of a large class of context-free languages using the paradigm of positive data and a membership oracle. We achieve this result by moving to a novel representation, called Contextual Binary Feature Grammars (CBFGs), which are capable of representing richly structured context-free languages as well as some context-sensitive languages. These representations explicitly model the lattice structure of the distribution of a set of substrings and can be inferred using a generalisation of distributional learning. This formalism is an attempt to bridge the gap between simple learnable classes and the sorts of highly expressive representations necessary for linguistic representation: it allows the learnability of a large class of context-free languages that includes all regular languages and those context-free languages satisfying two simple constraints. The formalism and the algorithm seem well suited to natural language, and in particular to the modeling of first language acquisition. Preliminary experimental results confirm the effectiveness of this approach.
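
    The distributional idea can be made concrete with a small sketch (an illustration of the general principle, not the paper's inference algorithm): a substring is characterized by the set of contexts (l, r) in which it occurs in the positive data, and substrings with comparable context sets can be grouped into grammatical categories:

        def contexts(substring, data):
            """All contexts (l, r) of substring across the positive sample."""
            out = set()
            for w in data:
                start = w.find(substring)
                while start != -1:
                    out.add((w[:start], w[start + len(substring):]))
                    start = w.find(substring, start + 1)
            return out

        sample = ["ab", "aabb", "aaabbb"]  # positive data from a^n b^n
        print(contexts("ab", sample))      # {('', ''), ('a', 'b'), ('aa', 'bb')}
        print(contexts("aabb", sample))    # {('', ''), ('a', 'b')}

    The inclusion between the two printed context sets is an instance of the lattice structure on which CBFG features are built.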

    Learning Possibilistic Logic Theories

    We address the problem of learning interpretable machine learning models from uncertain and missing information. We first develop a novel deep learning architecture, named RIDDLE (Rule InDuction with Deep LEarning), based on properties of possibility theory. Given the experimental results and a comparison with FURIA, an existing state-of-the-art rule induction method, RIDDLE is a promising rule induction algorithm for finding rules in data. We then formally investigate the task of learning rules with confidence degrees associated with them in the exact learning model. We formally define theoretical frameworks and show conditions that must hold to guarantee that a learning algorithm will identify the rules that hold in a domain. Finally, we develop an algorithm that learns rules with associated confidence values in the exact learning model. We also propose a technique to simulate queries in the exact learning model from data. Experiments show encouraging results for learning a set of rules that approximates the rules encoded in the data.
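
    As a hedged sketch of the last ingredient, simulating exact-learning queries from data (the details here are our illustration, not the thesis's exact construction), a membership query about a rule can be answered from recorded examples together with a confidence degree:

        def membership(rule, examples, threshold=0.9):
            """Approximate membership oracle with an associated confidence."""
            support = sum(rule(e) for e in examples) / len(examples)
            return support >= threshold, support

        # Toy data: each example maps attribute names to Booleans.
        data = ([{"bird": True, "flies": True}] * 9
                + [{"bird": True, "flies": False}])
        rule = lambda e: (not e["bird"]) or e["flies"]  # "birds fly"
        print(membership(rule, data))  # (True, 0.9): holds with confidence 0.9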