
    Learning algebraic structures from text

    The present work investigates the learnability of classes of substructures of some algebraic structures: submonoids and subgroups of given groups, ideals of given commutative rings, subspaces of given vector spaces. The learner sees all positive data but no negative data and converges to a program enumerating or computing the set to be learned. Besides semantical (BC) and syntactical (Ex) convergence, the more restrictive ordinal bounds on the number of mind changes are also considered. The following is shown: (a) Learnability depends heavily on the amount of semantic knowledge given at the synthesis of the learner, where this knowledge is represented by programs for the algebraic operations, codes for prominent elements of the algebraic structure (like 0 and 1 in fields) and certain parameters (like the dimension of finite-dimensional vector spaces). For several natural examples, good knowledge of the semantics makes it possible to keep ordinal mind change bounds, while restricted knowledge may allow only BC-convergence or even rule out learnability altogether. (b) The class of all ideals of a recursive ring is BC-learnable iff the ring is Noetherian. Furthermore, one has either only a BC-learner outputting enumerable indices, or one can already get an Ex-learner converging to decision procedures and respecting an ordinal bound on the number of mind changes. The ring is Artinian iff the ideals can be Ex-learned with a constant bound on the number of mind changes; this constant is the length of the ring. Ex-learnability depends not only on the ring but also on the representation of the ring. Polynomial rings over the field of rationals in n variables have exactly the ordinal mind change bound ω^n in the standard representation. Similar results can be established for unars. Noetherian unars with one function can be learned with an ordinal mind change bound aω for some a
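
    The simplest Noetherian case can be illustrated with a minimal sketch, assuming the ring is Z, whose ideals are exactly the sets dZ; the gcd-based learner below only illustrates learning from positive text in that special case and is not the construction from the paper.

```python
from math import gcd

def ideal_learner(positive_text):
    """Learn an ideal of the ring Z from positive data only (toy sketch).

    Every ideal of Z has the form dZ for a single generator d >= 0, so the
    learner conjectures the ideal generated by the elements seen so far,
    i.e. the gcd of the data.  Because Z is Noetherian, the conjectured
    ideals form an ascending chain that must stabilise, so the learner
    converges after finitely many mind changes.
    """
    d = 0  # current conjecture: the ideal dZ (d = 0 encodes the zero ideal)
    for x in positive_text:
        new_d = gcd(d, abs(x))
        if new_d != d:
            d = new_d          # a mind change: the conjectured ideal grows
        yield d                # current hypothesis after this datum

# Example: positive data drawn from the ideal 6Z.
text = [18, 30, 6, 42, 12]
print(list(ideal_learner(text)))   # [18, 6, 6, 6, 6] -> converges to 6Z
```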

    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.
    Comment: See http://www.jair.org/ for any accompanying file
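
    As a rough illustration of the query model named above, here is a minimal generic sketch of learning from equivalence queries; the hypothesis representation, the refinement rule and the toy threshold concept are illustrative assumptions, not the paper's algorithm for logic programs.

```python
def learn_from_equivalence_queries(oracle, initial_hypothesis, refine, max_queries=100):
    """Generic equivalence-query loop (sketch, not the paper's algorithm).

    oracle(h)  -> None if h is equivalent to the target concept,
                  otherwise a counterexample (x, label).
    refine(h, counterexample) -> a new hypothesis consistent with it.
    """
    h = initial_hypothesis
    for _ in range(max_queries):
        counterexample = oracle(h)
        if counterexample is None:
            return h                     # target identified exactly
        h = refine(h, counterexample)    # repair the hypothesis and ask again
    raise RuntimeError("query budget exhausted")

# Toy usage: learn a hypothetical threshold concept {0, ..., 12} over the integers.
target = set(range(0, 13))

def oracle(h):
    diff = target.symmetric_difference(h)
    if not diff:
        return None
    x = min(diff)
    return (x, x in target)

def refine(h, ce):
    x, label = ce
    return h | set(range(0, x + 1)) if label else {y for y in h if y < x}

print(learn_from_equivalence_queries(oracle, set(), refine) == target)  # True
```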

    Preface


    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning.
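
    To make the centre- versus tail-embedding distinction concrete, here is a minimal sketch with hypothetical toy grammars (not the experimental stimuli from the paper): a centre-embedding rule of the form S -> a S b nests each dependency inside the previous one, while a tail-embedding rule of the form S -> a b S attaches each dependency at the edge.

```python
# Hypothetical toy grammars, not the stimuli used in the paper.
# Centre-embedding: S -> a S b | a b   (dependencies nest, giving a^n b^n patterns)
# Tail-embedding:   S -> a b S | a b   (dependencies attach sequentially, giving (a b)^n)

def centre_embedded(depth):
    """String with `depth` nested long-distance dependencies."""
    return " ".join(["a"] * depth + ["b"] * depth)

def tail_embedded(depth):
    """String with `depth` dependencies attached one after another."""
    return " ".join(["a", "b"] * depth)

print(centre_embedded(3))  # a a a b b b   (outermost a matches outermost b)
print(tail_embedded(3))    # a b a b a b   (each a matches the adjacent b)
```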

    Logic and Learning

    The theory of first-order logic - or Model Theory - appears in few studies of learning and scientific discovery. We speculate about the reasons for this omission, and then argue for the utility of Model Theory in the analysis and design of automated systems of scientific discovery. One scientific task is treated from this perspective in detail, namely, concept discovery. Two formal paradigms bearing on this problem are presented and investigated using the tools of logical theory. One paradigm bears on PAC learning, the other on identification in the limit.

    Editors' Introduction to [Algorithmic Learning Theory: 21st International Conference, ALT 2010, Canberra, Australia, October 6-8, 2010. Proceedings]

    Learning theory is an active research area that incorporates ideas, problems, and techniques from a wide range of disciplines including statistics, artificial intelligence, information theory, pattern recognition, and theoretical computer science. The research reported at the 21st International Conference on Algorithmic Learning Theory (ALT 2010) ranges over areas such as query models, online learning, inductive inference, boosting, kernel methods, complexity and learning, reinforcement learning, unsupervised learning, grammatical inference, and algorithmic forecasting. In this introduction we give an overview of the five invited talks and the regular contributions of ALT 2010.

    Author index volume 261 (2001)


    Learning SECp Languages from Only Positive Data

    The field of Grammatical Inference provides a good theoretical framework for investigating a learning process. Formal results in this field can be relevant to the question of first language acquisition. However, Grammatical Inference studies have focused mainly on mathematical aspects, and have not exploited the linguistic relevance of their results. With this paper, we try to enrich Grammatical Inference studies with ideas from Linguistics. We propose a non-classical mechanism that has relevant linguistic and computational properties, and we study its learnability from positive data.
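
    As background on the positive-data setting, here is a minimal sketch of Gold-style identification in the limit from positive text, assuming a hypothetical toy class of languages L_n = {0, ..., n}; it illustrates the learning criterion only, not the SECp mechanism proposed in the paper.

```python
def learner(positive_text):
    """Identify L_n = {0, ..., n} in the limit from positive data only (toy sketch).

    The learner conjectures the smallest language in the class containing
    everything seen so far, i.e. L_max(seen).  Once the true maximum has
    appeared in the text, the conjecture never changes again, so the learner
    converges on every text that eventually presents all elements of L_n.
    """
    n = None
    for x in positive_text:
        if n is None or x > n:
            n = x              # mind change: enlarge the conjecture
        yield n                # current hypothesis: L_n

# Example: a text for L_5 (every element of {0, ..., 5} appears eventually).
text = [2, 0, 5, 3, 5, 1, 4]
print(list(learner(text)))     # conjectures 2, 2, 5, 5, 5, 5, 5 -> converges to L_5
```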