
    Feasible Learnability of Formal Grammars and the Theory of Natural Language Acquisition

    We propose to apply a complexity-theoretic notion of feasible learnability called polynomial learnability to the evaluation of grammatical formalisms for linguistic description. Polynomial learnability was originally defined by Valiant in the context of Boolean concept learning and subsequently generalized by Blumer et al. to infinitary domains. We give a clear, intuitive exposition of this notion of learnability and of which characteristics of a collection of languages do or do not support feasible learnability under this paradigm. In particular, we present a novel, nontrivial constraint on the degree of locality of grammars which allows a rich class of mildly context-sensitive languages to be feasibly learnable. We discuss possible implications of this observation for the theory of natural language acquisition.
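
    For readers unfamiliar with Valiant's framework, the following is a standard PAC-style statement of polynomial learnability; the notation is illustrative and not necessarily the paper's own.

```latex
% Standard PAC-style formulation of polynomial learnability (Valiant 1984,
% generalized by Blumer et al.); notation here is illustrative, not the paper's own.
A class of languages $\mathcal{L}$ is polynomially learnable if there exist an
algorithm $A$ and a polynomial $p$ such that, for every target $L \in \mathcal{L}$,
every distribution $D$ over strings, and every $\varepsilon, \delta \in (0,1)$,
when $A$ receives $m \ge p(1/\varepsilon,\, 1/\delta,\, |G_L|)$ examples drawn
i.i.d.\ from $D$ and labelled by membership in $L$, it outputs, with probability
at least $1-\delta$, a hypothesis $H$ whose error satisfies
\[
  \Pr_{x \sim D}\bigl[\, H(x) \neq L(x) \,\bigr] \;\le\; \varepsilon ,
\]
and $A$ runs in time polynomial in the same parameters, where $|G_L|$ denotes
the size of a smallest grammar for $L$.
```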

    Polynomial Learnability and Locality of Formal Grammars

    We apply a complexity-theoretic notion of feasible learnability called polynomial learnability to the evaluation of grammatical formalisms for linguistic description. We show that a novel, nontrivial constraint on the degree of locality of grammars allows not only context-free languages but also a rich class of mildly context-sensitive languages to be polynomially learnable. We discuss possible implications of this result for the theory of natural language acquisition.

    On empirical methodology, constraints, and hierarchy in artificial grammar learning

    This paper considers the AGL literature from a psycholinguistic perspective. It first presents a taxonomy of the experimental familiarization-test procedures used, followed by a consideration of shortcomings and potential improvements of the empirical methodology. It then reconsiders the issue of grammar learning from the point of view of acquiring constraints, rather than the traditional AGL approach of acquiring sets of rewrite rules; this is, in particular, a natural way of handling long-distance dependencies. The final section addresses an underdeveloped issue in the AGL literature, namely how to detect latent hierarchical structure in AGL response patterns.
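
    As a toy illustration of the contrast drawn here (not an example from the paper), the sketch below expresses the same long-distance dependency once as a set of rewrite rules and once as a single surface constraint; the grammar and alphabet are invented for the example.

```python
# Toy illustration (not from the paper): the same long-distance dependency
# expressed as rewrite rules versus as a single surface constraint.
import random
import re

# Rule-based view: a tiny CFG in which the a ... b dependency is threaded
# through the recursive rule itself.
RULES = {
    "S": [["a", "X", "b"]],
    "X": [["c", "X"], ["c"]],
}

def generate(symbol="S", max_depth=6):
    """Expand rewrite rules depth-first to produce one terminal string."""
    if symbol not in RULES:
        return [symbol]
    expansions = RULES[symbol]
    # Fall back to the last (shortest) expansion once the depth budget is spent.
    expansion = expansions[-1] if max_depth <= 0 else random.choice(expansions)
    out = []
    for sym in expansion:
        out.extend(generate(sym, max_depth - 1))
    return out

# Constraint-based view: one surface constraint states the long-distance
# dependency directly, with no commitment to intermediate derivational steps.
def satisfies_constraint(string):
    """An initial 'a' must be closed by a final 'b', whatever intervenes."""
    return bool(re.fullmatch(r"ac*b", string))

if __name__ == "__main__":
    s = "".join(generate())
    print(s, satisfies_constraint(s))  # every string the rules generate obeys the constraint
```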

    The ‘Shell Game’: Why Children Never Lose

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/73236/1/1467-9612.00013.pd

    On the format for parameters


    Towards a Robuster Interpretive Parsing

    The input data to grammar-learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution’s parameter T is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
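
    A minimal sketch of the idea as described in the abstract, assuming illustrative constraint weights and a hand-picked temperature schedule; it is not the authors' exact algorithm.

```python
# Sketch (not the authors' exact algorithm): candidates in the winner set are
# weighted by a Boltzmann distribution over their total weighted violations, and
# the learner uses the weighted mean violation profile instead of a single winner.
# Constraint weights and the temperature schedule below are illustrative assumptions.
import math

def boltzmann_mean_violations(candidates, weights, T):
    """candidates: violation vectors (one count per constraint).
    weights: current constraint weights. T: Boltzmann temperature."""
    energies = [sum(w * v for w, v in zip(weights, cand)) for cand in candidates]
    z = sum(math.exp(-e / T) for e in energies)
    probs = [math.exp(-e / T) / z for e in energies]
    return [sum(p * cand[k] for p, cand in zip(probs, candidates))
            for k in range(len(weights))]

if __name__ == "__main__":
    winner_set = [[1, 0, 2], [0, 1, 1]]      # two potential winners
    constraint_weights = [1.0, 1.0, 1.0]
    # As T is lowered, the distribution concentrates on the lowest-violation candidate.
    for T in (4.0, 1.0, 0.25):
        print(T, boltzmann_mean_violations(winner_set, constraint_weights, T))
```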

    An Introduction to Grammatical Inference for Linguists

    This paper is meant as an introductory guide to Grammatical Inference (GI), i.e., the study of machine learning of formal languages. It is designed for non-specialists in computer science who have a particular interest in language learning. It covers basic concepts and models developed in the framework of GI and tries to point out the relevance of these studies for natural language acquisition.
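
    As a concrete taste of what GI algorithms do, the sketch below builds a prefix-tree acceptor from positive examples, the usual starting point for state-merging learners such as RPNI; it is a generic illustration, not an algorithm taken from this paper.

```python
# Generic grammatical-inference starting point (not specific to this paper):
# build a prefix-tree acceptor (PTA) from positive examples. State-merging
# algorithms such as RPNI then generalize by merging compatible states.
def build_pta(positive_samples):
    """Return (transitions, accepting) for a tree-shaped DFA over the samples."""
    transitions = {}   # (state, symbol) -> state
    accepting = set()
    next_state = 1     # state 0 is the root (empty prefix)
    for word in positive_samples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, word):
    """Run the PTA on a word; reject as soon as a transition is missing."""
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

if __name__ == "__main__":
    trans, acc = build_pta(["ab", "abb", "aab"])
    print(accepts(trans, acc, "abb"))  # True: seen during construction
    print(accepts(trans, acc, "ba"))   # False: the PTA does not generalize yet
```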

    Under What Conditions Can Recursion be Learned? Effects of Starting Small in Artificial Grammar Learning of Center Embedded Structure

    It has been suggested that external and/or internal limitations may paradoxically lead to superior learning, i.e., the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right-branching and center-embedding, with recursive embedded clauses in fixed positions and of fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiments 3 and 4, we used a more complex center-embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented with an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge of simple units to more complex units when the training input ‘grew’ according to structural complexity than when it ‘grew’ according to string length. Overall, the results suggest that starting small confers an advantage for learning complex center-embedded structures when the input is organized according to structural complexity.
    This research was supported in part by a grant from the Human Frontiers Science Program (grant RGP0177/2001-B) to MHC and by the Netherlands Organization for Scientific Research (NWO) to FH.
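
    The toy generator below (not the actual experimental stimuli) illustrates the two grammar types and the ‘starting small’ ordering by embedding depth; the category labels and fragment lexicon are invented for the example.

```python
# Toy illustration of the manipulation described above (not the actual stimuli):
# right-branching strings close each A-B dependency immediately, while
# center-embedded strings (A^n B^n) close them in reverse order. Presenting
# depths 1, 2, 3 in turn corresponds to "starting small" by structural complexity.
import random

CATEGORIES = {"A": ["ka", "mo", "zu"], "B": ["pi", "le", "fa"]}  # invented lexicon

def right_branching(depth):
    """Depth n yields A B A B ... (each dependency closed immediately)."""
    return [c for _ in range(depth) for c in ("A", "B")]

def center_embedded(depth):
    """Depth n yields A^n B^n (dependencies closed in reverse order)."""
    return ["A"] * depth + ["B"] * depth

def lexicalize(skeleton):
    return " ".join(random.choice(CATEGORIES[c]) for c in skeleton)

if __name__ == "__main__":
    for d in (1, 2, 3):  # simple to complex
        print("right-branching, depth", d, ":", lexicalize(right_branching(d)))
        print("center-embedded, depth", d, ":", lexicalize(center_embedded(d)))
```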

    Language acquisition and universal grammar: a survey of recent research

    Open access. Dipartimento di discipline linguistiche, comunicative e dello spettacolo. Available for consultation at the department.
