
    Preliminary experiments on human sensitivity to rhythmic structure in a grammar with recursive self-similarity

    We present the first rhythm detection experiment using a Lindenmayer grammar, a self-similar recursive grammar previously shown to be learnable by adults from speech stimuli. Results show that, at the group level, learners were unable to correctly accept grammatical and reject ungrammatical strings, although five of the 40 participants were able to do so when given detailed instructions before the exposure phase.
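
    As a concrete illustration of recursive self-similarity, the following Python sketch implements Lindenmayer-style parallel rewriting. The Fibonacci grammar used here (A -> AB, B -> A) is a standard textbook L-system chosen for illustration; the actual grammar and its mapping onto rhythmic stimuli in the study above are not reproduced.

        # Minimal sketch of Lindenmayer-system rewriting (illustrative grammar).
        RULES = {"A": "AB", "B": "A"}

        def rewrite(axiom: str, generations: int) -> str:
            """Apply the rules in parallel to every symbol, repeatedly."""
            s = axiom
            for _ in range(generations):
                s = "".join(RULES.get(symbol, symbol) for symbol in s)
            return s

        if __name__ == "__main__":
            for g in range(6):
                print(g, rewrite("A", g))
            # 0 A
            # 1 AB
            # 2 ABA
            # 3 ABAAB ...
            # Each generation contains the previous one as a prefix: the
            # self-similar structure a learner would have to detect.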

    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes, even for dependencies spanning long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning, and they challenge existing theories and computational models of implicit learning.
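
    The branching distinctions can be made concrete with a small string generator. The Python sketch below contrasts centre-embedded strings, in which dependencies nest and must be closed in reverse order, with tail-embedded strings, in which each dependency closes before the next opens. The vocabulary and the a/b dependency pairing are illustrative assumptions, not the stimuli of the paper.

        # Hedged sketch: centre- vs tail-embedding over paired dependencies.
        import random

        PAIRS = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]

        def centre_embedded(depth: int) -> list[str]:
            """a_i a_j a_k ... b_k b_j b_i: nested, long-distance dependencies."""
            chosen = [random.choice(PAIRS) for _ in range(depth)]
            return [a for a, _ in chosen] + [b for _, b in reversed(chosen)]

        def tail_embedded(depth: int) -> list[str]:
            """a_i b_i a_j b_j ...: each dependency is local."""
            out: list[str] = []
            for _ in range(depth):
                a, b = random.choice(PAIRS)
                out += [a, b]
            return out

        if __name__ == "__main__":
            random.seed(0)
            print("centre:", " ".join(centre_embedded(3)))
            print("tail:  ", " ".join(tail_embedded(3)))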

    Blind insight: metacognitive discrimination despite chance task performance

    Blindsight and other examples of unconscious knowledge and perception demonstrate dissociations between judgment accuracy and metacognition: studies reveal that participants’ judgment accuracy can be above chance while their confidence ratings fail to discriminate right from wrong answers. Here, we demonstrated the opposite dissociation: a reliable relationship between confidence and judgment accuracy (demonstrating metacognition) despite judgment accuracy being no better than chance. We evaluated the judgments of 450 participants who completed an artificial grammar learning (AGL) task. For each trial, participants decided whether a stimulus conformed to a given set of rules and rated their confidence in that judgment. We identified participants who performed at chance on the discrimination task, using a subset of their responses, and then assessed the accuracy and the confidence-accuracy relationship of their remaining responses. Analyses revealed above-chance metacognition among participants who did not exhibit decision accuracy. This important new phenomenon, which we term blind insight, poses critical challenges to prevailing models of metacognition grounded in signal detection theory.
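
    The core analysis, metacognitive discrimination among participants whose first-order accuracy is at chance, can be sketched as follows. The simulated data and the nonparametric type-2 AUROC measure used below are assumptions for illustration; the paper frames its analyses in terms of signal detection theory.

        # Sketch of a confidence-accuracy (type 2) analysis on simulated data.
        import random

        def type2_auroc(correct: list[bool], confidence: list[float]) -> float:
            """Probability that a random correct trial carries higher
            confidence than a random incorrect one (ties count half)."""
            hi = [c for c, ok in zip(confidence, correct) if ok]
            lo = [c for c, ok in zip(confidence, correct) if not ok]
            wins = sum((h > m) + 0.5 * (h == m) for h in hi for m in lo)
            return wins / (len(hi) * len(lo))

        if __name__ == "__main__":
            random.seed(1)
            # A "blind insight" pattern: judgments at chance, yet confidence
            # weakly tracks correctness anyway.
            correct = [random.random() < 0.5 for _ in range(200)]
            confidence = [random.gauss(1.0 if ok else 0.0, 2.0) for ok in correct]
            print("accuracy:", sum(correct) / len(correct))           # ~0.5
            print("type-2 AUROC:", type2_auroc(correct, confidence))  # above 0.5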

    GenERRate: generating errors for use in grammatical error detection

    This paper explores the issue of automatically generated ungrammatical data and its use in error detection, with a focus on the task of classifying a sentence as grammatical or ungrammatical. We present an error generation tool called GenERRate and show how it can be used to improve the performance of a classifier on learner data. We also describe initial attempts to replicate Cambridge Learner Corpus errors using GenERRate.
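
    As a rough illustration of the idea, the Python sketch below derives ungrammatical variants of a sentence by applying simple word-level corruptions. The particular operation set (deletion, movement, duplication) and the absence of part-of-speech conditioning are simplifications of this sketch, not a description of GenERRate itself.

        # Toy error-injection sketch (not GenERRate's actual specification).
        import random

        def inject_error(sentence: str, rng: random.Random) -> str:
            words = sentence.split()
            op = rng.choice(["delete", "move", "duplicate"])
            i = rng.randrange(len(words))
            if op == "delete" and len(words) > 1:
                del words[i]                      # drop a word
            elif op == "move" and len(words) > 1:
                w = words.pop(i)                  # displace a word
                words.insert(rng.randrange(len(words) + 1), w)
            else:
                words.insert(i, words[i])         # duplicate a word
            return " ".join(words)

        if __name__ == "__main__":
            rng = random.Random(42)
            for _ in range(3):
                print(inject_error("the committee has approved the proposal", rng))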

    Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    An automatic word classification system has been designed that processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. The resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique n-bit numbers whose most significant bit-patterns incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class-based interpolated language models have been constructed to exploit the extra information supplied by the classifications, and some experiments have shown that the new models improve performance.
    Comment: 17-page paper. Self-extracting PostScript file.
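
    The clustering criterion can be illustrated directly. The sketch below scores a candidate word-to-class assignment by the mutual information between adjacent class labels in a toy corpus; the top-down binary splitting loop and the structural-tag bit encoding of the actual system are not reproduced, and the corpus and the two candidate clusterings are invented for illustration.

        # Class-bigram mutual information for a candidate clustering (sketch).
        import math
        from collections import Counter

        def class_mi(corpus: list[str], word2class: dict[str, int]) -> float:
            """Sum over class bigrams of p(c1,c2) * log2(p(c1,c2)/(p(c1)p(c2)))."""
            classes = [word2class[w] for w in corpus]
            bigrams = Counter(zip(classes, classes[1:]))
            unigrams = Counter(classes)
            n_bi, n_uni = sum(bigrams.values()), sum(unigrams.values())
            mi = 0.0
            for (c1, c2), n in bigrams.items():
                p12 = n / n_bi
                p1, p2 = unigrams[c1] / n_uni, unigrams[c2] / n_uni
                mi += p12 * math.log2(p12 / (p1 * p2))
            return mi

        if __name__ == "__main__":
            corpus = "the cat sat on the mat the dog sat on the rug".split()
            nouns_vs_rest = {w: int(w in {"cat", "mat", "dog", "rug"}) for w in corpus}
            one_class = {w: 0 for w in corpus}
            print("nouns vs rest:", class_mi(corpus, nouns_vs_rest))  # higher
            print("single class: ", class_mi(corpus, one_class))      # 0.0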

    Production and processing asymmetries in the acquisition of tense morphology by sequential bilingual children

    This study investigates the production and on-line processing of English tense morphemes by sequential bilingual (L2) Turkish-speaking children with more than three years of exposure to English. Thirty-nine 6- to 9-year-old L2 children and 28 typically developing age-matched monolingual (L1) children were administered the production component for third person -s and past tense of the Test for Early Grammatical Impairment (Rice & Wexler, 1996) and participated in an on-line word-monitoring task involving grammatical and ungrammatical sentences with presence/omission of tense (third person -s, past tense -ed) and non-tense (progressive -ing, possessive 's) morphemes. The L2 children's performance on the on-line task was compared to that of children with Specific Language Impairment (SLI) in Montgomery & Leonard (1998, 2006) to ascertain similarities and differences between the two populations. Results showed that the L2 children were sensitive to the ungrammaticality induced by the omission of tense morphemes, despite variable production. This reinforces the claim that underlying syntactic representations are intact in child L2 acquisition despite non-target-like production (Haznedar & Schwartz, 1997).

    An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities

    We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: (a) probabilities of successive prefixes being generated by the grammar; (b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; (c) the most likely (Viterbi) parse of the string; (d) the posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Quantities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and it combines the computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs.
    Comment: 45 pages. Slightly shortened version to appear in Computational Linguistics 2
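
    Of the quantities above, (b) is the easiest to sketch compactly. The Python code below computes the probability that a nonterminal generates a substring with the classic inside (CKY-style) dynamic program over a grammar in Chomsky normal form. Note that this is not the paper's Earley-based algorithm, which works left to right, requires no normal form, and also yields prefix probabilities; the toy grammar here is an assumption for illustration.

        # Inside probabilities for a CNF stochastic context-free grammar.
        from collections import defaultdict

        BINARY = [("S", ("NP", "VP"), 1.0), ("VP", ("V", "NP"), 1.0)]
        LEXICAL = [("NP", "she", 0.4), ("NP", "fish", 0.6), ("V", "eats", 1.0)]

        def inside(words: list[str]) -> dict:
            """chart[i, j, A] = P(A =>* words[i:j]) under the toy grammar."""
            n = len(words)
            chart = defaultdict(float)
            for i, w in enumerate(words):
                for a, word, p in LEXICAL:
                    if word == w:
                        chart[i, i + 1, a] += p
            for span in range(2, n + 1):
                for i in range(n - span + 1):
                    j = i + span
                    for k in range(i + 1, j):        # split point
                        for a, (b, c), p in BINARY:
                            chart[i, j, a] += p * chart[i, k, b] * chart[k, j, c]
            return chart

        if __name__ == "__main__":
            words = "she eats fish".split()
            print("P(S =>* sentence) =", inside(words)[0, len(words), "S"])  # 0.24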