    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental to the description of linguistic syntax. However, most artificial grammar learning experiments have explored the learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning, and they challenge existing theories and computational models of implicit learning.
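    To make the structural distinction concrete, the short sketch below (purely illustrative, not taken from the paper; the word classes, pairings and depth parameter are assumptions) generates strings from a toy recursive grammar in centre-embedded and tail-embedded form, showing how centre-embedding nests its long-distance dependencies (A1 A2 B2 B1) while tail-embedding closes each dependency locally (A1 B1 A2 B2).

        # Illustrative sketch only: a toy two-class recursive grammar contrasting
        # centre-embedding (nested dependencies) with tail-embedding (right-
        # branching, locally closed dependencies). Word classes and pairings are
        # invented and are not the stimuli used in the paper.
        import random

        A_WORDS = ["ba", "bi", "bu"]          # hypothetical class-A words
        B_WORDS = ["ko", "ke", "ku"]          # hypothetical class-B words
        PAIRS = dict(zip(A_WORDS, B_WORDS))   # each A word depends on one B word

        def centre_embedded(depth):
            """A1 A2 ... An Bn ... B2 B1 -- the dependencies are nested."""
            a_seq = [random.choice(A_WORDS) for _ in range(depth)]
            return a_seq + [PAIRS[a] for a in reversed(a_seq)]

        def tail_embedded(depth):
            """A1 B1 A2 B2 ... -- each dependency closes before the next opens."""
            out = []
            for _ in range(depth):
                a = random.choice(A_WORDS)
                out += [a, PAIRS[a]]
            return out

        print("centre:", " ".join(centre_embedded(3)))
        print("tail:  ", " ".join(tail_embedded(3)))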

    Preliminary experiments on human sensitivity to rhythmic structure in a grammar with recursive self-similarity

    We present the first rhythm detection experiment using a Lindenmayer grammar, a self-similar recursive grammar previously shown to be learnable by adults from speech stimuli. Results show that learners were unable to correctly accept or reject grammatical and ungrammatical strings at the group level, although five of the 40 participants were able to do so when given detailed instructions before the exposure phase.
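    For readers unfamiliar with Lindenmayer grammars, the sketch below expands the textbook Fibonacci L-system (rules A→AB, B→A) to show what recursive self-similarity looks like in string form; this particular rule set is an assumption chosen for illustration and is not necessarily the grammar used in the experiment.

        # Illustrative sketch: parallel rewriting in a self-similar Lindenmayer
        # grammar. The Fibonacci rule set is a textbook example, not necessarily
        # the grammar used in the experiment.
        RULES = {"A": "AB", "B": "A"}

        def expand(axiom, generations, rules=RULES):
            """Rewrite every symbol in parallel, `generations` times."""
            s = axiom
            for _ in range(generations):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        for g in range(5):
            print(g, expand("A", g))  # A, AB, ABA, ABAAB, ABAABABA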

    The Effect of Inquiry Learning on the Academic Achievement and Bilingual Verbal Cognition of Young Bilingual Students

    The issues that prompt this study are based on current research indicating the positive effects of inquiry learning on the cognitive development of children. The purpose of this case study was to understand the effects of inquiry learning on the academic achievement and bilingual verbal cognition of 5th grade bilingual students in a French/English dual immersion program. The treatment group of students completed research projects through a guided inquiry learning approach, while the control group followed a traditional problem-solving research approach. Empirical findings showed a significant mean increase in mathematics achievement and bilingual verbal cognitive ability, as well as higher motivation to learn and increased self-efficacy, in the treatment group relative to the control group.

    Learning Domain-Specific Word Embeddings from Sparse Cybersecurity Texts

    Word embedding is a Natural Language Processing (NLP) technique that automatically maps words from a vocabulary to vectors of real numbers in an embedding space. It has been widely used in recent years to boost the performance of a variety of NLP tasks such as Named Entity Recognition, Syntactic Parsing and Sentiment Analysis. Classic word embedding methods such as Word2Vec and GloVe work well when they are given a large text corpus. When the input texts are sparse, as in many specialized domains (e.g., cybersecurity), these methods often fail to produce high-quality vectors. In this paper, we describe a novel method to train domain-specific word embeddings from sparse texts. In addition to domain texts, our method also leverages diverse types of domain knowledge such as domain vocabulary and semantic relations. Specifically, we first propose a general framework to encode diverse types of domain knowledge as text annotations. Then we develop a novel Word Annotation Embedding (WAE) algorithm to incorporate diverse types of text annotations in word embedding. We have evaluated our method on two cybersecurity text corpora: a malware description corpus and a Common Vulnerability and Exposure (CVE) corpus. Our evaluation results demonstrate the effectiveness of our method in learning domain-specific word embeddings.
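    As a rough illustration of the general idea of turning domain knowledge into text annotations that are trained jointly with the words, the sketch below interleaves hypothetical annotation tokens with a toy cybersecurity corpus and trains an ordinary skip-gram model on the result; the tag names, the corpus, and the use of gensim (assumed version 4 or later) are all assumptions, and this is not the authors' WAE algorithm.

        # Illustrative sketch only: approximate the idea of encoding domain
        # knowledge as text annotations by interleaving annotation tokens with
        # the words and training a standard skip-gram model. The annotation
        # scheme and corpus are invented; this is not the authors' WAE algorithm.
        from gensim.models import Word2Vec  # assumes gensim >= 4.0

        # Toy domain corpus (hypothetical malware/CVE-style sentences).
        sentences = [
            ["the", "trojan", "exfiltrates", "credentials", "via", "http"],
            ["the", "worm", "exploits", "a", "buffer", "overflow"],
        ]

        # Hypothetical domain knowledge: vocabulary tags for domain terms.
        domain_vocab = {"trojan": "ANN:MALWARE", "worm": "ANN:MALWARE",
                        "credentials": "ANN:ASSET", "overflow": "ANN:VULN"}

        def annotate(tokens):
            """Insert an annotation token right after each word it describes."""
            out = []
            for tok in tokens:
                out.append(tok)
                if tok in domain_vocab:
                    out.append(domain_vocab[tok])
            return out

        annotated = [annotate(s) for s in sentences]

        # Annotation tokens now share training contexts with the words they tag.
        model = Word2Vec(annotated, vector_size=50, window=3, min_count=1,
                         sg=1, epochs=50)
        print(model.wv.most_similar("trojan", topn=3))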

    Carnap: an Open Framework for Formal Reasoning in the Browser

    This paper presents an overview of Carnap, a free and open framework for the development of formal reasoning applications. Carnap’s design emphasizes flexibility, extensibility, and rapid prototyping. Carnap-based applications are written in Haskell, but can be compiled to JavaScript to run in standard web browsers. This combination of features makes Carnap ideally suited for educational applications, where ease of use is crucial for students and adaptability to different teaching strategies and classroom needs is crucial for instructors. The paper describes Carnap’s implementation, along with its current and projected pedagogical applications.
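    As a loose illustration of the kind of exercise such a framework can check automatically, the generic sketch below brute-forces a truth table for a propositional formula; it is written in Python for consistency with the other sketches here and does not use Carnap's API or reflect its Haskell implementation.

        # Illustrative sketch only: a brute-force truth-table check for
        # propositional tautologies, the sort of exercise a formal-reasoning
        # tool can grade automatically. Generic code; not Carnap's API.
        from itertools import product

        def is_tautology(formula, variables):
            """True if `formula` (a function of booleans) holds under every
            assignment to `variables`."""
            return all(formula(**dict(zip(variables, values)))
                       for values in product([False, True], repeat=len(variables)))

        # Peirce's law ((p -> q) -> p) -> p, with a -> b encoded as (not a) or b.
        peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p
        print(is_tautology(peirce, ["p", "q"]))  # True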