
    Towards an Indexical Model of Situated Language Comprehension for Cognitive Agents in Physical Worlds

    We propose a computational model of situated language comprehension based on the Indexical Hypothesis that generates meaning representations by translating amodal linguistic symbols into modal representations of beliefs, knowledge, and experience external to the linguistic system. This Indexical Model incorporates multiple information sources, including perceptions, domain knowledge, and short-term and long-term experiences, during comprehension. We show that exploiting diverse information sources can alleviate ambiguities that arise from contextual use of underspecified referring expressions and unexpressed argument alternations of verbs. The model is being used to support linguistic interactions in Rosie, an agent implemented in Soar that learns from instruction. Comment: Advances in Cognitive Systems 3 (2014).

    Segment Grammar: A formalism for incremental sentence generation

    Incremental sentence generation imposes special constraints on the representation of the grammar and the design of the formulator (the module which is responsible for constructing the syntactic and morphological structure). In the model of natural speech production presented here, a formalism called Segment Grammar is used for the representation of linguistic knowledge. We give a definition of this formalism and present a formulator design which relies on it. Next, we present an object-oriented implementation of Segment Grammar. Finally, we compare Segment Grammar with other formalisms.

    Bidirectional grammatical encoding using synchronous tree adjoining grammar


    More is more in language learning: reconsidering the Less-is-More hypothesis

    The Less-is-More hypothesis was proposed to explain age-of-acquisition effects in first language (L1) acquisition and second language (L2) attainment. We scrutinize different renditions of the hypothesis by examining how learning outcomes are affected by (1) limited cognitive capacity, (2) reduced interference resulting from less prior knowledge, and (3) simplified language input. While there is little-to-no evidence of benefits of limited cognitive capacity, there is ample support for a More-is-More account linking enhanced capacity with better L1- and L2-learning outcomes, and reduced capacity with childhood language disorders. Instead, reduced prior knowledge (relative to adults) may afford children greater flexibility in inductive inference; this contradicts the idea that children benefit from a more constrained hypothesis space. Finally, studies of child-directed speech (CDS) confirm benefits from less complex input at early stages, but also emphasize how greater lexical and syntactic complexity of the input confers benefits in L1 attainment.

    Completability vs (In)completeness

    In everyday conversation, no notion of “complete sentence” is required for syntactic licensing. However, so-called “fragmentary”, “incomplete”, and abandoned utterances are problematic for standard formalisms. When contextualised, such data show that (a) non-sentential utterances are adequate to underpin agent coordination, while (b) all linguistic dependencies can be systematically distributed across participants and turns. Standard models have problems accounting for such data because their notions of ‘constituency’ and ‘syntactic domain’ are independent of performance considerations. Concomitantly, we argue that no notion of “full proposition” or encoded speech act is necessary for successful interaction: strings, contents, and joint actions emerge in conversation without any single participant having envisaged in advance the outcome of their own or their interlocutors’ actions. Nonetheless, morphosyntactic and semantic licensing mechanisms need to apply incrementally and subsententially. We argue that, while a representational level of abstract syntax, divorced from conceptual structure and physical action, impedes natural accounts of subsentential coordination phenomena, a view of grammar as a “skill” employing domain-general mechanisms, rather than fixed form-meaning mappings, is needed instead. We provide a sketch of a predictive and incremental architecture (Dynamic Syntax) within which underspecification and time-relative update of meanings and utterances constitute the sole concept of “syntax”.

    Incremental Centering and Center Ambiguity

    In this paper, we present a model of anaphor resolution within the framework of the centering model. The consideration of an incremental processing mode introduces the need to manage structural ambiguity at the center level. Hence, the centering framework is further refined to account for local and global parsing ambiguities which propagate up to the level of center representations, yielding moderately adapted data structures for the centering algorithm. Comment: 6 pages, uuencoded gzipped PS file (see also Technical Report at: http://www.coling.uni-freiburg.de/public/papers/cogsci96-center.ps.gz).
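    The centering framework this abstract builds on classifies the transition between consecutive utterances by comparing backward-looking and preferred centers. A minimal sketch of that standard transition typology (from the centering literature, e.g. Grosz, Joshi & Weinstein; not code from this paper, and the entity names below are illustrative):

```python
def classify_transition(cb_prev, cb_curr, cp_curr):
    """Classify the centering transition between two utterances.

    cb_prev: backward-looking center of the previous utterance (None if undefined)
    cb_curr: backward-looking center of the current utterance
    cp_curr: preferred (highest-ranked forward-looking) center of the current utterance
    """
    if cb_prev is None or cb_curr == cb_prev:
        # Backward-looking center is retained across utterances.
        return "CONTINUE" if cb_curr == cp_curr else "RETAIN"
    else:
        # Backward-looking center has changed.
        return "SMOOTH-SHIFT" if cb_curr == cp_curr else "ROUGH-SHIFT"

# Example: "Mary saw John." -> "She waved to him." keeps Mary as Cb and Cp.
print(classify_transition("mary", "mary", "mary"))  # CONTINUE
```

    The incremental refinement the paper describes would additionally track alternative center assignments for each parsing ambiguity, but the transition classification itself stays as above.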

    Planning ahead: How recent experience with structures and words changes the scope of linguistic planning

    The scope of linguistic planning, i.e., the amount of linguistic information that speakers prepare in advance for an utterance they are about to produce, is highly variable. Distinguishing between possible sources of this variability provides a way to discriminate between production accounts that assume structurally incremental and lexically incremental sentence planning. Two picture-naming experiments evaluated changes in speakers’ planning scope as a function of experience with message structure, sentence structure, and lexical items. On target trials participants produced sentences beginning with two semantically related or unrelated objects in the same complex noun phrase. To manipulate familiarity with sentence structure, target displays were preceded by prime displays that elicited the same or different sentence structures. To manipulate ease of lexical retrieval, target sentences began either with the higher-frequency or lower-frequency member of each semantic pair. The results show that repetition of sentence structure can extend speakers’ scope of planning from one to two words in a complex noun phrase, as indexed by the presence of semantic interference in structurally primed sentences beginning with easily retrievable words. Changes in planning scope tied to experience with phrasal structures favor production accounts assuming structural planning in early sentence formulation.

    A Computational Cognitive Model of Syntactic Priming

    The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively. Our model simulates incremental language production, and in a series of modeling studies we show that it accounts for (a) the inverse frequency interaction; (b) the absence of a decay in long-term priming; and (c) the cumulativity of long-term adaptation. The model also explains the lexical boost effect and the fact that it applies only to short-term priming. We also present corpus data that verify a prediction of the model, namely that the lexical boost affects all lexical material, rather than just heads.
    Keywords: syntactic priming, adaptation, cognitive architectures, ACT-R, categorial grammar, incrementality
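    The base-level learning mechanism this abstract credits for long-term adaptation follows the standard ACT-R base-level activation equation, B = ln(Σ_j t_j^(-d)), where t_j are the times since each past presentation of a chunk and d is a decay parameter (conventionally 0.5). A minimal sketch of that equation (the standard ACT-R formula, not code from the model described; function and parameter names are illustrative):

```python
import math

def base_level_activation(presentation_times, now, decay=0.5):
    """ACT-R base-level learning: B = ln(sum_j (now - t_j) ** -decay).

    Each past presentation of a chunk contributes activation that decays
    as a power function of its age; contributions from many presentations
    accumulate, which is what yields cumulative long-term adaptation.
    """
    return math.log(sum((now - t) ** -decay
                        for t in presentation_times if t < now))

# A single recent exposure produces activation that fades with time,
# while repeated exposures accumulate into a lasting boost.
recent = base_level_activation([99.0], now=100.0)
faded = base_level_activation([0.0], now=100.0)
```

    Short-term priming would instead correspond to spreading activation from lexical material still active in the current context, which is why, on this account, the lexical boost decays quickly while structural adaptation persists.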