
    Children's preference for HAS and LOCATED relations: A word learning bias for noun–noun compounds

    The present study investigates children's bias when interpreting novel noun–noun compounds (e.g. kig donka) that refer to combinations of novel objects (kig and donka). More specifically, it investigates children's understanding of modifier–head relations of the compounds and their preference for HAS or LOCATED relations (e.g. a donka that HAS a kig or a donka that is LOCATED near a kig) rather than a FOR relation (e.g. a donka that is used FOR kigs). In a forced-choice paradigm, two- and three-year-olds preferred interpretations with HAS/LOCATED relations, while five-year-olds and adults showed no preference for either interpretation. We discuss possible explanations for this preference and its relation to another word learning bias that is based on perceptual features of the referent objects, i.e. the shape bias. We argue that children initially focus on perceptual stability rather than purely conceptual stability when interpreting the meaning of nouns.

    Bidirectional syntactic priming across cognitive domains: from arithmetic to language and back

    Scheepers et al. (2011) showed that the structure of a correctly solved mathematical equation affects how people subsequently complete sentences containing high vs. low relative-clause attachment ambiguities. Here we investigated whether such effects generalise to different structures and tasks, and importantly, whether they also hold in the reverse direction (i.e., from linguistic to mathematical processing). In a questionnaire-based experiment, participants had to solve structurally left- or right-branching equations (e.g., 5 × 2 + 7 versus 5 + 2 × 7) and to provide sensicality ratings for structurally left- or right-branching adjective-noun-noun compounds (e.g., alien monster movie versus lengthy monster movie). In the first version of the experiment, the equations were used as primes and the linguistic expressions as targets (investigating structural priming from maths to language). In the second version, the order was reversed (language-to-maths priming). Both versions of the experiment showed clear structural priming effects, conceptually replicating and extending the findings from Scheepers et al. (2011). Most crucially, the observed bi-directionality of cross-domain structural priming strongly supports the notion of shared syntactic representations (or recursive procedures to generate and parse them) between arithmetic and language.
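    The left- vs right-branching contrast in the example equations can be made concrete with a small sketch (illustrative only; the parse shapes and symbols are assumptions, not the study's materials). Under standard precedence, × binds tighter than +, so 5 × 2 + 7 groups to the left and 5 + 2 × 7 groups to the right:

    ```python
    # Sketch of the two branching structures, assuming standard operator
    # precedence (x binds tighter than +). Not the authors' stimuli code.

    def structure(expr):
        """Return a nested-tuple parse of 'a op b op c' with x > + precedence."""
        a, op1, b, op2, c = expr.split()
        if op1 == "x":            # a x b + c  ->  ((a x b) + c): left-branching
            return ((a, "x", b), "+", c)
        else:                     # a + b x c  ->  (a + (b x c)): right-branching
            return (a, "+", (b, "x", c))

    left = structure("5 x 2 + 7")    # (("5", "x", "2"), "+", "7")
    right = structure("5 + 2 x 7")   # ("5", "+", ("2", "x", "7"))
    ```

    The nesting, not the arithmetic result, is what the priming manipulation targets: the same three operands yield a left- or right-embedded subtree depending on operator placement.
    
    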

    What's in a compound? Review article on Lieber and Štekauer (eds) 2009. 'The Oxford Handbook of Compounding'

    The Oxford Handbook of Compounding surveys a variety of theoretical and descriptive issues, presenting overviews of compounding in a number of frameworks and sketches of compounding in a number of languages. Much of the book deals with Germanic noun–noun compounding. I take up some of the theoretical questions raised surrounding such constructions, in particular, the notion of attributive modification in noun-headed compounds. I focus on two issues. The first is the semantic relation between the head noun and its nominal modifier. Several authors repeat the argument that there is a small(-ish) fixed number of general semantic relations in noun–noun compounds ('Lees's solution'), but I argue that the correct way to look at such compounds is what I call 'Downing's solution', in which we assume that the relation is specified pragmatically, and hence could be any relation at all. The second issue is the way that adjectives modify nouns inside compounds. Although there are languages in which compounded adjectives modify just as they do in phrases (Chukchee, Arleplog Swedish), in general the adjective has a classifier role and not that of a compositional attributive modifier. Thus, even if an English (or German) adjective–noun compound looks compositional, it isn't.

    A probabilistic framework for analysing the compositionality of conceptual combinations

    Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilised in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, we suggest that the distinction between these is contextually sensitive. Compositionality is equated with a joint probability distribution modeling how the constituent concepts in the combination are interpreted. Marginal selectivity is introduced as a pivotal probabilistic constraint for the application of the Bell/CH and CHSH systems of inequalities. Non-compositionality is equated with a failure of marginal selectivity, or violation of either system of inequalities in the presence of marginal selectivity. This means that the conceptual combination cannot be modeled in a joint probability distribution, the variables of which correspond to how the constituent concepts are being interpreted. The formal analysis methods are demonstrated by applying them to an empirical illustration of twenty-four non-lexicalised conceptual combinations.
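    The CHSH part of this framework can be sketched in a few lines (a toy illustration under assumed coding, not the authors' implementation). Interpretation outcomes are coded ±1, each of the four context pairings gets a distribution over outcome pairs, and any single joint (compositional) model must keep the CHSH statistic within ±2:

    ```python
    # Toy CHSH check. p[(a, b)] is a hypothetical probability of outcomes
    # a, b in {+1, -1} for one pairing of interpretation contexts; four such
    # distributions, one per pairing, feed the CHSH statistic.

    def correlation(p):
        """Expectation E[A*B] under a distribution p over outcome pairs."""
        return sum(a * b * prob for (a, b), prob in p.items())

    def chsh(p11, p12, p21, p22):
        """CHSH statistic; a compositional (joint) model satisfies |S| <= 2."""
        return (correlation(p11) + correlation(p12)
                + correlation(p21) - correlation(p22))

    corr = {(1, 1): 0.5, (-1, -1): 0.5}   # perfectly correlated outcomes
    anti = {(1, -1): 0.5, (-1, 1): 0.5}   # perfectly anticorrelated outcomes

    within = chsh(corr, corr, corr, corr)      # 2.0: bound satisfied
    violating = chsh(corr, corr, corr, anti)   # 4.0: no joint model exists
    ```

    The violating pattern (correlated in three pairings, anticorrelated in the fourth) is exactly the kind of data that cannot arise from one joint distribution over fixed interpretations, which is the paper's criterion for non-compositionality.
    
    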

    A comparison of parsing technologies for the biomedical domain

    This paper reports on a number of experiments which are designed to investigate the extent to which current NLP resources are able to syntactically and semantically analyse biomedical text. We address two tasks: parsing a real corpus with a hand-built wide-coverage grammar, producing both syntactic analyses and logical forms; and automatically computing the interpretation of compound nouns where the head is a nominalisation (e.g., hospital arrival means an arrival at hospital, while patient arrival means an arrival of a patient). For the former task we demonstrate that flexible and yet constrained 'preprocessing' techniques are crucial to success: these enable us to use part-of-speech tags to overcome inadequate lexical coverage, and to 'package up' complex technical expressions prior to parsing so that they are blocked from creating misleading amounts of syntactic complexity. We argue that the XML-processing paradigm is ideally suited for automatically preparing the corpus for parsing. For the latter task, we compute interpretations of the compounds by exploiting surface cues and meaning paraphrases, which in turn are extracted from the parsed corpus. This provides an empirical setting in which we can compare the utility of a comparatively deep parser vs. a shallow one, exploring the trade-off between resolving attachment ambiguities on the one hand and generating errors in the parses on the other. We demonstrate that a model of the meaning of compound nominalisations is achievable with the aid of current broad-coverage parsers.
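    The paraphrase-cue idea for the second task can be sketched as a toy (the cue counts here are invented; the real system extracts such cues from the parsed corpus): count which preposition most often links the modifier to the nominalisation's verbal form, then emit that paraphrase.

    ```python
    # Toy paraphrase-cue interpreter for compound nominalisations headed by
    # "arrival". The (modifier, preposition) cues below are hypothetical;
    # in the paper they come from a parsed corpus.

    from collections import Counter

    cues = [("hospital", "at"), ("hospital", "at"), ("patient", "of"),
            ("patient", "of"), ("patient", "of"), ("hospital", "in")]

    def interpret(modifier):
        """Pick the most frequent preposition cue for this modifier."""
        counts = Counter(prep for mod, prep in cues if mod == modifier)
        prep, _ = counts.most_common(1)[0]
        return f"arrival {prep} {modifier}"

    print(interpret("hospital"))  # arrival at hospital
    print(interpret("patient"))   # arrival of patient
    ```

    Majority voting over corpus-derived cues is only the simplest choice here; the paper's point is that even shallow parses supply enough such cues to make the model workable.
    
    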

    The linguistics of gender

    This chapter explores grammatical gender as a linguistic phenomenon. First, I define gender in terms of agreement, and look at the parts of speech that can take gender agreement. Because it relates to assumptions underlying much psycholinguistic gender research, I also examine the reasons why gender systems are thought to emerge, change, and disappear. Then, I describe the gender system of Dutch. The frequent confusion about the number of genders in Dutch will be resolved by looking at the history of the system, and the role of pronominal reference therein. In addition, I report on three lexical-statistical analyses of the distribution of genders in the language. After having dealt with Dutch, I look at whether the genders of Dutch and other languages are more or less randomly assigned, or whether there is some system to it. In contrast to what many people think, regularities do indeed exist. Native speakers could in principle exploit such regularities to compute rather than memorize gender, at least in part. Although this should be taken into account as a possibility, I will also argue that it is by no means a necessary implication.

    MultiMWE: building a multi-lingual multi-word expression (MWE) parallel corpora

    Multi-word expressions (MWEs) are a hot topic in natural language processing (NLP) research, including MWE detection, MWE decomposition, and the exploitation of MWEs in other NLP fields such as Machine Translation. However, the availability of bilingual or multi-lingual MWE corpora is very limited. The only bilingual MWE corpus that we are aware of is from the PARSEME (PARSing and Multi-word Expressions) EU project. This is a small collection of only 871 pairs of English-German MWEs. In this paper, we present multi-lingual and bilingual MWE corpora that we have extracted from root parallel corpora. Our collections are 3,159,226 and 143,042 bilingual MWE pairs for German-English and Chinese-English respectively after filtering. We examine the quality of these extracted bilingual MWEs in MT experiments. Our initial experiments applying MWEs in MT show improved translation performance on MWE terms in qualitative analysis and better general evaluation scores in quantitative analysis, on both German-English and Chinese-English language pairs. We follow a standard experimental pipeline to create our MultiMWE corpora, which are available online. Researchers can use this free corpus for their own models or use them in a knowledge base as model features.
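    The filtering step mentioned above can be illustrated with a minimal sketch (assumed heuristic, not the MultiMWE pipeline): keep only candidate bilingual MWE pairs whose alignment recurs often enough, which drops one-off noisy alignments.

    ```python
    # Toy frequency filter over candidate bilingual MWE pairs. The candidate
    # list is invented for illustration; in practice such pairs come from
    # word-aligned parallel corpora.

    from collections import Counter

    candidates = [
        ("multi-word expression", "Mehrwortausdruck"),
        ("multi-word expression", "Mehrwortausdruck"),
        ("machine translation", "maschinelle Übersetzung"),
        ("machine translation", "maschinelle Übersetzung"),
        ("machine translation", "Übersetzung"),   # noisy partial alignment
    ]

    def filter_pairs(pairs, min_count=2):
        """Keep pairs seen at least min_count times, with their counts."""
        counts = Counter(pairs)
        return {pair: n for pair, n in counts.items() if n >= min_count}

    kept = filter_pairs(candidates)
    # keeps the two consistent pairs; the singleton noisy pair is dropped
    ```

    A count threshold is only one of several plausible filters (alignment probability and length ratios are others); the abstract does not specify which the authors used, so this stands purely as an illustration of the extract-then-filter shape of the pipeline.
    
    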